| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
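A minimal sketch of loading rows with this schema, assuming the Hugging Face `datasets` library and that the dump corresponds to the `rasdani/github-patches` dataset named in the `source` column below (the split name and field access are assumptions, not taken from this page):

```python
# Sketch only: dataset id inferred from the `source` column, split name assumed.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])
print(row["prompt"][:200])        # issue text plus the relevant file contents
print(row["golden_diff"][:200])   # reference patch in `git diff` format
```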
gh_patches_debug_14557 | rasdani/github-patches | git_diff | Qiskit__qiskit-10532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scientific notation in QASM output causes export to other platform to fail
### Environment
- **Qiskit version**: {'qiskit-terra': '0.24.0', 'qiskit-aer': '0.12.0', 'qiskit-ignis': '0.7.1', 'qiskit-ibmq-provider': '0.20.2', 'qiskit': '0.43.0', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
- **Python version**: 3.10.6
- **Operating system**: Linux Mint
### What is happening?
Exporting a circuit with small rotation gates through circuit.qasm() results in scientific notation being used. This causes issues when importing the QASM code into another quantum computing platform.
### How can we reproduce the issue?
```
from qiskit import QuantumCircuit
qc = QuantumCircuit(1)
qc.rx(0.00001, 0)
print(qc.qasm())
```
output:
```
OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
creg meas[1];
rx(1e-05) q[0];
barrier q[0];
measure q[0] -> meas[0];
```
### What should happen?
output:
```
OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
creg meas[1];
rx(0.00001) q[0];
barrier q[0];
measure q[0] -> meas[0];
```
### Any suggestions?
The conversion from float to string happens in `pi_check`. The call is `pi_check(0.00001, output="qasm", eps=1e-12)` and returns `'1e-05'`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/circuit/tools/pi_check.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2019.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12 # pylint: disable=too-many-return-statements
13
14 """Check if number close to values of PI
15 """
16
17 import numpy as np
18 from qiskit.circuit.parameterexpression import ParameterExpression
19 from qiskit.exceptions import QiskitError
20
21 MAX_FRAC = 16
22 N, D = np.meshgrid(np.arange(1, MAX_FRAC + 1), np.arange(1, MAX_FRAC + 1))
23 FRAC_MESH = N / D * np.pi
24 RECIP_MESH = N / D / np.pi
25 POW_LIST = np.pi ** np.arange(2, 5)
26
27
28 def pi_check(inpt, eps=1e-9, output="text", ndigits=None):
29 """Computes if a number is close to an integer
30 fraction or multiple of PI and returns the
31 corresponding string.
32
33 Args:
34 inpt (float): Number to check.
35 eps (float): EPS to check against.
36 output (str): Options are 'text' (default),
37 'latex', 'mpl', and 'qasm'.
38 ndigits (int or None): Number of digits to print
39 if returning raw inpt.
40 If `None` (default), Python's
41 default float formatting is used.
42
43 Returns:
44 str: string representation of output.
45
46 Raises:
47 QiskitError: if output is not a valid option.
48 """
49 if isinstance(inpt, ParameterExpression):
50 param_str = str(inpt)
51 from sympy import sympify
52
53 expr = sympify(inpt._symbol_expr)
54 syms = expr.atoms()
55 for sym in syms:
56 if not sym.is_number:
57 continue
58 pi = pi_check(abs(float(sym)), eps=eps, output=output, ndigits=ndigits)
59 try:
60 _ = float(pi)
61 except (ValueError, TypeError):
62 from sympy import sstr
63
64 sym_str = sstr(abs(sym), full_prec=False)
65 param_str = param_str.replace(sym_str, pi)
66 return param_str
67 elif isinstance(inpt, str):
68 return inpt
69
70 def normalize(single_inpt):
71 if abs(single_inpt) < eps:
72 return "0"
73
74 if output == "text":
75 pi = "π"
76 elif output == "qasm":
77 pi = "pi"
78 elif output == "latex":
79 pi = "\\pi"
80 elif output == "mpl":
81 pi = "$\\pi$"
82 else:
83 raise QiskitError("pi_check parameter output should be text, latex, mpl, or qasm.")
84
85 neg_str = "-" if single_inpt < 0 else ""
86
87 # First check is for whole multiples of pi
88 val = single_inpt / np.pi
89 if abs(val) >= 1 - eps:
90 if abs(abs(val) - abs(round(val))) < eps:
91 val = int(abs(round(val)))
92 if abs(val) == 1:
93 str_out = f"{neg_str}{pi}"
94 else:
95 if output == "qasm":
96 str_out = f"{neg_str}{val}*{pi}"
97 else:
98 str_out = f"{neg_str}{val}{pi}"
99 return str_out
100
101 # Second is a check for powers of pi
102 if abs(single_inpt) > np.pi:
103 power = np.where(abs(abs(single_inpt) - POW_LIST) < eps)
104 if power[0].shape[0]:
105 if output == "qasm":
106 if ndigits is None:
107 str_out = "{}".format(single_inpt)
108 else:
109 str_out = "{:.{}g}".format(single_inpt, ndigits)
110 elif output == "latex":
111 str_out = f"{neg_str}{pi}^{power[0][0] + 2}"
112 elif output == "mpl":
113 str_out = f"{neg_str}{pi}$^{power[0][0] + 2}$"
114 else:
115 str_out = f"{neg_str}{pi}**{power[0][0] + 2}"
116 return str_out
117
118 # Third is a check for a number larger than MAX_FRAC * pi, not a
119 # multiple or power of pi, since no fractions will exceed MAX_FRAC * pi
120 if abs(single_inpt) >= (MAX_FRAC * np.pi):
121 if ndigits is None:
122 str_out = "{}".format(single_inpt)
123 else:
124 str_out = "{:.{}g}".format(single_inpt, ndigits)
125 return str_out
126
127 # Fourth check is for fractions for 1*pi in the numer and any
128 # number in the denom.
129 val = np.pi / single_inpt
130 if abs(abs(val) - abs(round(val))) < eps:
131 val = int(abs(round(val)))
132 if output == "latex":
133 str_out = f"\\frac{{{neg_str}{pi}}}{{{val}}}"
134 else:
135 str_out = f"{neg_str}{pi}/{val}"
136 return str_out
137
138 # Fifth check is for fractions where the numer > 1*pi and numer
139 # is up to MAX_FRAC*pi and denom is up to MAX_FRAC and all
140 # fractions are reduced. Ex. 15pi/16, 2pi/5, 15pi/2, 16pi/9.
141 frac = np.where(np.abs(abs(single_inpt) - FRAC_MESH) < eps)
142 if frac[0].shape[0]:
143 numer = int(frac[1][0]) + 1
144 denom = int(frac[0][0]) + 1
145 if output == "latex":
146 str_out = f"\\frac{{{neg_str}{numer}{pi}}}{{{denom}}}"
147 elif output == "qasm":
148 str_out = f"{neg_str}{numer}*{pi}/{denom}"
149 else:
150 str_out = f"{neg_str}{numer}{pi}/{denom}"
151 return str_out
152
153 # Sixth check is for fractions where the numer > 1 and numer
154 # is up to MAX_FRAC and denom is up to MAX_FRAC*pi and all
155 # fractions are reduced. Ex. 15/16pi, 2/5pi, 15/2pi, 16/9pi
156 frac = np.where(np.abs(abs(single_inpt) - RECIP_MESH) < eps)
157 if frac[0].shape[0]:
158 numer = int(frac[1][0]) + 1
159 denom = int(frac[0][0]) + 1
160 if denom == 1 and output != "qasm":
161 denom = ""
162 if output == "latex":
163 str_out = f"\\frac{{{neg_str}{numer}}}{{{denom}{pi}}}"
164 elif output == "qasm":
165 str_out = f"{neg_str}{numer}/({denom}*{pi})"
166 else:
167 str_out = f"{neg_str}{numer}/{denom}{pi}"
168 return str_out
169
170 # Nothing found
171 if ndigits is None:
172 str_out = "{}".format(single_inpt)
173 else:
174 str_out = "{:.{}g}".format(single_inpt, ndigits)
175 return str_out
176
177 complex_inpt = complex(inpt)
178 real, imag = map(normalize, [complex_inpt.real, complex_inpt.imag])
179
180 jstr = "\\jmath" if output == "latex" else "j"
181 if real == "0" and imag != "0":
182 str_out = imag + jstr
183 elif real != "0" and imag != "0":
184 op_str = "+"
185 # Remove + if imag negative except for latex fractions
186 if complex_inpt.imag < 0 and (output != "latex" or "\\frac" not in imag):
187 op_str = ""
188 str_out = f"{real}{op_str}{imag}{jstr}"
189 else:
190 str_out = real
191 return str_out
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/circuit/tools/pi_check.py b/qiskit/circuit/tools/pi_check.py
--- a/qiskit/circuit/tools/pi_check.py
+++ b/qiskit/circuit/tools/pi_check.py
@@ -167,12 +167,11 @@
str_out = f"{neg_str}{numer}/{denom}{pi}"
return str_out
- # Nothing found
- if ndigits is None:
- str_out = "{}".format(single_inpt)
- else:
- str_out = "{:.{}g}".format(single_inpt, ndigits)
- return str_out
+ # Nothing found. The '#' forces a decimal point to be included, which OQ2 needs, but other
+ # formats don't really.
+ if output == "qasm":
+ return f"{single_inpt:#}" if ndigits is None else f"{single_inpt:#.{ndigits}g}"
+ return f"{single_inpt}" if ndigits is None else f"{single_inpt:.{ndigits}g}"
complex_inpt = complex(inpt)
real, imag = map(normalize, [complex_inpt.real, complex_inpt.imag])
| {"golden_diff": "diff --git a/qiskit/circuit/tools/pi_check.py b/qiskit/circuit/tools/pi_check.py\n--- a/qiskit/circuit/tools/pi_check.py\n+++ b/qiskit/circuit/tools/pi_check.py\n@@ -167,12 +167,11 @@\n str_out = f\"{neg_str}{numer}/{denom}{pi}\"\n return str_out\n \n- # Nothing found\n- if ndigits is None:\n- str_out = \"{}\".format(single_inpt)\n- else:\n- str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n- return str_out\n+ # Nothing found. The '#' forces a decimal point to be included, which OQ2 needs, but other\n+ # formats don't really.\n+ if output == \"qasm\":\n+ return f\"{single_inpt:#}\" if ndigits is None else f\"{single_inpt:#.{ndigits}g}\"\n+ return f\"{single_inpt}\" if ndigits is None else f\"{single_inpt:.{ndigits}g}\"\n \n complex_inpt = complex(inpt)\n real, imag = map(normalize, [complex_inpt.real, complex_inpt.imag])\n", "issue": "Scientific notation in QASM output causes export to other platform to fail\n### Environment\n\n- **Qiskit version**: {'qiskit-terra': '0.24.0', 'qiskit-aer': '0.12.0', 'qiskit-ignis': '0.7.1', 'qiskit-ibmq-provider': '0.20.2', 'qiskit': '0.43.0', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}\r\n- **Python version**: 3.10.6\r\n- **Operating system**: Linux Mint\n\n### What is happening?\n\nExporting a circuit with small rotation gates through circuit.qasm() results in scientific notation being used. This causes issues when importing the QASM code into another quantum computing platform.\n\n### How can we reproduce the issue?\n\n```\r\nfrom qiskit import QuantumCircuit\r\nqc = QuantumCircuit(1)\r\nqc.rx(0.00001, 0)\r\nprint(qc.qasm())\r\n```\r\noutput:\r\n```\r\nOPENQASM 2.0;\r\ninclude \"qelib1.inc\";\r\nqreg q[1];\r\ncreg meas[1];\r\nrx(1e-05) q[0];\r\nbarrier q[0];\r\nmeasure q[0] -> meas[0];\r\n```\n\n### What should happen?\n\noutput:\r\n```\r\nOPENQASM 2.0;\r\ninclude \"qelib1.inc\";\r\nqreg q[1];\r\ncreg meas[1];\r\nrx(0.00001) q[0];\r\nbarrier q[0];\r\nmeasure q[0] -> meas[0];\r\n```\n\n### Any suggestions?\n\nThe conversion from float to string happens in `pi_check`. The call is `pi_check(0.00001, output=\"qasm\", eps=1e-12)` and returns `'1e-05'`.\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n# pylint: disable=too-many-return-statements\n\n\"\"\"Check if number close to values of PI\n\"\"\"\n\nimport numpy as np\nfrom qiskit.circuit.parameterexpression import ParameterExpression\nfrom qiskit.exceptions import QiskitError\n\nMAX_FRAC = 16\nN, D = np.meshgrid(np.arange(1, MAX_FRAC + 1), np.arange(1, MAX_FRAC + 1))\nFRAC_MESH = N / D * np.pi\nRECIP_MESH = N / D / np.pi\nPOW_LIST = np.pi ** np.arange(2, 5)\n\n\ndef pi_check(inpt, eps=1e-9, output=\"text\", ndigits=None):\n \"\"\"Computes if a number is close to an integer\n fraction or multiple of PI and returns the\n corresponding string.\n\n Args:\n inpt (float): Number to check.\n eps (float): EPS to check against.\n output (str): Options are 'text' (default),\n 'latex', 'mpl', and 'qasm'.\n ndigits (int or None): Number of digits to print\n if returning raw inpt.\n If `None` (default), Python's\n default float formatting is used.\n\n Returns:\n str: string representation of output.\n\n Raises:\n QiskitError: if output is not a valid option.\n \"\"\"\n if isinstance(inpt, ParameterExpression):\n param_str = str(inpt)\n from sympy import sympify\n\n expr = sympify(inpt._symbol_expr)\n syms = expr.atoms()\n for sym in syms:\n if not sym.is_number:\n continue\n pi = pi_check(abs(float(sym)), eps=eps, output=output, ndigits=ndigits)\n try:\n _ = float(pi)\n except (ValueError, TypeError):\n from sympy import sstr\n\n sym_str = sstr(abs(sym), full_prec=False)\n param_str = param_str.replace(sym_str, pi)\n return param_str\n elif isinstance(inpt, str):\n return inpt\n\n def normalize(single_inpt):\n if abs(single_inpt) < eps:\n return \"0\"\n\n if output == \"text\":\n pi = \"\u03c0\"\n elif output == \"qasm\":\n pi = \"pi\"\n elif output == \"latex\":\n pi = \"\\\\pi\"\n elif output == \"mpl\":\n pi = \"$\\\\pi$\"\n else:\n raise QiskitError(\"pi_check parameter output should be text, latex, mpl, or qasm.\")\n\n neg_str = \"-\" if single_inpt < 0 else \"\"\n\n # First check is for whole multiples of pi\n val = single_inpt / np.pi\n if abs(val) >= 1 - eps:\n if abs(abs(val) - abs(round(val))) < eps:\n val = int(abs(round(val)))\n if abs(val) == 1:\n str_out = f\"{neg_str}{pi}\"\n else:\n if output == \"qasm\":\n str_out = f\"{neg_str}{val}*{pi}\"\n else:\n str_out = f\"{neg_str}{val}{pi}\"\n return str_out\n\n # Second is a check for powers of pi\n if abs(single_inpt) > np.pi:\n power = np.where(abs(abs(single_inpt) - POW_LIST) < eps)\n if power[0].shape[0]:\n if output == \"qasm\":\n if ndigits is None:\n str_out = \"{}\".format(single_inpt)\n else:\n str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n elif output == \"latex\":\n str_out = f\"{neg_str}{pi}^{power[0][0] + 2}\"\n elif output == \"mpl\":\n str_out = f\"{neg_str}{pi}$^{power[0][0] + 2}$\"\n else:\n str_out = f\"{neg_str}{pi}**{power[0][0] + 2}\"\n return str_out\n\n # Third is a check for a number larger than MAX_FRAC * pi, not a\n # multiple or power of pi, since no fractions will exceed MAX_FRAC * pi\n if abs(single_inpt) >= (MAX_FRAC * np.pi):\n if ndigits is None:\n str_out = \"{}\".format(single_inpt)\n else:\n str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n return str_out\n\n # Fourth check is for 
fractions for 1*pi in the numer and any\n # number in the denom.\n val = np.pi / single_inpt\n if abs(abs(val) - abs(round(val))) < eps:\n val = int(abs(round(val)))\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{pi}}}{{{val}}}\"\n else:\n str_out = f\"{neg_str}{pi}/{val}\"\n return str_out\n\n # Fifth check is for fractions where the numer > 1*pi and numer\n # is up to MAX_FRAC*pi and denom is up to MAX_FRAC and all\n # fractions are reduced. Ex. 15pi/16, 2pi/5, 15pi/2, 16pi/9.\n frac = np.where(np.abs(abs(single_inpt) - FRAC_MESH) < eps)\n if frac[0].shape[0]:\n numer = int(frac[1][0]) + 1\n denom = int(frac[0][0]) + 1\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{numer}{pi}}}{{{denom}}}\"\n elif output == \"qasm\":\n str_out = f\"{neg_str}{numer}*{pi}/{denom}\"\n else:\n str_out = f\"{neg_str}{numer}{pi}/{denom}\"\n return str_out\n\n # Sixth check is for fractions where the numer > 1 and numer\n # is up to MAX_FRAC and denom is up to MAX_FRAC*pi and all\n # fractions are reduced. Ex. 15/16pi, 2/5pi, 15/2pi, 16/9pi\n frac = np.where(np.abs(abs(single_inpt) - RECIP_MESH) < eps)\n if frac[0].shape[0]:\n numer = int(frac[1][0]) + 1\n denom = int(frac[0][0]) + 1\n if denom == 1 and output != \"qasm\":\n denom = \"\"\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{numer}}}{{{denom}{pi}}}\"\n elif output == \"qasm\":\n str_out = f\"{neg_str}{numer}/({denom}*{pi})\"\n else:\n str_out = f\"{neg_str}{numer}/{denom}{pi}\"\n return str_out\n\n # Nothing found\n if ndigits is None:\n str_out = \"{}\".format(single_inpt)\n else:\n str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n return str_out\n\n complex_inpt = complex(inpt)\n real, imag = map(normalize, [complex_inpt.real, complex_inpt.imag])\n\n jstr = \"\\\\jmath\" if output == \"latex\" else \"j\"\n if real == \"0\" and imag != \"0\":\n str_out = imag + jstr\n elif real != \"0\" and imag != \"0\":\n op_str = \"+\"\n # Remove + if imag negative except for latex fractions\n if complex_inpt.imag < 0 and (output != \"latex\" or \"\\\\frac\" not in imag):\n op_str = \"\"\n str_out = f\"{real}{op_str}{imag}{jstr}\"\n else:\n str_out = real\n return str_out\n", "path": "qiskit/circuit/tools/pi_check.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n# pylint: disable=too-many-return-statements\n\n\"\"\"Check if number close to values of PI\n\"\"\"\n\nimport numpy as np\nfrom qiskit.circuit.parameterexpression import ParameterExpression\nfrom qiskit.exceptions import QiskitError\n\nMAX_FRAC = 16\nN, D = np.meshgrid(np.arange(1, MAX_FRAC + 1), np.arange(1, MAX_FRAC + 1))\nFRAC_MESH = N / D * np.pi\nRECIP_MESH = N / D / np.pi\nPOW_LIST = np.pi ** np.arange(2, 5)\n\n\ndef pi_check(inpt, eps=1e-9, output=\"text\", ndigits=None):\n \"\"\"Computes if a number is close to an integer\n fraction or multiple of PI and returns the\n corresponding string.\n\n Args:\n inpt (float): Number to check.\n eps (float): EPS to check against.\n output (str): Options are 'text' (default),\n 'latex', 'mpl', and 'qasm'.\n ndigits (int or None): Number of digits to print\n if returning raw inpt.\n If `None` (default), Python's\n default float formatting is used.\n\n Returns:\n str: string representation of output.\n\n Raises:\n QiskitError: if output is not a valid option.\n \"\"\"\n if isinstance(inpt, ParameterExpression):\n param_str = str(inpt)\n from sympy import sympify\n\n expr = sympify(inpt._symbol_expr)\n syms = expr.atoms()\n for sym in syms:\n if not sym.is_number:\n continue\n pi = pi_check(abs(float(sym)), eps=eps, output=output, ndigits=ndigits)\n try:\n _ = float(pi)\n except (ValueError, TypeError):\n from sympy import sstr\n\n sym_str = sstr(abs(sym), full_prec=False)\n param_str = param_str.replace(sym_str, pi)\n return param_str\n elif isinstance(inpt, str):\n return inpt\n\n def normalize(single_inpt):\n if abs(single_inpt) < eps:\n return \"0\"\n\n if output == \"text\":\n pi = \"\u03c0\"\n elif output == \"qasm\":\n pi = \"pi\"\n elif output == \"latex\":\n pi = \"\\\\pi\"\n elif output == \"mpl\":\n pi = \"$\\\\pi$\"\n else:\n raise QiskitError(\"pi_check parameter output should be text, latex, mpl, or qasm.\")\n\n neg_str = \"-\" if single_inpt < 0 else \"\"\n\n # First check is for whole multiples of pi\n val = single_inpt / np.pi\n if abs(val) >= 1 - eps:\n if abs(abs(val) - abs(round(val))) < eps:\n val = int(abs(round(val)))\n if abs(val) == 1:\n str_out = f\"{neg_str}{pi}\"\n else:\n if output == \"qasm\":\n str_out = f\"{neg_str}{val}*{pi}\"\n else:\n str_out = f\"{neg_str}{val}{pi}\"\n return str_out\n\n # Second is a check for powers of pi\n if abs(single_inpt) > np.pi:\n power = np.where(abs(abs(single_inpt) - POW_LIST) < eps)\n if power[0].shape[0]:\n if output == \"qasm\":\n if ndigits is None:\n str_out = \"{}\".format(single_inpt)\n else:\n str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n elif output == \"latex\":\n str_out = f\"{neg_str}{pi}^{power[0][0] + 2}\"\n elif output == \"mpl\":\n str_out = f\"{neg_str}{pi}$^{power[0][0] + 2}$\"\n else:\n str_out = f\"{neg_str}{pi}**{power[0][0] + 2}\"\n return str_out\n\n # Third is a check for a number larger than MAX_FRAC * pi, not a\n # multiple or power of pi, since no fractions will exceed MAX_FRAC * pi\n if abs(single_inpt) >= (MAX_FRAC * np.pi):\n if ndigits is None:\n str_out = \"{}\".format(single_inpt)\n else:\n str_out = \"{:.{}g}\".format(single_inpt, ndigits)\n return str_out\n\n # Fourth check is for 
fractions for 1*pi in the numer and any\n # number in the denom.\n val = np.pi / single_inpt\n if abs(abs(val) - abs(round(val))) < eps:\n val = int(abs(round(val)))\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{pi}}}{{{val}}}\"\n else:\n str_out = f\"{neg_str}{pi}/{val}\"\n return str_out\n\n # Fifth check is for fractions where the numer > 1*pi and numer\n # is up to MAX_FRAC*pi and denom is up to MAX_FRAC and all\n # fractions are reduced. Ex. 15pi/16, 2pi/5, 15pi/2, 16pi/9.\n frac = np.where(np.abs(abs(single_inpt) - FRAC_MESH) < eps)\n if frac[0].shape[0]:\n numer = int(frac[1][0]) + 1\n denom = int(frac[0][0]) + 1\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{numer}{pi}}}{{{denom}}}\"\n elif output == \"qasm\":\n str_out = f\"{neg_str}{numer}*{pi}/{denom}\"\n else:\n str_out = f\"{neg_str}{numer}{pi}/{denom}\"\n return str_out\n\n # Sixth check is for fractions where the numer > 1 and numer\n # is up to MAX_FRAC and denom is up to MAX_FRAC*pi and all\n # fractions are reduced. Ex. 15/16pi, 2/5pi, 15/2pi, 16/9pi\n frac = np.where(np.abs(abs(single_inpt) - RECIP_MESH) < eps)\n if frac[0].shape[0]:\n numer = int(frac[1][0]) + 1\n denom = int(frac[0][0]) + 1\n if denom == 1 and output != \"qasm\":\n denom = \"\"\n if output == \"latex\":\n str_out = f\"\\\\frac{{{neg_str}{numer}}}{{{denom}{pi}}}\"\n elif output == \"qasm\":\n str_out = f\"{neg_str}{numer}/({denom}*{pi})\"\n else:\n str_out = f\"{neg_str}{numer}/{denom}{pi}\"\n return str_out\n\n # Nothing found. The '#' forces a decimal point to be included, which OQ2 needs, but other\n # formats don't really.\n if output == \"qasm\":\n return f\"{single_inpt:#}\" if ndigits is None else f\"{single_inpt:#.{ndigits}g}\"\n return f\"{single_inpt}\" if ndigits is None else f\"{single_inpt:.{ndigits}g}\"\n\n complex_inpt = complex(inpt)\n real, imag = map(normalize, [complex_inpt.real, complex_inpt.imag])\n\n jstr = \"\\\\jmath\" if output == \"latex\" else \"j\"\n if real == \"0\" and imag != \"0\":\n str_out = imag + jstr\n elif real != \"0\" and imag != \"0\":\n op_str = \"+\"\n # Remove + if imag negative except for latex fractions\n if complex_inpt.imag < 0 and (output != \"latex\" or \"\\\\frac\" not in imag):\n op_str = \"\"\n str_out = f\"{real}{op_str}{imag}{jstr}\"\n else:\n str_out = real\n return str_out\n", "path": "qiskit/circuit/tools/pi_check.py"}]} | 3,014 | 267 |
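For context on the patch in the row above: the qasm branch of `pi_check` now formats raw floats with Python's `#` (alternate form) flag, which guarantees a decimal point in the output so the value parses under OpenQASM 2's number grammar. A standalone sketch of that formatting behaviour (plain Python, no Qiskit needed; the sample values are illustrative and not taken from the dataset):

```python
# Illustration of the "#" alternate-form float formatting used in the golden diff.
for value in (0.00001, 3.5, 123456789.0):
    plain = f"{value}"         # str()-style; 0.00001 renders as "1e-05", with no decimal point
    forced = f"{value:#}"      # "#" forces a decimal-point character into the result
    rounded = f"{value:#.8g}"  # the ndigits branch combines "#" with "g"-style precision
    print(f"{plain!r:>15}  {forced!r:>15}  {rounded!r:>15}")
```

Only the `:#` and `:#.{ndigits}g` format specs come from the golden diff; everything else here is an illustrative example.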
gh_patches_debug_26074 | rasdani/github-patches | git_diff | comic__grand-challenge.org-448 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make error message emails less verbose
The entire stacktrace is included right now.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/evaluation/emails.py`
Content:
```
1 import json
2
3 from django.conf import settings
4 from django.core.mail import send_mail
5
6 from grandchallenge.core.urlresolvers import reverse
7
8
9 def send_failed_job_email(job):
10 message = (
11 f'Unfortunately the evaluation for the submission to '
12 f'{job.challenge.short_name} failed with an error. The error message '
13 f'is:\n\n'
14 f'{job.output}\n\n'
15 f'You may wish to try and correct this, or contact the challenge '
16 f'organizers. The following information may help them:\n'
17 f'User: {job.submission.creator.username}\n'
18 f'Job ID: {job.pk}\n'
19 f'Submission ID: {job.submission.pk}'
20 )
21 recipient_emails = [o.email for o in job.challenge.get_admins()]
22 recipient_emails.append(job.submission.creator.email)
23 for email in recipient_emails:
24 send_mail(
25 subject='Evaluation Failed',
26 message=message,
27 from_email=settings.DEFAULT_FROM_EMAIL,
28 recipient_list=[email],
29 )
30
31
32 def send_new_result_email(result):
33 recipient_emails = [o.email for o in result.challenge.get_admins()]
34 message = (
35 f'There is a new result for {result.challenge.short_name} from '
36 f'{result.job.submission.creator.username}. The following metrics '
37 f'were calculated:\n\n'
38 f'{json.dumps(result.metrics, indent=2)}\n\n'
39 )
40 if result.public:
41 leaderboard_url = reverse(
42 'evaluation:result-list',
43 kwargs={'challenge_short_name': result.challenge.short_name},
44 )
45 message += (
46 f'You can view the result on the leaderboard here: '
47 f'{leaderboard_url}'
48 )
49 recipient_emails.append(result.job.submission.creator.email)
50 else:
51 message += (
52 f'You can publish the result on the leaderboard here: '
53 f'{result.get_absolute_url()}'
54 )
55 for email in recipient_emails:
56 send_mail(
57 subject=f'New Result for {result.challenge.short_name}',
58 message=message,
59 from_email=settings.DEFAULT_FROM_EMAIL,
60 recipient_list=[email],
61 )
62
```
Path: `app/grandchallenge/evaluation/views.py`
Content:
```
1 from datetime import timedelta, datetime
2 from typing import Dict
3
4 from django.contrib.messages.views import SuccessMessageMixin
5 from django.core.files import File
6 from django.db.models import Q
7 from django.utils import timezone
8 from django.views.generic import (CreateView, ListView, DetailView, UpdateView)
9
10 from grandchallenge.core.permissions.mixins import (
11 UserIsChallengeAdminMixin, UserIsChallengeParticipantOrAdminMixin
12 )
13 from grandchallenge.core.urlresolvers import reverse
14 from grandchallenge.evaluation.forms import MethodForm, SubmissionForm
15 from grandchallenge.evaluation.models import (
16 Result, Submission, Job, Method, Config,
17 )
18
19
20 class ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):
21 model = Config
22 fields = (
23 'use_teams',
24 'daily_submission_limit',
25 'score_title',
26 'score_jsonpath',
27 'score_default_sort',
28 'extra_results_columns',
29 'allow_submission_comments',
30 'allow_supplementary_file',
31 'require_supplementary_file',
32 'supplementary_file_label',
33 'supplementary_file_help_text',
34 'show_supplementary_file_link',
35 )
36 success_message = "Configuration successfully updated"
37
38 def get_object(self, queryset=None):
39 challenge = self.request.challenge
40 return challenge.evaluation_config
41
42
43 class MethodCreate(UserIsChallengeAdminMixin, CreateView):
44 model = Method
45 form_class = MethodForm
46
47 def form_valid(self, form):
48 form.instance.creator = self.request.user
49 form.instance.challenge = self.request.challenge
50 uploaded_file = form.cleaned_data['chunked_upload'][0]
51 with uploaded_file.open() as f:
52 form.instance.image.save(uploaded_file.name, File(f))
53 return super(MethodCreate, self).form_valid(form)
54
55
56 class MethodList(UserIsChallengeAdminMixin, ListView):
57 model = Method
58
59 def get_queryset(self):
60 queryset = super(MethodList, self).get_queryset()
61 return queryset.filter(challenge=self.request.challenge)
62
63
64 class MethodDetail(UserIsChallengeAdminMixin, DetailView):
65 model = Method
66
67
68 class SubmissionCreate(
69 UserIsChallengeParticipantOrAdminMixin, SuccessMessageMixin, CreateView
70 ):
71 model = Submission
72 form_class = SubmissionForm
73 success_message = (
74 "Your submission was successful. "
75 "Please keep checking this page for your result."
76 )
77
78 def get_form_kwargs(self):
79 kwargs = super(SubmissionCreate, self).get_form_kwargs()
80 config = Config.objects.get(challenge=self.request.challenge)
81 kwargs.update(
82 {
83 'display_comment_field': config.allow_submission_comments,
84 'allow_supplementary_file': config.allow_supplementary_file,
85 'require_supplementary_file': config.require_supplementary_file,
86 'supplementary_file_label': config.supplementary_file_label,
87 'supplementary_file_help_text': config.supplementary_file_help_text,
88 }
89 )
90 return kwargs
91
92 def get_context_data(self, **kwargs):
93 context = super(SubmissionCreate, self).get_context_data(**kwargs)
94 config = Config.objects.get(challenge=self.request.challenge)
95 context.update(
96 self.get_next_submission(max_subs=config.daily_submission_limit)
97 )
98 pending_jobs = Job.objects.filter(
99 challenge=self.request.challenge,
100 submission__creator=self.request.user,
101 status__in=(Job.PENDING, Job.STARTED),
102 ).count()
103 context.update({'pending_jobs': pending_jobs})
104 return context
105
106 def get_next_submission(
107 self,
108 *,
109 max_subs: int,
110 period: timedelta =timedelta(days=1),
111 now: datetime = None
112 ) -> Dict:
113 """
114 Determines the number of submissions left for the user in a given time
115 period, and when they can next submit.
116
117 :return: A dictionary containing remaining_submissions (int) and
118 next_submission_at (datetime)
119 """
120 if now is None:
121 now = timezone.now()
122 subs = Submission.objects.filter(
123 challenge=self.request.challenge,
124 creator=self.request.user,
125 created__gte=now - period,
126 ).order_by(
127 '-created'
128 )
129 try:
130 next_sub_at = subs[max_subs - 1].created + period
131 except (IndexError, AssertionError):
132 next_sub_at = now
133 return {
134 'remaining_submissions': max_subs - len(subs),
135 'next_submission_at': next_sub_at,
136 }
137
138 def form_valid(self, form):
139 form.instance.creator = self.request.user
140 form.instance.challenge = self.request.challenge
141 uploaded_file = form.cleaned_data['chunked_upload'][0]
142 with uploaded_file.open() as f:
143 form.instance.file.save(uploaded_file.name, File(f))
144 return super(SubmissionCreate, self).form_valid(form)
145
146 def get_success_url(self):
147 return reverse(
148 'evaluation:job-list',
149 kwargs={'challenge_short_name': self.object.challenge.short_name},
150 )
151
152
153 class SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):
154 model = Submission
155
156 def get_queryset(self):
157 """ Admins see everything, participants just their submissions """
158 queryset = super(SubmissionList, self).get_queryset()
159 challenge = self.request.challenge
160 if challenge.is_admin(self.request.user):
161 return queryset.filter(challenge=self.request.challenge)
162
163 else:
164 return queryset.filter(
165 Q(challenge=self.request.challenge),
166 Q(creator__pk=self.request.user.pk),
167 )
168
169
170 class SubmissionDetail(UserIsChallengeAdminMixin, DetailView):
171 # TODO - if participant: list only their submissions
172 model = Submission
173
174
175 class JobCreate(UserIsChallengeAdminMixin, CreateView):
176 model = Job
177 fields = '__all__'
178
179
180 class JobList(UserIsChallengeParticipantOrAdminMixin, ListView):
181 model = Job
182
183 def get_queryset(self):
184 """ Admins see everything, participants just their jobs """
185 queryset = super(JobList, self).get_queryset()
186 queryset = queryset.select_related('result')
187 challenge = self.request.challenge
188 if challenge.is_admin(self.request.user):
189 return queryset.filter(challenge=self.request.challenge)
190
191 else:
192 return queryset.filter(
193 Q(challenge=self.request.challenge),
194 Q(submission__creator__pk=self.request.user.pk),
195 )
196
197
198 class JobDetail(UserIsChallengeAdminMixin, DetailView):
199 # TODO - if participant: list only their jobs
200 model = Job
201
202
203 class ResultList(ListView):
204 model = Result
205
206 def get_queryset(self):
207 queryset = super(ResultList, self).get_queryset()
208 queryset = queryset.select_related(
209 'job__submission__creator__user_profile'
210 )
211 return queryset.filter(
212 Q(challenge=self.request.challenge), Q(public=True)
213 )
214
215
216 class ResultDetail(DetailView):
217 model = Result
218
219
220 class ResultUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):
221 model = Result
222 fields = ('public',)
223 success_message = ('Result successfully updated.')
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/evaluation/emails.py b/app/grandchallenge/evaluation/emails.py
--- a/app/grandchallenge/evaluation/emails.py
+++ b/app/grandchallenge/evaluation/emails.py
@@ -4,6 +4,7 @@
from django.core.mail import send_mail
from grandchallenge.core.urlresolvers import reverse
+from grandchallenge.evaluation.templatetags.evaluation_extras import user_error
def send_failed_job_email(job):
@@ -11,7 +12,7 @@
f'Unfortunately the evaluation for the submission to '
f'{job.challenge.short_name} failed with an error. The error message '
f'is:\n\n'
- f'{job.output}\n\n'
+ f'{user_error(job.output)}\n\n'
f'You may wish to try and correct this, or contact the challenge '
f'organizers. The following information may help them:\n'
f'User: {job.submission.creator.username}\n'
diff --git a/app/grandchallenge/evaluation/views.py b/app/grandchallenge/evaluation/views.py
--- a/app/grandchallenge/evaluation/views.py
+++ b/app/grandchallenge/evaluation/views.py
@@ -72,7 +72,7 @@
form_class = SubmissionForm
success_message = (
"Your submission was successful. "
- "Please keep checking this page for your result."
+ "You will receive an email when your result is ready."
)
def get_form_kwargs(self):
| {"golden_diff": "diff --git a/app/grandchallenge/evaluation/emails.py b/app/grandchallenge/evaluation/emails.py\n--- a/app/grandchallenge/evaluation/emails.py\n+++ b/app/grandchallenge/evaluation/emails.py\n@@ -4,6 +4,7 @@\n from django.core.mail import send_mail\n \n from grandchallenge.core.urlresolvers import reverse\n+from grandchallenge.evaluation.templatetags.evaluation_extras import user_error\n \n \n def send_failed_job_email(job):\n@@ -11,7 +12,7 @@\n f'Unfortunately the evaluation for the submission to '\n f'{job.challenge.short_name} failed with an error. The error message '\n f'is:\\n\\n'\n- f'{job.output}\\n\\n'\n+ f'{user_error(job.output)}\\n\\n'\n f'You may wish to try and correct this, or contact the challenge '\n f'organizers. The following information may help them:\\n'\n f'User: {job.submission.creator.username}\\n'\ndiff --git a/app/grandchallenge/evaluation/views.py b/app/grandchallenge/evaluation/views.py\n--- a/app/grandchallenge/evaluation/views.py\n+++ b/app/grandchallenge/evaluation/views.py\n@@ -72,7 +72,7 @@\n form_class = SubmissionForm\n success_message = (\n \"Your submission was successful. \"\n- \"Please keep checking this page for your result.\"\n+ \"You will receive an email when your result is ready.\"\n )\n \n def get_form_kwargs(self):\n", "issue": "Make error message emails less verbose\nThe entire stacktrace is included right now.\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.core.mail import send_mail\n\nfrom grandchallenge.core.urlresolvers import reverse\n\n\ndef send_failed_job_email(job):\n message = (\n f'Unfortunately the evaluation for the submission to '\n f'{job.challenge.short_name} failed with an error. The error message '\n f'is:\\n\\n'\n f'{job.output}\\n\\n'\n f'You may wish to try and correct this, or contact the challenge '\n f'organizers. The following information may help them:\\n'\n f'User: {job.submission.creator.username}\\n'\n f'Job ID: {job.pk}\\n'\n f'Submission ID: {job.submission.pk}'\n )\n recipient_emails = [o.email for o in job.challenge.get_admins()]\n recipient_emails.append(job.submission.creator.email)\n for email in recipient_emails:\n send_mail(\n subject='Evaluation Failed',\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n\n\ndef send_new_result_email(result):\n recipient_emails = [o.email for o in result.challenge.get_admins()]\n message = (\n f'There is a new result for {result.challenge.short_name} from '\n f'{result.job.submission.creator.username}. 
The following metrics '\n f'were calculated:\\n\\n'\n f'{json.dumps(result.metrics, indent=2)}\\n\\n'\n )\n if result.public:\n leaderboard_url = reverse(\n 'evaluation:result-list',\n kwargs={'challenge_short_name': result.challenge.short_name},\n )\n message += (\n f'You can view the result on the leaderboard here: '\n f'{leaderboard_url}'\n )\n recipient_emails.append(result.job.submission.creator.email)\n else:\n message += (\n f'You can publish the result on the leaderboard here: '\n f'{result.get_absolute_url()}'\n )\n for email in recipient_emails:\n send_mail(\n subject=f'New Result for {result.challenge.short_name}',\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n", "path": "app/grandchallenge/evaluation/emails.py"}, {"content": "from datetime import timedelta, datetime\nfrom typing import Dict\n\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.files import File\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.views.generic import (CreateView, ListView, DetailView, UpdateView)\n\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin, UserIsChallengeParticipantOrAdminMixin\n)\nfrom grandchallenge.core.urlresolvers import reverse\nfrom grandchallenge.evaluation.forms import MethodForm, SubmissionForm\nfrom grandchallenge.evaluation.models import (\n Result, Submission, Job, Method, Config,\n)\n\n\nclass ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n model = Config\n fields = (\n 'use_teams',\n 'daily_submission_limit',\n 'score_title',\n 'score_jsonpath',\n 'score_default_sort',\n 'extra_results_columns',\n 'allow_submission_comments',\n 'allow_supplementary_file',\n 'require_supplementary_file',\n 'supplementary_file_label',\n 'supplementary_file_help_text',\n 'show_supplementary_file_link',\n )\n success_message = \"Configuration successfully updated\"\n\n def get_object(self, queryset=None):\n challenge = self.request.challenge\n return challenge.evaluation_config\n\n\nclass MethodCreate(UserIsChallengeAdminMixin, CreateView):\n model = Method\n form_class = MethodForm\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.challenge = self.request.challenge\n uploaded_file = form.cleaned_data['chunked_upload'][0]\n with uploaded_file.open() as f:\n form.instance.image.save(uploaded_file.name, File(f))\n return super(MethodCreate, self).form_valid(form)\n\n\nclass MethodList(UserIsChallengeAdminMixin, ListView):\n model = Method\n\n def get_queryset(self):\n queryset = super(MethodList, self).get_queryset()\n return queryset.filter(challenge=self.request.challenge)\n\n\nclass MethodDetail(UserIsChallengeAdminMixin, DetailView):\n model = Method\n\n\nclass SubmissionCreate(\n UserIsChallengeParticipantOrAdminMixin, SuccessMessageMixin, CreateView\n):\n model = Submission\n form_class = SubmissionForm\n success_message = (\n \"Your submission was successful. 
\"\n \"Please keep checking this page for your result.\"\n )\n\n def get_form_kwargs(self):\n kwargs = super(SubmissionCreate, self).get_form_kwargs()\n config = Config.objects.get(challenge=self.request.challenge)\n kwargs.update(\n {\n 'display_comment_field': config.allow_submission_comments,\n 'allow_supplementary_file': config.allow_supplementary_file,\n 'require_supplementary_file': config.require_supplementary_file,\n 'supplementary_file_label': config.supplementary_file_label,\n 'supplementary_file_help_text': config.supplementary_file_help_text,\n }\n )\n return kwargs\n\n def get_context_data(self, **kwargs):\n context = super(SubmissionCreate, self).get_context_data(**kwargs)\n config = Config.objects.get(challenge=self.request.challenge)\n context.update(\n self.get_next_submission(max_subs=config.daily_submission_limit)\n )\n pending_jobs = Job.objects.filter(\n challenge=self.request.challenge,\n submission__creator=self.request.user,\n status__in=(Job.PENDING, Job.STARTED),\n ).count()\n context.update({'pending_jobs': pending_jobs})\n return context\n\n def get_next_submission(\n self,\n *,\n max_subs: int,\n period: timedelta =timedelta(days=1),\n now: datetime = None\n ) -> Dict:\n \"\"\"\n Determines the number of submissions left for the user in a given time\n period, and when they can next submit.\n\n :return: A dictionary containing remaining_submissions (int) and\n next_submission_at (datetime)\n \"\"\"\n if now is None:\n now = timezone.now()\n subs = Submission.objects.filter(\n challenge=self.request.challenge,\n creator=self.request.user,\n created__gte=now - period,\n ).order_by(\n '-created'\n )\n try:\n next_sub_at = subs[max_subs - 1].created + period\n except (IndexError, AssertionError):\n next_sub_at = now\n return {\n 'remaining_submissions': max_subs - len(subs),\n 'next_submission_at': next_sub_at,\n }\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.challenge = self.request.challenge\n uploaded_file = form.cleaned_data['chunked_upload'][0]\n with uploaded_file.open() as f:\n form.instance.file.save(uploaded_file.name, File(f))\n return super(SubmissionCreate, self).form_valid(form)\n\n def get_success_url(self):\n return reverse(\n 'evaluation:job-list',\n kwargs={'challenge_short_name': self.object.challenge.short_name},\n )\n\n\nclass SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Submission\n\n def get_queryset(self):\n \"\"\" Admins see everything, participants just their submissions \"\"\"\n queryset = super(SubmissionList, self).get_queryset()\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return queryset.filter(\n Q(challenge=self.request.challenge),\n Q(creator__pk=self.request.user.pk),\n )\n\n\nclass SubmissionDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their submissions\n model = Submission\n\n\nclass JobCreate(UserIsChallengeAdminMixin, CreateView):\n model = Job\n fields = '__all__'\n\n\nclass JobList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Job\n\n def get_queryset(self):\n \"\"\" Admins see everything, participants just their jobs \"\"\"\n queryset = super(JobList, self).get_queryset()\n queryset = queryset.select_related('result')\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return 
queryset.filter(\n Q(challenge=self.request.challenge),\n Q(submission__creator__pk=self.request.user.pk),\n )\n\n\nclass JobDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their jobs\n model = Job\n\n\nclass ResultList(ListView):\n model = Result\n\n def get_queryset(self):\n queryset = super(ResultList, self).get_queryset()\n queryset = queryset.select_related(\n 'job__submission__creator__user_profile'\n )\n return queryset.filter(\n Q(challenge=self.request.challenge), Q(public=True)\n )\n\n\nclass ResultDetail(DetailView):\n model = Result\n\n\nclass ResultUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n model = Result\n fields = ('public',)\n success_message = ('Result successfully updated.')\n", "path": "app/grandchallenge/evaluation/views.py"}], "after_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.core.mail import send_mail\n\nfrom grandchallenge.core.urlresolvers import reverse\nfrom grandchallenge.evaluation.templatetags.evaluation_extras import user_error\n\n\ndef send_failed_job_email(job):\n message = (\n f'Unfortunately the evaluation for the submission to '\n f'{job.challenge.short_name} failed with an error. The error message '\n f'is:\\n\\n'\n f'{user_error(job.output)}\\n\\n'\n f'You may wish to try and correct this, or contact the challenge '\n f'organizers. The following information may help them:\\n'\n f'User: {job.submission.creator.username}\\n'\n f'Job ID: {job.pk}\\n'\n f'Submission ID: {job.submission.pk}'\n )\n recipient_emails = [o.email for o in job.challenge.get_admins()]\n recipient_emails.append(job.submission.creator.email)\n for email in recipient_emails:\n send_mail(\n subject='Evaluation Failed',\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n\n\ndef send_new_result_email(result):\n recipient_emails = [o.email for o in result.challenge.get_admins()]\n message = (\n f'There is a new result for {result.challenge.short_name} from '\n f'{result.job.submission.creator.username}. 
The following metrics '\n f'were calculated:\\n\\n'\n f'{json.dumps(result.metrics, indent=2)}\\n\\n'\n )\n if result.public:\n leaderboard_url = reverse(\n 'evaluation:result-list',\n kwargs={'challenge_short_name': result.challenge.short_name},\n )\n message += (\n f'You can view the result on the leaderboard here: '\n f'{leaderboard_url}'\n )\n recipient_emails.append(result.job.submission.creator.email)\n else:\n message += (\n f'You can publish the result on the leaderboard here: '\n f'{result.get_absolute_url()}'\n )\n for email in recipient_emails:\n send_mail(\n subject=f'New Result for {result.challenge.short_name}',\n message=message,\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[email],\n )\n", "path": "app/grandchallenge/evaluation/emails.py"}, {"content": "from datetime import timedelta, datetime\nfrom typing import Dict\n\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.files import File\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.views.generic import (CreateView, ListView, DetailView, UpdateView)\n\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin, UserIsChallengeParticipantOrAdminMixin\n)\nfrom grandchallenge.core.urlresolvers import reverse\nfrom grandchallenge.evaluation.forms import MethodForm, SubmissionForm\nfrom grandchallenge.evaluation.models import (\n Result, Submission, Job, Method, Config,\n)\n\n\nclass ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n model = Config\n fields = (\n 'use_teams',\n 'daily_submission_limit',\n 'score_title',\n 'score_jsonpath',\n 'score_default_sort',\n 'extra_results_columns',\n 'allow_submission_comments',\n 'allow_supplementary_file',\n 'require_supplementary_file',\n 'supplementary_file_label',\n 'supplementary_file_help_text',\n 'show_supplementary_file_link',\n )\n success_message = \"Configuration successfully updated\"\n\n def get_object(self, queryset=None):\n challenge = self.request.challenge\n return challenge.evaluation_config\n\n\nclass MethodCreate(UserIsChallengeAdminMixin, CreateView):\n model = Method\n form_class = MethodForm\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.challenge = self.request.challenge\n uploaded_file = form.cleaned_data['chunked_upload'][0]\n with uploaded_file.open() as f:\n form.instance.image.save(uploaded_file.name, File(f))\n return super(MethodCreate, self).form_valid(form)\n\n\nclass MethodList(UserIsChallengeAdminMixin, ListView):\n model = Method\n\n def get_queryset(self):\n queryset = super(MethodList, self).get_queryset()\n return queryset.filter(challenge=self.request.challenge)\n\n\nclass MethodDetail(UserIsChallengeAdminMixin, DetailView):\n model = Method\n\n\nclass SubmissionCreate(\n UserIsChallengeParticipantOrAdminMixin, SuccessMessageMixin, CreateView\n):\n model = Submission\n form_class = SubmissionForm\n success_message = (\n \"Your submission was successful. 
\"\n \"You will receive an email when your result is ready.\"\n )\n\n def get_form_kwargs(self):\n kwargs = super(SubmissionCreate, self).get_form_kwargs()\n config = Config.objects.get(challenge=self.request.challenge)\n kwargs.update(\n {\n 'display_comment_field': config.allow_submission_comments,\n 'allow_supplementary_file': config.allow_supplementary_file,\n 'require_supplementary_file': config.require_supplementary_file,\n 'supplementary_file_label': config.supplementary_file_label,\n 'supplementary_file_help_text': config.supplementary_file_help_text,\n }\n )\n return kwargs\n\n def get_context_data(self, **kwargs):\n context = super(SubmissionCreate, self).get_context_data(**kwargs)\n config = Config.objects.get(challenge=self.request.challenge)\n context.update(\n self.get_next_submission(max_subs=config.daily_submission_limit)\n )\n pending_jobs = Job.objects.filter(\n challenge=self.request.challenge,\n submission__creator=self.request.user,\n status__in=(Job.PENDING, Job.STARTED),\n ).count()\n context.update({'pending_jobs': pending_jobs})\n return context\n\n def get_next_submission(\n self,\n *,\n max_subs: int,\n period: timedelta =timedelta(days=1),\n now: datetime = None\n ) -> Dict:\n \"\"\"\n Determines the number of submissions left for the user in a given time\n period, and when they can next submit.\n\n :return: A dictionary containing remaining_submissions (int) and\n next_submission_at (datetime)\n \"\"\"\n if now is None:\n now = timezone.now()\n subs = Submission.objects.filter(\n challenge=self.request.challenge,\n creator=self.request.user,\n created__gte=now - period,\n ).order_by(\n '-created'\n )\n try:\n next_sub_at = subs[max_subs - 1].created + period\n except (IndexError, AssertionError):\n next_sub_at = now\n return {\n 'remaining_submissions': max_subs - len(subs),\n 'next_submission_at': next_sub_at,\n }\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.challenge = self.request.challenge\n uploaded_file = form.cleaned_data['chunked_upload'][0]\n with uploaded_file.open() as f:\n form.instance.file.save(uploaded_file.name, File(f))\n return super(SubmissionCreate, self).form_valid(form)\n\n def get_success_url(self):\n return reverse(\n 'evaluation:job-list',\n kwargs={'challenge_short_name': self.object.challenge.short_name},\n )\n\n\nclass SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Submission\n\n def get_queryset(self):\n \"\"\" Admins see everything, participants just their submissions \"\"\"\n queryset = super(SubmissionList, self).get_queryset()\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return queryset.filter(\n Q(challenge=self.request.challenge),\n Q(creator__pk=self.request.user.pk),\n )\n\n\nclass SubmissionDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their submissions\n model = Submission\n\n\nclass JobCreate(UserIsChallengeAdminMixin, CreateView):\n model = Job\n fields = '__all__'\n\n\nclass JobList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Job\n\n def get_queryset(self):\n \"\"\" Admins see everything, participants just their jobs \"\"\"\n queryset = super(JobList, self).get_queryset()\n queryset = queryset.select_related('result')\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return 
queryset.filter(\n Q(challenge=self.request.challenge),\n Q(submission__creator__pk=self.request.user.pk),\n )\n\n\nclass JobDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their jobs\n model = Job\n\n\nclass ResultList(ListView):\n model = Result\n\n def get_queryset(self):\n queryset = super(ResultList, self).get_queryset()\n queryset = queryset.select_related(\n 'job__submission__creator__user_profile'\n )\n return queryset.filter(\n Q(challenge=self.request.challenge), Q(public=True)\n )\n\n\nclass ResultDetail(DetailView):\n model = Result\n\n\nclass ResultUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n model = Result\n fields = ('public',)\n success_message = ('Result successfully updated.')\n", "path": "app/grandchallenge/evaluation/views.py"}]} | 2,928 | 333 |
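For context on the patch in the row above: the failure e-mail now routes `job.output` through `user_error` from `grandchallenge.evaluation.templatetags.evaluation_extras`, a helper referenced by the diff but not included in this row. A hypothetical stand-in, assuming it keeps only the final non-empty line of a captured traceback (the real filter may behave differently):

```python
# Hypothetical stand-in for the user_error template filter imported by the golden diff;
# the actual grand-challenge.org implementation is not shown in this row.
def user_error(obj: str) -> str:
    lines = [line for line in str(obj).splitlines() if line.strip()]
    return lines[-1] if lines else ""


traceback_text = (
    "Traceback (most recent call last):\n"
    '  File "evaluate.py", line 12, in <module>\n'
    "    main()\n"
    "ValueError: ground truth file is missing\n"
)
print(user_error(traceback_text))  # -> ValueError: ground truth file is missing
```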
gh_patches_debug_6549 | rasdani/github-patches | git_diff | encode__uvicorn-754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lifespan scope is missing asgi dict
the lifespan scope (https://asgi.readthedocs.io/en/latest/specs/lifespan.html#scope) is missing the asgi dict
prompted by
_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/pull/751_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/lifespan/on.py`
Content:
```
1 import asyncio
2 import logging
3
4 STATE_TRANSITION_ERROR = "Got invalid state transition on lifespan protocol."
5
6
7 class LifespanOn:
8 def __init__(self, config):
9 if not config.loaded:
10 config.load()
11
12 self.config = config
13 self.logger = logging.getLogger("uvicorn.error")
14 self.startup_event = asyncio.Event()
15 self.shutdown_event = asyncio.Event()
16 self.receive_queue = asyncio.Queue()
17 self.error_occured = False
18 self.startup_failed = False
19 self.should_exit = False
20
21 async def startup(self):
22 self.logger.info("Waiting for application startup.")
23
24 loop = asyncio.get_event_loop()
25 loop.create_task(self.main())
26
27 await self.receive_queue.put({"type": "lifespan.startup"})
28 await self.startup_event.wait()
29
30 if self.startup_failed or (self.error_occured and self.config.lifespan == "on"):
31 self.logger.error("Application startup failed. Exiting.")
32 self.should_exit = True
33 else:
34 self.logger.info("Application startup complete.")
35
36 async def shutdown(self):
37 if self.error_occured:
38 return
39 self.logger.info("Waiting for application shutdown.")
40 await self.receive_queue.put({"type": "lifespan.shutdown"})
41 await self.shutdown_event.wait()
42 self.logger.info("Application shutdown complete.")
43
44 async def main(self):
45 try:
46 app = self.config.loaded_app
47 scope = {"type": "lifespan"}
48 await app(scope, self.receive, self.send)
49 except BaseException as exc:
50 self.asgi = None
51 self.error_occured = True
52 if self.startup_failed:
53 return
54 if self.config.lifespan == "auto":
55 msg = "ASGI 'lifespan' protocol appears unsupported."
56 self.logger.info(msg)
57 else:
58 msg = "Exception in 'lifespan' protocol\n"
59 self.logger.error(msg, exc_info=exc)
60 finally:
61 self.startup_event.set()
62 self.shutdown_event.set()
63
64 async def send(self, message):
65 assert message["type"] in (
66 "lifespan.startup.complete",
67 "lifespan.startup.failed",
68 "lifespan.shutdown.complete",
69 )
70
71 if message["type"] == "lifespan.startup.complete":
72 assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
73 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
74 self.startup_event.set()
75
76 elif message["type"] == "lifespan.startup.failed":
77 assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
78 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
79 self.startup_event.set()
80 self.startup_failed = True
81 if message.get("message"):
82 self.logger.error(message["message"])
83
84 elif message["type"] == "lifespan.shutdown.complete":
85 assert self.startup_event.is_set(), STATE_TRANSITION_ERROR
86 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
87 self.shutdown_event.set()
88
89 async def receive(self):
90 return await self.receive_queue.get()
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/lifespan/on.py b/uvicorn/lifespan/on.py
--- a/uvicorn/lifespan/on.py
+++ b/uvicorn/lifespan/on.py
@@ -44,7 +44,10 @@
async def main(self):
try:
app = self.config.loaded_app
- scope = {"type": "lifespan"}
+ scope = {
+ "type": "lifespan",
+ "asgi": {"version": self.config.asgi_version, "spec_version": "2.0"},
+ }
await app(scope, self.receive, self.send)
except BaseException as exc:
self.asgi = None
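With the patch, an application can inspect the spec version during lifespan startup. A rough, self-contained sketch (illustrative only, not taken from the uvicorn code base):
```python
async def app(scope, receive, send):
    # Toy ASGI app that only speaks the lifespan protocol.
    assert scope["type"] == "lifespan"
    # The patched scope now exposes an "asgi" dict the app can read.
    spec_version = scope.get("asgi", {}).get("spec_version", "1.0")
    print("lifespan spec_version:", spec_version)
    while True:
        message = await receive()
        if message["type"] == "lifespan.startup":
            await send({"type": "lifespan.startup.complete"})
        elif message["type"] == "lifespan.shutdown":
            await send({"type": "lifespan.shutdown.complete"})
            return
```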
| {"golden_diff": "diff --git a/uvicorn/lifespan/on.py b/uvicorn/lifespan/on.py\n--- a/uvicorn/lifespan/on.py\n+++ b/uvicorn/lifespan/on.py\n@@ -44,7 +44,10 @@\n async def main(self):\n try:\n app = self.config.loaded_app\n- scope = {\"type\": \"lifespan\"}\n+ scope = {\n+ \"type\": \"lifespan\",\n+ \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.0\"},\n+ }\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n", "issue": "Lifespan scope is missing asgi dict\nthe lifespan scope (https://asgi.readthedocs.io/en/latest/specs/lifespan.html#scope) is missing the asgi dict\r\n\r\nprompted by\r\n\r\n_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/pull/751_\n", "before_files": [{"content": "import asyncio\nimport logging\n\nSTATE_TRANSITION_ERROR = \"Got invalid state transition on lifespan protocol.\"\n\n\nclass LifespanOn:\n def __init__(self, config):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.startup_event = asyncio.Event()\n self.shutdown_event = asyncio.Event()\n self.receive_queue = asyncio.Queue()\n self.error_occured = False\n self.startup_failed = False\n self.should_exit = False\n\n async def startup(self):\n self.logger.info(\"Waiting for application startup.\")\n\n loop = asyncio.get_event_loop()\n loop.create_task(self.main())\n\n await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n await self.startup_event.wait()\n\n if self.startup_failed or (self.error_occured and self.config.lifespan == \"on\"):\n self.logger.error(\"Application startup failed. Exiting.\")\n self.should_exit = True\n else:\n self.logger.info(\"Application startup complete.\")\n\n async def shutdown(self):\n if self.error_occured:\n return\n self.logger.info(\"Waiting for application shutdown.\")\n await self.receive_queue.put({\"type\": \"lifespan.shutdown\"})\n await self.shutdown_event.wait()\n self.logger.info(\"Application shutdown complete.\")\n\n async def main(self):\n try:\n app = self.config.loaded_app\n scope = {\"type\": \"lifespan\"}\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n self.error_occured = True\n if self.startup_failed:\n return\n if self.config.lifespan == \"auto\":\n msg = \"ASGI 'lifespan' protocol appears unsupported.\"\n self.logger.info(msg)\n else:\n msg = \"Exception in 'lifespan' protocol\\n\"\n self.logger.error(msg, exc_info=exc)\n finally:\n self.startup_event.set()\n self.shutdown_event.set()\n\n async def send(self, message):\n assert message[\"type\"] in (\n \"lifespan.startup.complete\",\n \"lifespan.startup.failed\",\n \"lifespan.shutdown.complete\",\n )\n\n if message[\"type\"] == \"lifespan.startup.complete\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n\n elif message[\"type\"] == \"lifespan.startup.failed\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n self.startup_failed = True\n if message.get(\"message\"):\n self.logger.error(message[\"message\"])\n\n elif message[\"type\"] == \"lifespan.shutdown.complete\":\n assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.shutdown_event.set()\n\n async def receive(self):\n return await 
self.receive_queue.get()\n", "path": "uvicorn/lifespan/on.py"}], "after_files": [{"content": "import asyncio\nimport logging\n\nSTATE_TRANSITION_ERROR = \"Got invalid state transition on lifespan protocol.\"\n\n\nclass LifespanOn:\n def __init__(self, config):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.startup_event = asyncio.Event()\n self.shutdown_event = asyncio.Event()\n self.receive_queue = asyncio.Queue()\n self.error_occured = False\n self.startup_failed = False\n self.should_exit = False\n\n async def startup(self):\n self.logger.info(\"Waiting for application startup.\")\n\n loop = asyncio.get_event_loop()\n loop.create_task(self.main())\n\n await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n await self.startup_event.wait()\n\n if self.startup_failed or (self.error_occured and self.config.lifespan == \"on\"):\n self.logger.error(\"Application startup failed. Exiting.\")\n self.should_exit = True\n else:\n self.logger.info(\"Application startup complete.\")\n\n async def shutdown(self):\n if self.error_occured:\n return\n self.logger.info(\"Waiting for application shutdown.\")\n await self.receive_queue.put({\"type\": \"lifespan.shutdown\"})\n await self.shutdown_event.wait()\n self.logger.info(\"Application shutdown complete.\")\n\n async def main(self):\n try:\n app = self.config.loaded_app\n scope = {\n \"type\": \"lifespan\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.0\"},\n }\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n self.error_occured = True\n if self.startup_failed:\n return\n if self.config.lifespan == \"auto\":\n msg = \"ASGI 'lifespan' protocol appears unsupported.\"\n self.logger.info(msg)\n else:\n msg = \"Exception in 'lifespan' protocol\\n\"\n self.logger.error(msg, exc_info=exc)\n finally:\n self.startup_event.set()\n self.shutdown_event.set()\n\n async def send(self, message):\n assert message[\"type\"] in (\n \"lifespan.startup.complete\",\n \"lifespan.startup.failed\",\n \"lifespan.shutdown.complete\",\n )\n\n if message[\"type\"] == \"lifespan.startup.complete\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n\n elif message[\"type\"] == \"lifespan.startup.failed\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n self.startup_failed = True\n if message.get(\"message\"):\n self.logger.error(message[\"message\"])\n\n elif message[\"type\"] == \"lifespan.shutdown.complete\":\n assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.shutdown_event.set()\n\n async def receive(self):\n return await self.receive_queue.get()\n", "path": "uvicorn/lifespan/on.py"}]} | 1,173 | 153 |
gh_patches_debug_5003 | rasdani/github-patches | git_diff | conda__conda-build-1839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conda build 2.1.6 sporadic failures
After updating from 2.1.5 to 2.1.6, I now see sporadic (non-reproducible) failures with conda build:
```bash
RuntimeError: Can't build <whatever> due to unsatisfiable dependencies:
Packages/dependencies missing in current linux-64 channels:
- openssl
- zlib
- ...
```
The failures occur only on dependencies provided with the conda distribution (e.g. xz, zlib, ...).
This is very annoying for nightly builds (sometimes they crash, sometimes they do not).
Any clues on where to look (changes between versions)?
conda --version is 4.3.14
thanks
--- END ISSUE ---
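A rough way to exercise the failing resolution step directly is the `check_install` helper from `conda_build/inspect.py` (shown below); this is only a reproduction sketch with placeholder package names, not a confirmed diagnosis:
```python
# Hypothetical reproduction sketch; package names are placeholders.
from conda_build.inspect import check_install

actions = check_install(["zlib", "openssl"], platform="linux-64")
print(actions)
```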
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/inspect.py`
Content:
```
1 # (c) Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6
7 from __future__ import absolute_import, division, print_function
8
9 from collections import defaultdict
10 import logging
11 from operator import itemgetter
12 from os.path import abspath, join, dirname, exists, basename, isdir
13 import re
14 import sys
15 import tempfile
16
17 from .conda_interface import (iteritems, specs_from_args, is_linked, linked_data, linked,
18 get_index)
19 from .conda_interface import display_actions, install_actions
20
21
22 from conda_build.os_utils.ldd import get_linkages, get_package_obj_files, get_untracked_obj_files
23 from conda_build.os_utils.macho import get_rpaths, human_filetype
24 from conda_build.utils import groupby, getter, comma_join, rm_rf
25
26
27 def which_prefix(path):
28 """
29 given the path (to a (presumably) conda installed file) return the
30 environment prefix in which the file in located
31 """
32 prefix = abspath(path)
33 while True:
34 if isdir(join(prefix, 'conda-meta')):
35 # we found the it, so let's return it
36 return prefix
37 if prefix == dirname(prefix):
38 # we cannot chop off any more directories, so we didn't find it
39 return None
40 prefix = dirname(prefix)
41
42
43 def which_package(path):
44 """
45 given the path (of a (presumably) conda installed file) iterate over
46 the conda packages the file came from. Usually the iteration yields
47 only one package.
48 """
49 path = abspath(path)
50 prefix = which_prefix(path)
51 if prefix is None:
52 raise RuntimeError("could not determine conda prefix from: %s" % path)
53 for dist in linked(prefix):
54 meta = is_linked(prefix, dist)
55 if any(abspath(join(prefix, f)) == path for f in meta['files']):
56 yield dist
57
58
59 def print_object_info(info, key):
60 output_string = ""
61 gb = groupby(key, info)
62 for header in sorted(gb, key=str):
63 output_string += header + "\n"
64 for f_info in sorted(gb[header], key=getter('filename')):
65 for data in sorted(f_info):
66 if data == key:
67 continue
68 if f_info[data] is None:
69 continue
70 output_string += ' %s: %s\n' % (data, f_info[data])
71 if len([i for i in f_info if f_info[i] is not None and i != key]) > 1:
72 output_string += '\n'
73 output_string += '\n'
74 return output_string
75
76
77 class _untracked_package:
78 def __str__(self):
79 return "<untracked>"
80
81
82 untracked_package = _untracked_package()
83
84
85 def check_install(packages, platform=None, channel_urls=(), prepend=True,
86 minimal_hint=False):
87 prefix = tempfile.mkdtemp('conda')
88 try:
89 specs = specs_from_args(packages)
90 index = get_index(channel_urls=channel_urls, prepend=prepend,
91 platform=platform, prefix=prefix)
92 actions = install_actions(prefix, index, specs, pinned=False,
93 minimal_hint=minimal_hint)
94 display_actions(actions, index)
95 return actions
96 finally:
97 rm_rf(prefix)
98 return None
99
100
101 def print_linkages(depmap, show_files=False):
102 # Print system and not found last
103 dist_depmap = {}
104 for k, v in depmap.items():
105 if hasattr(k, 'dist_name'):
106 k = k.dist_name
107 dist_depmap[k] = v
108
109 depmap = dist_depmap
110 k = sorted(set(depmap.keys()) - {'system', 'not found'})
111 all_deps = k if 'not found' not in depmap.keys() else k + ['system', 'not found']
112 output_string = ""
113 for dep in all_deps:
114 output_string += "%s:\n" % dep
115 if show_files:
116 for lib, path, binary in sorted(depmap[dep]):
117 output_string += " %s (%s) from %s\n" % (lib, path, binary)
118 else:
119 for lib, path in sorted(set(map(itemgetter(0, 1), depmap[dep]))):
120 output_string += " %s (%s)\n" % (lib, path)
121 output_string += "\n"
122 return output_string
123
124
125 def replace_path(binary, path, prefix):
126 if sys.platform.startswith('linux'):
127 return abspath(path)
128 elif sys.platform.startswith('darwin'):
129 if path == basename(binary):
130 return abspath(join(prefix, binary))
131 if '@rpath' in path:
132 rpaths = get_rpaths(join(prefix, binary))
133 if not rpaths:
134 return "NO LC_RPATH FOUND"
135 else:
136 for rpath in rpaths:
137 path1 = path.replace("@rpath", rpath)
138 path1 = path1.replace('@loader_path', join(prefix, dirname(binary)))
139 if exists(abspath(join(prefix, path1))):
140 path = path1
141 break
142 else:
143 return 'not found'
144 path = path.replace('@loader_path', join(prefix, dirname(binary)))
145 if path.startswith('/'):
146 return abspath(path)
147 return 'not found'
148
149
150 def test_installable(channel='defaults'):
151 success = True
152 log = logging.getLogger(__name__)
153 has_py = re.compile(r'py(\d)(\d)')
154 for platform in ['osx-64', 'linux-32', 'linux-64', 'win-32', 'win-64']:
155 log.info("######## Testing platform %s ########", platform)
156 channels = [channel]
157 index = get_index(channel_urls=channels, prepend=False, platform=platform)
158 for _, rec in iteritems(index):
159 # If we give channels at the command line, only look at
160 # packages from those channels (not defaults).
161 if channel != 'defaults' and rec.get('schannel', 'defaults') == 'defaults':
162 continue
163 name = rec['name']
164 if name in {'conda', 'conda-build'}:
165 # conda can only be installed in the root environment
166 continue
167 # Don't fail just because the package is a different version of Python
168 # than the default. We should probably check depends rather than the
169 # build string.
170 build = rec['build']
171 match = has_py.search(build)
172 assert match if 'py' in build else True, build
173 if match:
174 additional_packages = ['python=%s.%s' % (match.group(1), match.group(2))]
175 else:
176 additional_packages = []
177
178 version = rec['version']
179 log.info('Testing %s=%s', name, version)
180
181 try:
182 install_steps = check_install([name + '=' + version] + additional_packages,
183 channel_urls=channels, prepend=False,
184 platform=platform)
185 success &= bool(install_steps)
186 except KeyboardInterrupt:
187 raise
188 # sys.exit raises an exception that doesn't subclass from Exception
189 except BaseException as e:
190 success = False
191 log.error("FAIL: %s %s on %s with %s (%s)", name, version,
192 platform, additional_packages, e)
193 return success
194
195
196 def _installed(prefix):
197 installed = linked_data(prefix)
198 installed = {rec['name']: dist for dist, rec in iteritems(installed)}
199 return installed
200
201
202 def _underlined_text(text):
203 return str(text) + '\n' + '-' * len(str(text)) + '\n\n'
204
205
206 def inspect_linkages(packages, prefix=sys.prefix, untracked=False,
207 all_packages=False, show_files=False, groupby="package"):
208 pkgmap = {}
209
210 installed = _installed(prefix)
211
212 if not packages and not untracked and not all_packages:
213 raise ValueError("At least one package or --untracked or --all must be provided")
214
215 if all_packages:
216 packages = sorted(installed.keys())
217
218 if untracked:
219 packages.append(untracked_package)
220
221 for pkg in packages:
222 if pkg == untracked_package:
223 dist = untracked_package
224 elif pkg not in installed:
225 sys.exit("Package %s is not installed in %s" % (pkg, prefix))
226 else:
227 dist = installed[pkg]
228
229 if not sys.platform.startswith(('linux', 'darwin')):
230 sys.exit("Error: conda inspect linkages is only implemented in Linux and OS X")
231
232 if dist == untracked_package:
233 obj_files = get_untracked_obj_files(prefix)
234 else:
235 obj_files = get_package_obj_files(dist, prefix)
236 linkages = get_linkages(obj_files, prefix)
237 depmap = defaultdict(list)
238 pkgmap[pkg] = depmap
239 depmap['not found'] = []
240 for binary in linkages:
241 for lib, path in linkages[binary]:
242 path = replace_path(binary, path, prefix) if path not in {'',
243 'not found'} else path
244 if path.startswith(prefix):
245 deps = list(which_package(path))
246 if len(deps) > 1:
247 logging.getLogger(__name__).warn("Warning: %s comes from multiple "
248 "packages: %s", path, comma_join(deps))
249 if not deps:
250 if exists(path):
251 depmap['untracked'].append((lib, path.split(prefix +
252 '/', 1)[-1], binary))
253 else:
254 depmap['not found'].append((lib, path.split(prefix +
255 '/', 1)[-1], binary))
256 for d in deps:
257 depmap[d].append((lib, path.split(prefix + '/',
258 1)[-1], binary))
259 elif path == 'not found':
260 depmap['not found'].append((lib, path, binary))
261 else:
262 depmap['system'].append((lib, path, binary))
263
264 output_string = ""
265 if groupby == 'package':
266 for pkg in packages:
267 output_string += _underlined_text(pkg)
268 output_string += print_linkages(pkgmap[pkg], show_files=show_files)
269
270 elif groupby == 'dependency':
271 # {pkg: {dep: [files]}} -> {dep: {pkg: [files]}}
272 inverted_map = defaultdict(lambda: defaultdict(list))
273 for pkg in pkgmap:
274 for dep in pkgmap[pkg]:
275 if pkgmap[pkg][dep]:
276 inverted_map[dep][pkg] = pkgmap[pkg][dep]
277
278 # print system and not found last
279 k = sorted(set(inverted_map.keys()) - {'system', 'not found'})
280 for dep in k + ['system', 'not found']:
281 output_string += _underlined_text(dep)
282 output_string += print_linkages(inverted_map[dep], show_files=show_files)
283
284 else:
285 raise ValueError("Unrecognized groupby: %s" % groupby)
286 if hasattr(output_string, 'decode'):
287 output_string = output_string.decode('utf-8')
288 return output_string
289
290
291 def inspect_objects(packages, prefix=sys.prefix, groupby='package'):
292 installed = _installed(prefix)
293
294 output_string = ""
295 for pkg in packages:
296 if pkg == untracked_package:
297 dist = untracked_package
298 elif pkg not in installed:
299 raise ValueError("Package %s is not installed in %s" % (pkg, prefix))
300 else:
301 dist = installed[pkg]
302
303 output_string += _underlined_text(pkg)
304
305 if not sys.platform.startswith('darwin'):
306 sys.exit("Error: conda inspect objects is only implemented in OS X")
307
308 if dist == untracked_package:
309 obj_files = get_untracked_obj_files(prefix)
310 else:
311 obj_files = get_package_obj_files(dist, prefix)
312
313 info = []
314 for f in obj_files:
315 f_info = {}
316 path = join(prefix, f)
317 f_info['filetype'] = human_filetype(path)
318 f_info['rpath'] = ':'.join(get_rpaths(path))
319 f_info['filename'] = f
320 info.append(f_info)
321
322 output_string += print_object_info(info, groupby)
323 if hasattr(output_string, 'decode'):
324 output_string = output_string.decode('utf-8')
325 return output_string
326
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_build/inspect.py b/conda_build/inspect.py
--- a/conda_build/inspect.py
+++ b/conda_build/inspect.py
@@ -237,6 +237,7 @@
depmap = defaultdict(list)
pkgmap[pkg] = depmap
depmap['not found'] = []
+ depmap['system'] = []
for binary in linkages:
for lib, path in linkages[binary]:
path = replace_path(binary, path, prefix) if path not in {'',
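The one-line fix pre-seeds the `'system'` key the same way `'not found'` is pre-seeded. A simplified sketch of why that matters for `print_linkages`, which copies the defaultdict into a plain dict before indexing it (not the actual code, just the mechanism):
```python
from collections import defaultdict

depmap = defaultdict(list)
depmap['not found'] = []                 # seeded even before the patch
plain = dict(depmap.items())             # print_linkages copies into a plain dict
print('system' in plain)                 # False -> indexing plain['system'] would fail

depmap['system'] = []                    # the added line from the patch
print('system' in dict(depmap.items()))  # True -> safe to index later
```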
| {"golden_diff": "diff --git a/conda_build/inspect.py b/conda_build/inspect.py\n--- a/conda_build/inspect.py\n+++ b/conda_build/inspect.py\n@@ -237,6 +237,7 @@\n depmap = defaultdict(list)\n pkgmap[pkg] = depmap\n depmap['not found'] = []\n+ depmap['system'] = []\n for binary in linkages:\n for lib, path in linkages[binary]:\n path = replace_path(binary, path, prefix) if path not in {'',\n", "issue": "conda build 2.1.6 sporadic failures\nAfter updating 2.1.5 to 2.1.6, I now have sporadic (non reproductible) failures with conda build\r\n\r\n```bash\r\nRuntimeError: Can't build <whatever> due to unsatisfiable dependencies:\r\n\r\nPackages/dependencies missing in current linux-64 channels: \r\n - openssl\r\n - zlib\r\n - ...\r\n```\r\n, only on dependencies provided with the conda distribution (e.g. xz, zlib, ...)\r\n\r\nVery annoying on running nightly builds (sometimes crashing, sometimes not)\r\n\r\nAny clues on where to look (changes between versions) ?\r\n\r\nconda --version is 4.3.14\r\n\r\nthanks\n", "before_files": [{"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom collections import defaultdict\nimport logging\nfrom operator import itemgetter\nfrom os.path import abspath, join, dirname, exists, basename, isdir\nimport re\nimport sys\nimport tempfile\n\nfrom .conda_interface import (iteritems, specs_from_args, is_linked, linked_data, linked,\n get_index)\nfrom .conda_interface import display_actions, install_actions\n\n\nfrom conda_build.os_utils.ldd import get_linkages, get_package_obj_files, get_untracked_obj_files\nfrom conda_build.os_utils.macho import get_rpaths, human_filetype\nfrom conda_build.utils import groupby, getter, comma_join, rm_rf\n\n\ndef which_prefix(path):\n \"\"\"\n given the path (to a (presumably) conda installed file) return the\n environment prefix in which the file in located\n \"\"\"\n prefix = abspath(path)\n while True:\n if isdir(join(prefix, 'conda-meta')):\n # we found the it, so let's return it\n return prefix\n if prefix == dirname(prefix):\n # we cannot chop off any more directories, so we didn't find it\n return None\n prefix = dirname(prefix)\n\n\ndef which_package(path):\n \"\"\"\n given the path (of a (presumably) conda installed file) iterate over\n the conda packages the file came from. 
Usually the iteration yields\n only one package.\n \"\"\"\n path = abspath(path)\n prefix = which_prefix(path)\n if prefix is None:\n raise RuntimeError(\"could not determine conda prefix from: %s\" % path)\n for dist in linked(prefix):\n meta = is_linked(prefix, dist)\n if any(abspath(join(prefix, f)) == path for f in meta['files']):\n yield dist\n\n\ndef print_object_info(info, key):\n output_string = \"\"\n gb = groupby(key, info)\n for header in sorted(gb, key=str):\n output_string += header + \"\\n\"\n for f_info in sorted(gb[header], key=getter('filename')):\n for data in sorted(f_info):\n if data == key:\n continue\n if f_info[data] is None:\n continue\n output_string += ' %s: %s\\n' % (data, f_info[data])\n if len([i for i in f_info if f_info[i] is not None and i != key]) > 1:\n output_string += '\\n'\n output_string += '\\n'\n return output_string\n\n\nclass _untracked_package:\n def __str__(self):\n return \"<untracked>\"\n\n\nuntracked_package = _untracked_package()\n\n\ndef check_install(packages, platform=None, channel_urls=(), prepend=True,\n minimal_hint=False):\n prefix = tempfile.mkdtemp('conda')\n try:\n specs = specs_from_args(packages)\n index = get_index(channel_urls=channel_urls, prepend=prepend,\n platform=platform, prefix=prefix)\n actions = install_actions(prefix, index, specs, pinned=False,\n minimal_hint=minimal_hint)\n display_actions(actions, index)\n return actions\n finally:\n rm_rf(prefix)\n return None\n\n\ndef print_linkages(depmap, show_files=False):\n # Print system and not found last\n dist_depmap = {}\n for k, v in depmap.items():\n if hasattr(k, 'dist_name'):\n k = k.dist_name\n dist_depmap[k] = v\n\n depmap = dist_depmap\n k = sorted(set(depmap.keys()) - {'system', 'not found'})\n all_deps = k if 'not found' not in depmap.keys() else k + ['system', 'not found']\n output_string = \"\"\n for dep in all_deps:\n output_string += \"%s:\\n\" % dep\n if show_files:\n for lib, path, binary in sorted(depmap[dep]):\n output_string += \" %s (%s) from %s\\n\" % (lib, path, binary)\n else:\n for lib, path in sorted(set(map(itemgetter(0, 1), depmap[dep]))):\n output_string += \" %s (%s)\\n\" % (lib, path)\n output_string += \"\\n\"\n return output_string\n\n\ndef replace_path(binary, path, prefix):\n if sys.platform.startswith('linux'):\n return abspath(path)\n elif sys.platform.startswith('darwin'):\n if path == basename(binary):\n return abspath(join(prefix, binary))\n if '@rpath' in path:\n rpaths = get_rpaths(join(prefix, binary))\n if not rpaths:\n return \"NO LC_RPATH FOUND\"\n else:\n for rpath in rpaths:\n path1 = path.replace(\"@rpath\", rpath)\n path1 = path1.replace('@loader_path', join(prefix, dirname(binary)))\n if exists(abspath(join(prefix, path1))):\n path = path1\n break\n else:\n return 'not found'\n path = path.replace('@loader_path', join(prefix, dirname(binary)))\n if path.startswith('/'):\n return abspath(path)\n return 'not found'\n\n\ndef test_installable(channel='defaults'):\n success = True\n log = logging.getLogger(__name__)\n has_py = re.compile(r'py(\\d)(\\d)')\n for platform in ['osx-64', 'linux-32', 'linux-64', 'win-32', 'win-64']:\n log.info(\"######## Testing platform %s ########\", platform)\n channels = [channel]\n index = get_index(channel_urls=channels, prepend=False, platform=platform)\n for _, rec in iteritems(index):\n # If we give channels at the command line, only look at\n # packages from those channels (not defaults).\n if channel != 'defaults' and rec.get('schannel', 'defaults') == 'defaults':\n continue\n name = 
rec['name']\n if name in {'conda', 'conda-build'}:\n # conda can only be installed in the root environment\n continue\n # Don't fail just because the package is a different version of Python\n # than the default. We should probably check depends rather than the\n # build string.\n build = rec['build']\n match = has_py.search(build)\n assert match if 'py' in build else True, build\n if match:\n additional_packages = ['python=%s.%s' % (match.group(1), match.group(2))]\n else:\n additional_packages = []\n\n version = rec['version']\n log.info('Testing %s=%s', name, version)\n\n try:\n install_steps = check_install([name + '=' + version] + additional_packages,\n channel_urls=channels, prepend=False,\n platform=platform)\n success &= bool(install_steps)\n except KeyboardInterrupt:\n raise\n # sys.exit raises an exception that doesn't subclass from Exception\n except BaseException as e:\n success = False\n log.error(\"FAIL: %s %s on %s with %s (%s)\", name, version,\n platform, additional_packages, e)\n return success\n\n\ndef _installed(prefix):\n installed = linked_data(prefix)\n installed = {rec['name']: dist for dist, rec in iteritems(installed)}\n return installed\n\n\ndef _underlined_text(text):\n return str(text) + '\\n' + '-' * len(str(text)) + '\\n\\n'\n\n\ndef inspect_linkages(packages, prefix=sys.prefix, untracked=False,\n all_packages=False, show_files=False, groupby=\"package\"):\n pkgmap = {}\n\n installed = _installed(prefix)\n\n if not packages and not untracked and not all_packages:\n raise ValueError(\"At least one package or --untracked or --all must be provided\")\n\n if all_packages:\n packages = sorted(installed.keys())\n\n if untracked:\n packages.append(untracked_package)\n\n for pkg in packages:\n if pkg == untracked_package:\n dist = untracked_package\n elif pkg not in installed:\n sys.exit(\"Package %s is not installed in %s\" % (pkg, prefix))\n else:\n dist = installed[pkg]\n\n if not sys.platform.startswith(('linux', 'darwin')):\n sys.exit(\"Error: conda inspect linkages is only implemented in Linux and OS X\")\n\n if dist == untracked_package:\n obj_files = get_untracked_obj_files(prefix)\n else:\n obj_files = get_package_obj_files(dist, prefix)\n linkages = get_linkages(obj_files, prefix)\n depmap = defaultdict(list)\n pkgmap[pkg] = depmap\n depmap['not found'] = []\n for binary in linkages:\n for lib, path in linkages[binary]:\n path = replace_path(binary, path, prefix) if path not in {'',\n 'not found'} else path\n if path.startswith(prefix):\n deps = list(which_package(path))\n if len(deps) > 1:\n logging.getLogger(__name__).warn(\"Warning: %s comes from multiple \"\n \"packages: %s\", path, comma_join(deps))\n if not deps:\n if exists(path):\n depmap['untracked'].append((lib, path.split(prefix +\n '/', 1)[-1], binary))\n else:\n depmap['not found'].append((lib, path.split(prefix +\n '/', 1)[-1], binary))\n for d in deps:\n depmap[d].append((lib, path.split(prefix + '/',\n 1)[-1], binary))\n elif path == 'not found':\n depmap['not found'].append((lib, path, binary))\n else:\n depmap['system'].append((lib, path, binary))\n\n output_string = \"\"\n if groupby == 'package':\n for pkg in packages:\n output_string += _underlined_text(pkg)\n output_string += print_linkages(pkgmap[pkg], show_files=show_files)\n\n elif groupby == 'dependency':\n # {pkg: {dep: [files]}} -> {dep: {pkg: [files]}}\n inverted_map = defaultdict(lambda: defaultdict(list))\n for pkg in pkgmap:\n for dep in pkgmap[pkg]:\n if pkgmap[pkg][dep]:\n inverted_map[dep][pkg] = pkgmap[pkg][dep]\n\n # 
print system and not found last\n k = sorted(set(inverted_map.keys()) - {'system', 'not found'})\n for dep in k + ['system', 'not found']:\n output_string += _underlined_text(dep)\n output_string += print_linkages(inverted_map[dep], show_files=show_files)\n\n else:\n raise ValueError(\"Unrecognized groupby: %s\" % groupby)\n if hasattr(output_string, 'decode'):\n output_string = output_string.decode('utf-8')\n return output_string\n\n\ndef inspect_objects(packages, prefix=sys.prefix, groupby='package'):\n installed = _installed(prefix)\n\n output_string = \"\"\n for pkg in packages:\n if pkg == untracked_package:\n dist = untracked_package\n elif pkg not in installed:\n raise ValueError(\"Package %s is not installed in %s\" % (pkg, prefix))\n else:\n dist = installed[pkg]\n\n output_string += _underlined_text(pkg)\n\n if not sys.platform.startswith('darwin'):\n sys.exit(\"Error: conda inspect objects is only implemented in OS X\")\n\n if dist == untracked_package:\n obj_files = get_untracked_obj_files(prefix)\n else:\n obj_files = get_package_obj_files(dist, prefix)\n\n info = []\n for f in obj_files:\n f_info = {}\n path = join(prefix, f)\n f_info['filetype'] = human_filetype(path)\n f_info['rpath'] = ':'.join(get_rpaths(path))\n f_info['filename'] = f\n info.append(f_info)\n\n output_string += print_object_info(info, groupby)\n if hasattr(output_string, 'decode'):\n output_string = output_string.decode('utf-8')\n return output_string\n", "path": "conda_build/inspect.py"}], "after_files": [{"content": "# (c) Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom collections import defaultdict\nimport logging\nfrom operator import itemgetter\nfrom os.path import abspath, join, dirname, exists, basename, isdir\nimport re\nimport sys\nimport tempfile\n\nfrom .conda_interface import (iteritems, specs_from_args, is_linked, linked_data, linked,\n get_index)\nfrom .conda_interface import display_actions, install_actions\n\n\nfrom conda_build.os_utils.ldd import get_linkages, get_package_obj_files, get_untracked_obj_files\nfrom conda_build.os_utils.macho import get_rpaths, human_filetype\nfrom conda_build.utils import groupby, getter, comma_join, rm_rf\n\n\ndef which_prefix(path):\n \"\"\"\n given the path (to a (presumably) conda installed file) return the\n environment prefix in which the file in located\n \"\"\"\n prefix = abspath(path)\n while True:\n if isdir(join(prefix, 'conda-meta')):\n # we found the it, so let's return it\n return prefix\n if prefix == dirname(prefix):\n # we cannot chop off any more directories, so we didn't find it\n return None\n prefix = dirname(prefix)\n\n\ndef which_package(path):\n \"\"\"\n given the path (of a (presumably) conda installed file) iterate over\n the conda packages the file came from. 
Usually the iteration yields\n only one package.\n \"\"\"\n path = abspath(path)\n prefix = which_prefix(path)\n if prefix is None:\n raise RuntimeError(\"could not determine conda prefix from: %s\" % path)\n for dist in linked(prefix):\n meta = is_linked(prefix, dist)\n if any(abspath(join(prefix, f)) == path for f in meta['files']):\n yield dist\n\n\ndef print_object_info(info, key):\n output_string = \"\"\n gb = groupby(key, info)\n for header in sorted(gb, key=str):\n output_string += header + \"\\n\"\n for f_info in sorted(gb[header], key=getter('filename')):\n for data in sorted(f_info):\n if data == key:\n continue\n if f_info[data] is None:\n continue\n output_string += ' %s: %s\\n' % (data, f_info[data])\n if len([i for i in f_info if f_info[i] is not None and i != key]) > 1:\n output_string += '\\n'\n output_string += '\\n'\n return output_string\n\n\nclass _untracked_package:\n def __str__(self):\n return \"<untracked>\"\n\n\nuntracked_package = _untracked_package()\n\n\ndef check_install(packages, platform=None, channel_urls=(), prepend=True,\n minimal_hint=False):\n prefix = tempfile.mkdtemp('conda')\n try:\n specs = specs_from_args(packages)\n index = get_index(channel_urls=channel_urls, prepend=prepend,\n platform=platform, prefix=prefix)\n actions = install_actions(prefix, index, specs, pinned=False,\n minimal_hint=minimal_hint)\n display_actions(actions, index)\n return actions\n finally:\n rm_rf(prefix)\n return None\n\n\ndef print_linkages(depmap, show_files=False):\n # Print system and not found last\n dist_depmap = {}\n for k, v in depmap.items():\n if hasattr(k, 'dist_name'):\n k = k.dist_name\n dist_depmap[k] = v\n\n depmap = dist_depmap\n k = sorted(set(depmap.keys()) - {'system', 'not found'})\n all_deps = k if 'not found' not in depmap.keys() else k + ['system', 'not found']\n output_string = \"\"\n for dep in all_deps:\n output_string += \"%s:\\n\" % dep\n if show_files:\n for lib, path, binary in sorted(depmap[dep]):\n output_string += \" %s (%s) from %s\\n\" % (lib, path, binary)\n else:\n for lib, path in sorted(set(map(itemgetter(0, 1), depmap[dep]))):\n output_string += \" %s (%s)\\n\" % (lib, path)\n output_string += \"\\n\"\n return output_string\n\n\ndef replace_path(binary, path, prefix):\n if sys.platform.startswith('linux'):\n return abspath(path)\n elif sys.platform.startswith('darwin'):\n if path == basename(binary):\n return abspath(join(prefix, binary))\n if '@rpath' in path:\n rpaths = get_rpaths(join(prefix, binary))\n if not rpaths:\n return \"NO LC_RPATH FOUND\"\n else:\n for rpath in rpaths:\n path1 = path.replace(\"@rpath\", rpath)\n path1 = path1.replace('@loader_path', join(prefix, dirname(binary)))\n if exists(abspath(join(prefix, path1))):\n path = path1\n break\n else:\n return 'not found'\n path = path.replace('@loader_path', join(prefix, dirname(binary)))\n if path.startswith('/'):\n return abspath(path)\n return 'not found'\n\n\ndef test_installable(channel='defaults'):\n success = True\n log = logging.getLogger(__name__)\n has_py = re.compile(r'py(\\d)(\\d)')\n for platform in ['osx-64', 'linux-32', 'linux-64', 'win-32', 'win-64']:\n log.info(\"######## Testing platform %s ########\", platform)\n channels = [channel]\n index = get_index(channel_urls=channels, prepend=False, platform=platform)\n for _, rec in iteritems(index):\n # If we give channels at the command line, only look at\n # packages from those channels (not defaults).\n if channel != 'defaults' and rec.get('schannel', 'defaults') == 'defaults':\n continue\n name = 
rec['name']\n if name in {'conda', 'conda-build'}:\n # conda can only be installed in the root environment\n continue\n # Don't fail just because the package is a different version of Python\n # than the default. We should probably check depends rather than the\n # build string.\n build = rec['build']\n match = has_py.search(build)\n assert match if 'py' in build else True, build\n if match:\n additional_packages = ['python=%s.%s' % (match.group(1), match.group(2))]\n else:\n additional_packages = []\n\n version = rec['version']\n log.info('Testing %s=%s', name, version)\n\n try:\n install_steps = check_install([name + '=' + version] + additional_packages,\n channel_urls=channels, prepend=False,\n platform=platform)\n success &= bool(install_steps)\n except KeyboardInterrupt:\n raise\n # sys.exit raises an exception that doesn't subclass from Exception\n except BaseException as e:\n success = False\n log.error(\"FAIL: %s %s on %s with %s (%s)\", name, version,\n platform, additional_packages, e)\n return success\n\n\ndef _installed(prefix):\n installed = linked_data(prefix)\n installed = {rec['name']: dist for dist, rec in iteritems(installed)}\n return installed\n\n\ndef _underlined_text(text):\n return str(text) + '\\n' + '-' * len(str(text)) + '\\n\\n'\n\n\ndef inspect_linkages(packages, prefix=sys.prefix, untracked=False,\n all_packages=False, show_files=False, groupby=\"package\"):\n pkgmap = {}\n\n installed = _installed(prefix)\n\n if not packages and not untracked and not all_packages:\n raise ValueError(\"At least one package or --untracked or --all must be provided\")\n\n if all_packages:\n packages = sorted(installed.keys())\n\n if untracked:\n packages.append(untracked_package)\n\n for pkg in packages:\n if pkg == untracked_package:\n dist = untracked_package\n elif pkg not in installed:\n sys.exit(\"Package %s is not installed in %s\" % (pkg, prefix))\n else:\n dist = installed[pkg]\n\n if not sys.platform.startswith(('linux', 'darwin')):\n sys.exit(\"Error: conda inspect linkages is only implemented in Linux and OS X\")\n\n if dist == untracked_package:\n obj_files = get_untracked_obj_files(prefix)\n else:\n obj_files = get_package_obj_files(dist, prefix)\n linkages = get_linkages(obj_files, prefix)\n depmap = defaultdict(list)\n pkgmap[pkg] = depmap\n depmap['not found'] = []\n depmap['system'] = []\n for binary in linkages:\n for lib, path in linkages[binary]:\n path = replace_path(binary, path, prefix) if path not in {'',\n 'not found'} else path\n if path.startswith(prefix):\n deps = list(which_package(path))\n if len(deps) > 1:\n logging.getLogger(__name__).warn(\"Warning: %s comes from multiple \"\n \"packages: %s\", path, comma_join(deps))\n if not deps:\n if exists(path):\n depmap['untracked'].append((lib, path.split(prefix +\n '/', 1)[-1], binary))\n else:\n depmap['not found'].append((lib, path.split(prefix +\n '/', 1)[-1], binary))\n for d in deps:\n depmap[d].append((lib, path.split(prefix + '/',\n 1)[-1], binary))\n elif path == 'not found':\n depmap['not found'].append((lib, path, binary))\n else:\n depmap['system'].append((lib, path, binary))\n\n output_string = \"\"\n if groupby == 'package':\n for pkg in packages:\n output_string += _underlined_text(pkg)\n output_string += print_linkages(pkgmap[pkg], show_files=show_files)\n\n elif groupby == 'dependency':\n # {pkg: {dep: [files]}} -> {dep: {pkg: [files]}}\n inverted_map = defaultdict(lambda: defaultdict(list))\n for pkg in pkgmap:\n for dep in pkgmap[pkg]:\n if pkgmap[pkg][dep]:\n inverted_map[dep][pkg] = 
pkgmap[pkg][dep]\n\n # print system and not found last\n k = sorted(set(inverted_map.keys()) - {'system', 'not found'})\n for dep in k + ['system', 'not found']:\n output_string += _underlined_text(dep)\n output_string += print_linkages(inverted_map[dep], show_files=show_files)\n\n else:\n raise ValueError(\"Unrecognized groupby: %s\" % groupby)\n if hasattr(output_string, 'decode'):\n output_string = output_string.decode('utf-8')\n return output_string\n\n\ndef inspect_objects(packages, prefix=sys.prefix, groupby='package'):\n installed = _installed(prefix)\n\n output_string = \"\"\n for pkg in packages:\n if pkg == untracked_package:\n dist = untracked_package\n elif pkg not in installed:\n raise ValueError(\"Package %s is not installed in %s\" % (pkg, prefix))\n else:\n dist = installed[pkg]\n\n output_string += _underlined_text(pkg)\n\n if not sys.platform.startswith('darwin'):\n sys.exit(\"Error: conda inspect objects is only implemented in OS X\")\n\n if dist == untracked_package:\n obj_files = get_untracked_obj_files(prefix)\n else:\n obj_files = get_package_obj_files(dist, prefix)\n\n info = []\n for f in obj_files:\n f_info = {}\n path = join(prefix, f)\n f_info['filetype'] = human_filetype(path)\n f_info['rpath'] = ':'.join(get_rpaths(path))\n f_info['filename'] = f\n info.append(f_info)\n\n output_string += print_object_info(info, groupby)\n if hasattr(output_string, 'decode'):\n output_string = output_string.decode('utf-8')\n return output_string\n", "path": "conda_build/inspect.py"}]} | 3,973 | 120 |
gh_patches_debug_3306 | rasdani/github-patches | git_diff | kivy__python-for-android-1815 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Packaged Python is built with IPv6 disabled
Is there any reason for IPv6 being disabled? The flag is set here:
https://github.com/kivy/python-for-android/blob/f16a66590c92bb57c69b214b2b69dfaa6dc1e0ed/pythonforandroid/recipes/python3/__init__.py#L37
--- END ISSUE ---
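A quick way to confirm whether a given interpreter was built with IPv6 support, independent of the build flags, is the standard-library flag below (a runtime check, not part of the recipe):
```python
import socket

# True only when CPython was configured with IPv6 support,
# i.e. built without --disable-ipv6.
print(socket.has_ipv6)
```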
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/python3/__init__.py`
Content:
```
1 import sh
2 from pythonforandroid.python import GuestPythonRecipe
3 from pythonforandroid.recipe import Recipe
4
5
6 class Python3Recipe(GuestPythonRecipe):
7 '''
8 The python3's recipe.
9
10 .. note:: This recipe can be built only against API 21+. Also, in order to
11 build certain python modules, we need to add some extra recipes to our
12 build requirements:
13
14 - ctypes: you must add the recipe for ``libffi``.
15
16 .. versionchanged:: 0.6.0
17 Refactored into class
18 :class:`~pythonforandroid.python.GuestPythonRecipe`
19 '''
20
21 version = '3.7.1'
22 url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'
23 name = 'python3'
24
25 patches = ["patches/fix-ctypes-util-find-library.patch"]
26
27 if sh.which('lld') is not None:
28 patches = patches + ["patches/remove-fix-cortex-a8.patch"]
29
30 depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']
31 conflicts = ['python3crystax', 'python2', 'python2legacy']
32
33 configure_args = (
34 '--host={android_host}',
35 '--build={android_build}',
36 '--enable-shared',
37 '--disable-ipv6',
38 'ac_cv_file__dev_ptmx=yes',
39 'ac_cv_file__dev_ptc=no',
40 '--without-ensurepip',
41 'ac_cv_little_endian_double=yes',
42 '--prefix={prefix}',
43 '--exec-prefix={exec_prefix}')
44
45 def set_libs_flags(self, env, arch):
46 env = super(Python3Recipe, self).set_libs_flags(env, arch)
47 if 'openssl' in self.ctx.recipe_build_order:
48 recipe = Recipe.get_recipe('openssl', self.ctx)
49 self.configure_args += \
50 ('--with-openssl=' + recipe.get_build_dir(arch.arch),)
51 return env
52
53
54 recipe = Python3Recipe()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/python3/__init__.py b/pythonforandroid/recipes/python3/__init__.py
--- a/pythonforandroid/recipes/python3/__init__.py
+++ b/pythonforandroid/recipes/python3/__init__.py
@@ -34,7 +34,7 @@
'--host={android_host}',
'--build={android_build}',
'--enable-shared',
- '--disable-ipv6',
+ '--enable-ipv6',
'ac_cv_file__dev_ptmx=yes',
'ac_cv_file__dev_ptc=no',
'--without-ensurepip',
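After the patch, the recipe's configure tuple would read roughly as follows (reconstructed from the file above; only the IPv6 flag changes):
```python
configure_args = (
    '--host={android_host}',
    '--build={android_build}',
    '--enable-shared',
    '--enable-ipv6',          # was '--disable-ipv6'
    'ac_cv_file__dev_ptmx=yes',
    'ac_cv_file__dev_ptc=no',
    '--without-ensurepip',
    'ac_cv_little_endian_double=yes',
    '--prefix={prefix}',
    '--exec-prefix={exec_prefix}')
```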
| {"golden_diff": "diff --git a/pythonforandroid/recipes/python3/__init__.py b/pythonforandroid/recipes/python3/__init__.py\n--- a/pythonforandroid/recipes/python3/__init__.py\n+++ b/pythonforandroid/recipes/python3/__init__.py\n@@ -34,7 +34,7 @@\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n- '--disable-ipv6',\n+ '--enable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n", "issue": "packaged python is built with IPv6 disabled\nIs there any reason for IPv6 being disabled?\r\nhttps://github.com/kivy/python-for-android/blob/f16a66590c92bb57c69b214b2b69dfaa6dc1e0ed/pythonforandroid/recipes/python3/__init__.py#L37\n", "before_files": [{"content": "import sh\nfrom pythonforandroid.python import GuestPythonRecipe\nfrom pythonforandroid.recipe import Recipe\n\n\nclass Python3Recipe(GuestPythonRecipe):\n '''\n The python3's recipe.\n\n .. note:: This recipe can be built only against API 21+. Also, in order to\n build certain python modules, we need to add some extra recipes to our\n build requirements:\n\n - ctypes: you must add the recipe for ``libffi``.\n\n .. versionchanged:: 0.6.0\n Refactored into class\n :class:`~pythonforandroid.python.GuestPythonRecipe`\n '''\n\n version = '3.7.1'\n url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'\n name = 'python3'\n\n patches = [\"patches/fix-ctypes-util-find-library.patch\"]\n\n if sh.which('lld') is not None:\n patches = patches + [\"patches/remove-fix-cortex-a8.patch\"]\n\n depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']\n conflicts = ['python3crystax', 'python2', 'python2legacy']\n\n configure_args = (\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n '--disable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n 'ac_cv_little_endian_double=yes',\n '--prefix={prefix}',\n '--exec-prefix={exec_prefix}')\n\n def set_libs_flags(self, env, arch):\n env = super(Python3Recipe, self).set_libs_flags(env, arch)\n if 'openssl' in self.ctx.recipe_build_order:\n recipe = Recipe.get_recipe('openssl', self.ctx)\n self.configure_args += \\\n ('--with-openssl=' + recipe.get_build_dir(arch.arch),)\n return env\n\n\nrecipe = Python3Recipe()\n", "path": "pythonforandroid/recipes/python3/__init__.py"}], "after_files": [{"content": "import sh\nfrom pythonforandroid.python import GuestPythonRecipe\nfrom pythonforandroid.recipe import Recipe\n\n\nclass Python3Recipe(GuestPythonRecipe):\n '''\n The python3's recipe.\n\n .. note:: This recipe can be built only against API 21+. Also, in order to\n build certain python modules, we need to add some extra recipes to our\n build requirements:\n\n - ctypes: you must add the recipe for ``libffi``.\n\n .. 
versionchanged:: 0.6.0\n Refactored into class\n :class:`~pythonforandroid.python.GuestPythonRecipe`\n '''\n\n version = '3.7.1'\n url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'\n name = 'python3'\n\n patches = [\"patches/fix-ctypes-util-find-library.patch\"]\n\n if sh.which('lld') is not None:\n patches = patches + [\"patches/remove-fix-cortex-a8.patch\"]\n\n depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']\n conflicts = ['python3crystax', 'python2', 'python2legacy']\n\n configure_args = (\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n '--enable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n 'ac_cv_little_endian_double=yes',\n '--prefix={prefix}',\n '--exec-prefix={exec_prefix}')\n\n def set_libs_flags(self, env, arch):\n env = super(Python3Recipe, self).set_libs_flags(env, arch)\n if 'openssl' in self.ctx.recipe_build_order:\n recipe = Recipe.get_recipe('openssl', self.ctx)\n self.configure_args += \\\n ('--with-openssl=' + recipe.get_build_dir(arch.arch),)\n return env\n\n\nrecipe = Python3Recipe()\n", "path": "pythonforandroid/recipes/python3/__init__.py"}]} | 880 | 132 |
gh_patches_debug_1668 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5375 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Updated SubscriberClient docs
`subscribe_experimental` was promoted to `subscribe`, but the docs for the `SubscriberClient` still suggested using `subscribe_experimental`.
--- END ISSUE ---
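For context, the corrected docstring example would look roughly like this (a sketch mirroring the snippet in the file below, with `subscribe` in place of `subscribe_experimental`; exact import style may vary by library version):
```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    'my-project-id', 'my-subscription')

def callback(message):
    print(message)
    message.ack()

future = subscriber.subscribe(subscription_path, callback)
try:
    future.result()
except KeyboardInterrupt:
    future.cancel()
```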
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pubsub/google/cloud/pubsub_v1/subscriber/client.py`
Content:
```
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import pkg_resources
18 import os
19
20 import grpc
21
22 from google.api_core import grpc_helpers
23
24 from google.cloud.pubsub_v1 import _gapic
25 from google.cloud.pubsub_v1 import types
26 from google.cloud.pubsub_v1.gapic import subscriber_client
27 from google.cloud.pubsub_v1.subscriber import futures
28 from google.cloud.pubsub_v1.subscriber._protocol import streaming_pull_manager
29
30
31 __version__ = pkg_resources.get_distribution('google-cloud-pubsub').version
32
33
34 @_gapic.add_methods(subscriber_client.SubscriberClient,
35 blacklist=('pull', 'streaming_pull'))
36 class Client(object):
37 """A subscriber client for Google Cloud Pub/Sub.
38
39 This creates an object that is capable of subscribing to messages.
40 Generally, you can instantiate this client with no arguments, and you
41 get sensible defaults.
42
43 Args:
44 kwargs (dict): Any additional arguments provided are sent as keyword
45 keyword arguments to the underlying
46 :class:`~.gapic.pubsub.v1.subscriber_client.SubscriberClient`.
47 Generally, you should not need to set additional keyword
48 arguments.
49 """
50 def __init__(self, **kwargs):
51 # Sanity check: Is our goal to use the emulator?
52 # If so, create a grpc insecure channel with the emulator host
53 # as the target.
54 if os.environ.get('PUBSUB_EMULATOR_HOST'):
55 kwargs['channel'] = grpc.insecure_channel(
56 target=os.environ.get('PUBSUB_EMULATOR_HOST'),
57 )
58
59 # Use a custom channel.
60 # We need this in order to set appropriate default message size and
61 # keepalive options.
62 if 'channel' not in kwargs:
63 kwargs['channel'] = grpc_helpers.create_channel(
64 credentials=kwargs.pop('credentials', None),
65 target=self.target,
66 scopes=subscriber_client.SubscriberClient._DEFAULT_SCOPES,
67 options={
68 'grpc.max_send_message_length': -1,
69 'grpc.max_receive_message_length': -1,
70 'grpc.keepalive_time_ms': 30000,
71 }.items(),
72 )
73
74 # Add the metrics headers, and instantiate the underlying GAPIC
75 # client.
76 self._api = subscriber_client.SubscriberClient(**kwargs)
77
78 @property
79 def target(self):
80 """Return the target (where the API is).
81
82 Returns:
83 str: The location of the API.
84 """
85 return subscriber_client.SubscriberClient.SERVICE_ADDRESS
86
87 @property
88 def api(self):
89 """The underlying gapic API client."""
90 return self._api
91
92 def subscribe(
93 self, subscription, callback, flow_control=(),
94 scheduler=None):
95 """Asynchronously start receiving messages on a given subscription.
96
97 This method starts a background thread to begin pulling messages from
98 a Pub/Sub subscription and scheduling them to be processed using the
99 provided ``callback``.
100
101 The ``callback`` will be called with an individual
102 :class:`google.cloud.pubsub_v1.subscriber.message.Message`. It is the
103 responsibility of the callback to either call ``ack()`` or ``nack()``
104 on the message when it finished processing. If an exception occurs in
105 the callback during processing, the exception is logged and the message
106 is ``nack()`` ed.
107
108 The ``flow_control`` argument can be used to control the rate of at
109 which messages are pulled. The settings are relatively conservative by
110 default to prevent "message hoarding" - a situation where the client
111 pulls a large number of messages but can not process them fast enough
112 leading it to "starve" other clients of messages. Increasing these
113 settings may lead to faster throughput for messages that do not take
114 a long time to process.
115
116 This method starts the receiver in the background and returns a
117 *Future* representing its execution. Waiting on the future (calling
118 ``result()``) will block forever or until a non-recoverable error
119 is encountered (such as loss of network connectivity). Cancelling the
120 future will signal the process to shutdown gracefully and exit.
121
122 Example
123
124 .. code-block:: python
125
126 from google.cloud.pubsub_v1 import subscriber
127
128 subscriber_client = pubsub.SubscriberClient()
129
130 # existing subscription
131 subscription = subscriber_client.subscription_path(
132 'my-project-id', 'my-subscription')
133
134 def callback(message):
135 print(message)
136 message.ack()
137
138 future = subscriber.subscribe_experimental(
139 subscription, callback)
140
141 try:
142 future.result()
143 except KeyboardInterrupt:
144 future.cancel()
145
146 Args:
147 subscription (str): The name of the subscription. The
148 subscription should have already been created (for example,
149 by using :meth:`create_subscription`).
150 callback (Callable[~.pubsub_v1.subscriber.message.Message]):
151 The callback function. This function receives the message as
152 its only argument and will be called from a different thread/
153 process depending on the scheduling strategy.
154 flow_control (~.pubsub_v1.types.FlowControl): The flow control
155 settings. Use this to prevent situations where you are
156 inundated with too many messages at once.
157 scheduler (~.pubsub_v1.subscriber.scheduler.Scheduler): An optional
158 *scheduler* to use when executing the callback. This controls
159 how callbacks are executed concurrently.
160
161 Returns:
162 google.cloud.pubsub_v1.futures.StreamingPullFuture: A Future object
163 that can be used to manage the background stream.
164 """
165 flow_control = types.FlowControl(*flow_control)
166
167 manager = streaming_pull_manager.StreamingPullManager(
168 self, subscription, flow_control=flow_control, scheduler=scheduler)
169
170 future = futures.StreamingPullFuture(manager)
171
172 manager.open(callback)
173
174 return future
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pubsub/google/cloud/pubsub_v1/subscriber/client.py b/pubsub/google/cloud/pubsub_v1/subscriber/client.py
--- a/pubsub/google/cloud/pubsub_v1/subscriber/client.py
+++ b/pubsub/google/cloud/pubsub_v1/subscriber/client.py
@@ -135,7 +135,7 @@
print(message)
message.ack()
- future = subscriber.subscribe_experimental(
+ future = subscriber.subscribe(
subscription, callback)
try:
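With the docstring updated, the documented entry point matches the promoted API. A minimal end-to-end sketch of that flow (project, subscription, and credentials are placeholders, not values from the issue):

```python
from google.cloud import pubsub_v1

subscriber_client = pubsub_v1.SubscriberClient()
subscription = subscriber_client.subscription_path("my-project-id", "my-subscription")

def callback(message):
    print(message)
    message.ack()

# `subscribe` (formerly `subscribe_experimental`) returns a StreamingPullFuture;
# waiting on it blocks until the stream is cancelled or hits a fatal error.
future = subscriber_client.subscribe(subscription, callback)
try:
    future.result()
except KeyboardInterrupt:
    future.cancel()
```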
| {"golden_diff": "diff --git a/pubsub/google/cloud/pubsub_v1/subscriber/client.py b/pubsub/google/cloud/pubsub_v1/subscriber/client.py\n--- a/pubsub/google/cloud/pubsub_v1/subscriber/client.py\n+++ b/pubsub/google/cloud/pubsub_v1/subscriber/client.py\n@@ -135,7 +135,7 @@\n print(message)\n message.ack()\n \n- future = subscriber.subscribe_experimental(\n+ future = subscriber.subscribe(\n subscription, callback)\n \n try:\n", "issue": "Updated SubscriberClient docs\n`subscribe_experimental` was promoted to `subscribe` but the docs for the `SubscriberClient` still suggested using `subscribe_experimental`\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport pkg_resources\nimport os\n\nimport grpc\n\nfrom google.api_core import grpc_helpers\n\nfrom google.cloud.pubsub_v1 import _gapic\nfrom google.cloud.pubsub_v1 import types\nfrom google.cloud.pubsub_v1.gapic import subscriber_client\nfrom google.cloud.pubsub_v1.subscriber import futures\nfrom google.cloud.pubsub_v1.subscriber._protocol import streaming_pull_manager\n\n\n__version__ = pkg_resources.get_distribution('google-cloud-pubsub').version\n\n\n@_gapic.add_methods(subscriber_client.SubscriberClient,\n blacklist=('pull', 'streaming_pull'))\nclass Client(object):\n \"\"\"A subscriber client for Google Cloud Pub/Sub.\n\n This creates an object that is capable of subscribing to messages.\n Generally, you can instantiate this client with no arguments, and you\n get sensible defaults.\n\n Args:\n kwargs (dict): Any additional arguments provided are sent as keyword\n keyword arguments to the underlying\n :class:`~.gapic.pubsub.v1.subscriber_client.SubscriberClient`.\n Generally, you should not need to set additional keyword\n arguments.\n \"\"\"\n def __init__(self, **kwargs):\n # Sanity check: Is our goal to use the emulator?\n # If so, create a grpc insecure channel with the emulator host\n # as the target.\n if os.environ.get('PUBSUB_EMULATOR_HOST'):\n kwargs['channel'] = grpc.insecure_channel(\n target=os.environ.get('PUBSUB_EMULATOR_HOST'),\n )\n\n # Use a custom channel.\n # We need this in order to set appropriate default message size and\n # keepalive options.\n if 'channel' not in kwargs:\n kwargs['channel'] = grpc_helpers.create_channel(\n credentials=kwargs.pop('credentials', None),\n target=self.target,\n scopes=subscriber_client.SubscriberClient._DEFAULT_SCOPES,\n options={\n 'grpc.max_send_message_length': -1,\n 'grpc.max_receive_message_length': -1,\n 'grpc.keepalive_time_ms': 30000,\n }.items(),\n )\n\n # Add the metrics headers, and instantiate the underlying GAPIC\n # client.\n self._api = subscriber_client.SubscriberClient(**kwargs)\n\n @property\n def target(self):\n \"\"\"Return the target (where the API is).\n\n Returns:\n str: The location of the API.\n \"\"\"\n return subscriber_client.SubscriberClient.SERVICE_ADDRESS\n\n @property\n def api(self):\n \"\"\"The underlying gapic API 
client.\"\"\"\n return self._api\n\n def subscribe(\n self, subscription, callback, flow_control=(),\n scheduler=None):\n \"\"\"Asynchronously start receiving messages on a given subscription.\n\n This method starts a background thread to begin pulling messages from\n a Pub/Sub subscription and scheduling them to be processed using the\n provided ``callback``.\n\n The ``callback`` will be called with an individual\n :class:`google.cloud.pubsub_v1.subscriber.message.Message`. It is the\n responsibility of the callback to either call ``ack()`` or ``nack()``\n on the message when it finished processing. If an exception occurs in\n the callback during processing, the exception is logged and the message\n is ``nack()`` ed.\n\n The ``flow_control`` argument can be used to control the rate of at\n which messages are pulled. The settings are relatively conservative by\n default to prevent \"message hoarding\" - a situation where the client\n pulls a large number of messages but can not process them fast enough\n leading it to \"starve\" other clients of messages. Increasing these\n settings may lead to faster throughput for messages that do not take\n a long time to process.\n\n This method starts the receiver in the background and returns a\n *Future* representing its execution. Waiting on the future (calling\n ``result()``) will block forever or until a non-recoverable error\n is encountered (such as loss of network connectivity). Cancelling the\n future will signal the process to shutdown gracefully and exit.\n\n Example\n\n .. code-block:: python\n\n from google.cloud.pubsub_v1 import subscriber\n\n subscriber_client = pubsub.SubscriberClient()\n\n # existing subscription\n subscription = subscriber_client.subscription_path(\n 'my-project-id', 'my-subscription')\n\n def callback(message):\n print(message)\n message.ack()\n\n future = subscriber.subscribe_experimental(\n subscription, callback)\n\n try:\n future.result()\n except KeyboardInterrupt:\n future.cancel()\n\n Args:\n subscription (str): The name of the subscription. The\n subscription should have already been created (for example,\n by using :meth:`create_subscription`).\n callback (Callable[~.pubsub_v1.subscriber.message.Message]):\n The callback function. This function receives the message as\n its only argument and will be called from a different thread/\n process depending on the scheduling strategy.\n flow_control (~.pubsub_v1.types.FlowControl): The flow control\n settings. Use this to prevent situations where you are\n inundated with too many messages at once.\n scheduler (~.pubsub_v1.subscriber.scheduler.Scheduler): An optional\n *scheduler* to use when executing the callback. 
This controls\n how callbacks are executed concurrently.\n\n Returns:\n google.cloud.pubsub_v1.futures.StreamingPullFuture: A Future object\n that can be used to manage the background stream.\n \"\"\"\n flow_control = types.FlowControl(*flow_control)\n\n manager = streaming_pull_manager.StreamingPullManager(\n self, subscription, flow_control=flow_control, scheduler=scheduler)\n\n future = futures.StreamingPullFuture(manager)\n\n manager.open(callback)\n\n return future\n", "path": "pubsub/google/cloud/pubsub_v1/subscriber/client.py"}], "after_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport pkg_resources\nimport os\n\nimport grpc\n\nfrom google.api_core import grpc_helpers\n\nfrom google.cloud.pubsub_v1 import _gapic\nfrom google.cloud.pubsub_v1 import types\nfrom google.cloud.pubsub_v1.gapic import subscriber_client\nfrom google.cloud.pubsub_v1.subscriber import futures\nfrom google.cloud.pubsub_v1.subscriber._protocol import streaming_pull_manager\n\n\n__version__ = pkg_resources.get_distribution('google-cloud-pubsub').version\n\n\n@_gapic.add_methods(subscriber_client.SubscriberClient,\n blacklist=('pull', 'streaming_pull'))\nclass Client(object):\n \"\"\"A subscriber client for Google Cloud Pub/Sub.\n\n This creates an object that is capable of subscribing to messages.\n Generally, you can instantiate this client with no arguments, and you\n get sensible defaults.\n\n Args:\n kwargs (dict): Any additional arguments provided are sent as keyword\n keyword arguments to the underlying\n :class:`~.gapic.pubsub.v1.subscriber_client.SubscriberClient`.\n Generally, you should not need to set additional keyword\n arguments.\n \"\"\"\n def __init__(self, **kwargs):\n # Sanity check: Is our goal to use the emulator?\n # If so, create a grpc insecure channel with the emulator host\n # as the target.\n if os.environ.get('PUBSUB_EMULATOR_HOST'):\n kwargs['channel'] = grpc.insecure_channel(\n target=os.environ.get('PUBSUB_EMULATOR_HOST'),\n )\n\n # Use a custom channel.\n # We need this in order to set appropriate default message size and\n # keepalive options.\n if 'channel' not in kwargs:\n kwargs['channel'] = grpc_helpers.create_channel(\n credentials=kwargs.pop('credentials', None),\n target=self.target,\n scopes=subscriber_client.SubscriberClient._DEFAULT_SCOPES,\n options={\n 'grpc.max_send_message_length': -1,\n 'grpc.max_receive_message_length': -1,\n 'grpc.keepalive_time_ms': 30000,\n }.items(),\n )\n\n # Add the metrics headers, and instantiate the underlying GAPIC\n # client.\n self._api = subscriber_client.SubscriberClient(**kwargs)\n\n @property\n def target(self):\n \"\"\"Return the target (where the API is).\n\n Returns:\n str: The location of the API.\n \"\"\"\n return subscriber_client.SubscriberClient.SERVICE_ADDRESS\n\n @property\n def api(self):\n \"\"\"The underlying gapic API client.\"\"\"\n return self._api\n\n def subscribe(\n self, subscription, 
callback, flow_control=(),\n scheduler=None):\n \"\"\"Asynchronously start receiving messages on a given subscription.\n\n This method starts a background thread to begin pulling messages from\n a Pub/Sub subscription and scheduling them to be processed using the\n provided ``callback``.\n\n The ``callback`` will be called with an individual\n :class:`google.cloud.pubsub_v1.subscriber.message.Message`. It is the\n responsibility of the callback to either call ``ack()`` or ``nack()``\n on the message when it finished processing. If an exception occurs in\n the callback during processing, the exception is logged and the message\n is ``nack()`` ed.\n\n The ``flow_control`` argument can be used to control the rate of at\n which messages are pulled. The settings are relatively conservative by\n default to prevent \"message hoarding\" - a situation where the client\n pulls a large number of messages but can not process them fast enough\n leading it to \"starve\" other clients of messages. Increasing these\n settings may lead to faster throughput for messages that do not take\n a long time to process.\n\n This method starts the receiver in the background and returns a\n *Future* representing its execution. Waiting on the future (calling\n ``result()``) will block forever or until a non-recoverable error\n is encountered (such as loss of network connectivity). Cancelling the\n future will signal the process to shutdown gracefully and exit.\n\n Example\n\n .. code-block:: python\n\n from google.cloud.pubsub_v1 import subscriber\n\n subscriber_client = pubsub.SubscriberClient()\n\n # existing subscription\n subscription = subscriber_client.subscription_path(\n 'my-project-id', 'my-subscription')\n\n def callback(message):\n print(message)\n message.ack()\n\n future = subscriber.subscribe(\n subscription, callback)\n\n try:\n future.result()\n except KeyboardInterrupt:\n future.cancel()\n\n Args:\n subscription (str): The name of the subscription. The\n subscription should have already been created (for example,\n by using :meth:`create_subscription`).\n callback (Callable[~.pubsub_v1.subscriber.message.Message]):\n The callback function. This function receives the message as\n its only argument and will be called from a different thread/\n process depending on the scheduling strategy.\n flow_control (~.pubsub_v1.types.FlowControl): The flow control\n settings. Use this to prevent situations where you are\n inundated with too many messages at once.\n scheduler (~.pubsub_v1.subscriber.scheduler.Scheduler): An optional\n *scheduler* to use when executing the callback. This controls\n how callbacks are executed concurrently.\n\n Returns:\n google.cloud.pubsub_v1.futures.StreamingPullFuture: A Future object\n that can be used to manage the background stream.\n \"\"\"\n flow_control = types.FlowControl(*flow_control)\n\n manager = streaming_pull_manager.StreamingPullManager(\n self, subscription, flow_control=flow_control, scheduler=scheduler)\n\n future = futures.StreamingPullFuture(manager)\n\n manager.open(callback)\n\n return future\n", "path": "pubsub/google/cloud/pubsub_v1/subscriber/client.py"}]} | 2,099 | 109 |
gh_patches_debug_39449 | rasdani/github-patches | git_diff | pantsbuild__pants-15590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecated option hints aren't tabbed in `help` goal(s)
**Describe the bug**
Deprecated option hints don't appear to be tabbed
When running `./pants help-advanced`
```
Global deprecated options
-------------------------
--[no-]use-deprecated-python-macros
PANTS_USE_DEPRECATED_PYTHON_MACROS
use_deprecated_python_macros
default: False
current value: False
If true, use Pants's deprecated macro system for `python_requirements`, `poetry_requirements`, and `pipenv_requirements` rather than target generation.
The address for macros is different. Rather than `3rdparty/python#Django`, the address will look like `3rdparty/python:Django`. The macro (`python_requirements` et
al) also was not a proper target, meaning that you could not give it a `name`. In contrast, if the target generator sets its `name`, e.g. to `reqs`, generated
targets will have an address like `3rdparty/python:reqs#Django`.
Deprecated, will be removed in version: 2.12.0.dev0.
In Pants 2.12, the deprecated Python macros like `python_requirements` will be replaced with improved target generators, which are now enabled by default.
If you already migrated by setting `use_deprecated_python_macros = false`, simply delete the option.
Otherwise, when you are ready to upgrade, follow these steps:
1. Run `./pants update-build-files --fix-python-macros`
2. Check the logs for an ERROR log to see if you have to manually add `name=` anywhere.
3. Remove `use_deprecated_python_macros = true` from `[GLOBAL]` in pants.toml.
(Why upgrade from the old macro mechanism to target generation? Among other benefits, it makes sure that the Pants daemon is properly invalidated when you change `requirements.txt` and `pyproject.toml`.)
```
**Pants version**
`main`
**OS**
Linux
**Additional info**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/help/help_formatter.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import json
5 import textwrap
6 from enum import Enum
7 from typing import List, Optional
8
9 from pants.help.help_info_extracter import OptionHelpInfo, OptionScopeHelpInfo, to_help_str
10 from pants.help.maybe_color import MaybeColor
11 from pants.option.ranked_value import Rank, RankedValue
12 from pants.util.docutil import bin_name, terminal_width
13 from pants.util.strutil import hard_wrap
14
15
16 class HelpFormatter(MaybeColor):
17 def __init__(self, *, show_advanced: bool, show_deprecated: bool, color: bool) -> None:
18 super().__init__(color=color)
19 self._show_advanced = show_advanced
20 self._show_deprecated = show_deprecated
21 self._width = terminal_width()
22
23 def format_options(self, oshi: OptionScopeHelpInfo):
24 """Return a help message for the specified options."""
25 lines = []
26
27 def add_option(ohis, *, category=None):
28 lines.append("")
29 goal_or_subsystem = "goal" if oshi.is_goal else "subsystem"
30 display_scope = f"`{oshi.scope}` {goal_or_subsystem}" if oshi.scope else "Global"
31 if category:
32 title = f"{display_scope} {category} options"
33 lines.append(self.maybe_green(f"{title}\n{'-' * len(title)}"))
34 else:
35 # The basic options section gets the description and options scope info.
36 # No need to repeat those in the advanced section.
37 title = f"{display_scope} options"
38 lines.append(self.maybe_green(f"{title}\n{'-' * len(title)}\n"))
39 lines.extend(hard_wrap(oshi.description, width=self._width))
40 lines.append(" ")
41 lines.append(f"Activated by {self.maybe_magenta(oshi.provider)}")
42 config_section = f"[{oshi.scope or 'GLOBAL'}]"
43 lines.append(f"Config section: {self.maybe_magenta(config_section)}")
44 lines.append(" ")
45 if not ohis:
46 lines.append("None available.")
47 return
48 for ohi in ohis:
49 lines.extend([*self.format_option(ohi), ""])
50
51 add_option(oshi.basic)
52 if self._show_advanced:
53 add_option(oshi.advanced, category="advanced")
54 if self._show_deprecated:
55 add_option(oshi.deprecated, category="deprecated")
56 if oshi.advanced and not self._show_advanced:
57 lines.append(
58 self.maybe_green(
59 f"Advanced options available. You can list them by running "
60 f"{bin_name()} help-advanced {oshi.scope}."
61 )
62 )
63 return [*lines, ""]
64
65 def format_option(self, ohi: OptionHelpInfo) -> List[str]:
66 """Format the help output for a single option.
67
68 :param ohi: Extracted information for option to print
69 :return: Formatted help text for this option
70 """
71
72 def maybe_parens(s: Optional[str]) -> str:
73 return f" ({s})" if s else ""
74
75 def format_value(ranked_val: RankedValue, prefix: str, left_padding: str) -> List[str]:
76 if isinstance(ranked_val.value, (list, dict)):
77 is_enum_list = (
78 isinstance(ranked_val.value, list)
79 and len(ranked_val.value) > 0
80 and isinstance(ranked_val.value[0], Enum)
81 )
82 normalized_val = (
83 [enum_elmt.value for enum_elmt in ranked_val.value]
84 if is_enum_list
85 else ranked_val.value
86 )
87 val_lines = json.dumps(normalized_val, sort_keys=True, indent=4).split("\n")
88 else:
89 val_lines = [to_help_str(ranked_val.value)]
90 val_lines[0] = f"{prefix}{val_lines[0]}"
91 val_lines[-1] = f"{val_lines[-1]}{maybe_parens(ranked_val.details)}"
92 val_lines = [self.maybe_cyan(f"{left_padding}{line}") for line in val_lines]
93 return val_lines
94
95 indent = " "
96 arg_lines = [f" {self.maybe_magenta(args)}" for args in ohi.display_args]
97 arg_lines.append(self.maybe_magenta(f" {ohi.env_var}"))
98 arg_lines.append(self.maybe_magenta(f" {ohi.config_key}"))
99 choices = "" if ohi.choices is None else f"one of: [{', '.join(ohi.choices)}]"
100 choices_lines = [
101 f"{indent}{' ' if i != 0 else ''}{self.maybe_cyan(s)}"
102 for i, s in enumerate(textwrap.wrap(f"{choices}", self._width))
103 ]
104 default_lines = format_value(RankedValue(Rank.HARDCODED, ohi.default), "default: ", indent)
105 if not ohi.value_history:
106 # Should never happen, but this keeps mypy happy.
107 raise ValueError("No value history - options not parsed.")
108 final_val = ohi.value_history.final_value
109 curr_value_lines = format_value(final_val, "current value: ", indent)
110
111 interesting_ranked_values = [
112 rv
113 for rv in reversed(ohi.value_history.ranked_values)
114 if rv.rank not in (Rank.NONE, Rank.HARDCODED, final_val.rank)
115 ]
116 value_derivation_lines = [
117 line
118 for rv in interesting_ranked_values
119 for line in format_value(rv, "overrode: ", f"{indent} ")
120 ]
121 description_lines = hard_wrap(ohi.help, indent=len(indent), width=self._width)
122 lines = [
123 *arg_lines,
124 *choices_lines,
125 *default_lines,
126 *curr_value_lines,
127 *value_derivation_lines,
128 *description_lines,
129 ]
130 if ohi.deprecated_message:
131 maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow
132 lines.append(maybe_colorize(f"{indent}{ohi.deprecated_message}"))
133 if ohi.removal_hint:
134 lines.append(maybe_colorize(f"{indent}{ohi.removal_hint}"))
135 return lines
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/help/help_formatter.py b/src/python/pants/help/help_formatter.py
--- a/src/python/pants/help/help_formatter.py
+++ b/src/python/pants/help/help_formatter.py
@@ -92,19 +92,33 @@
val_lines = [self.maybe_cyan(f"{left_padding}{line}") for line in val_lines]
return val_lines
+ def wrap(s: str) -> List[str]:
+ return hard_wrap(s, indent=len(indent), width=self._width)
+
indent = " "
+
arg_lines = [f" {self.maybe_magenta(args)}" for args in ohi.display_args]
arg_lines.append(self.maybe_magenta(f" {ohi.env_var}"))
arg_lines.append(self.maybe_magenta(f" {ohi.config_key}"))
+
choices = "" if ohi.choices is None else f"one of: [{', '.join(ohi.choices)}]"
choices_lines = [
f"{indent}{' ' if i != 0 else ''}{self.maybe_cyan(s)}"
for i, s in enumerate(textwrap.wrap(f"{choices}", self._width))
]
+
+ deprecated_lines = []
+ if ohi.deprecated_message:
+ maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow
+ deprecated_lines.extend(wrap(maybe_colorize(ohi.deprecated_message)))
+ if ohi.removal_hint:
+ deprecated_lines.extend(wrap(maybe_colorize(ohi.removal_hint)))
+
default_lines = format_value(RankedValue(Rank.HARDCODED, ohi.default), "default: ", indent)
if not ohi.value_history:
# Should never happen, but this keeps mypy happy.
raise ValueError("No value history - options not parsed.")
+
final_val = ohi.value_history.final_value
curr_value_lines = format_value(final_val, "current value: ", indent)
@@ -118,18 +132,14 @@
for rv in interesting_ranked_values
for line in format_value(rv, "overrode: ", f"{indent} ")
]
- description_lines = hard_wrap(ohi.help, indent=len(indent), width=self._width)
+ description_lines = wrap(ohi.help)
lines = [
*arg_lines,
*choices_lines,
*default_lines,
*curr_value_lines,
*value_derivation_lines,
+ *deprecated_lines,
*description_lines,
]
- if ohi.deprecated_message:
- maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow
- lines.append(maybe_colorize(f"{indent}{ohi.deprecated_message}"))
- if ohi.removal_hint:
- lines.append(maybe_colorize(f"{indent}{ohi.removal_hint}"))
return lines
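The key design choice in the patch above is to route the deprecation message and removal hint through the same wrapping helper as the option description, so every line of the hint picks up the two-space indent instead of only the first one. A rough stand-in for `hard_wrap` (the real implementation lives in `pants.util.strutil`; this sketch only assumes it returns a list of lines, each prefixed with `indent` spaces):

```python
import textwrap
from typing import List


def hard_wrap_approx(s: str, indent: int = 0, width: int = 96) -> List[str]:
    # Illustrative stand-in for pants.util.strutil.hard_wrap, not the real code:
    # wrap each paragraph to `width` and prefix every resulting line with `indent` spaces.
    pad = " " * indent
    out: List[str] = []
    for paragraph in s.splitlines():
        out.extend(pad + line for line in (textwrap.wrap(paragraph, width=width - indent) or [""]))
    return out


hint = (
    "In Pants 2.12, the deprecated Python macros like `python_requirements` will be "
    "replaced with improved target generators, which are now enabled by default."
)
# Every line of the hint now starts two spaces in, matching the rest of the option help.
print("\n".join(hard_wrap_approx(hint, indent=2, width=80)))
```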
| {"golden_diff": "diff --git a/src/python/pants/help/help_formatter.py b/src/python/pants/help/help_formatter.py\n--- a/src/python/pants/help/help_formatter.py\n+++ b/src/python/pants/help/help_formatter.py\n@@ -92,19 +92,33 @@\n val_lines = [self.maybe_cyan(f\"{left_padding}{line}\") for line in val_lines]\n return val_lines\n \n+ def wrap(s: str) -> List[str]:\n+ return hard_wrap(s, indent=len(indent), width=self._width)\n+\n indent = \" \"\n+\n arg_lines = [f\" {self.maybe_magenta(args)}\" for args in ohi.display_args]\n arg_lines.append(self.maybe_magenta(f\" {ohi.env_var}\"))\n arg_lines.append(self.maybe_magenta(f\" {ohi.config_key}\"))\n+\n choices = \"\" if ohi.choices is None else f\"one of: [{', '.join(ohi.choices)}]\"\n choices_lines = [\n f\"{indent}{' ' if i != 0 else ''}{self.maybe_cyan(s)}\"\n for i, s in enumerate(textwrap.wrap(f\"{choices}\", self._width))\n ]\n+\n+ deprecated_lines = []\n+ if ohi.deprecated_message:\n+ maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow\n+ deprecated_lines.extend(wrap(maybe_colorize(ohi.deprecated_message)))\n+ if ohi.removal_hint:\n+ deprecated_lines.extend(wrap(maybe_colorize(ohi.removal_hint)))\n+\n default_lines = format_value(RankedValue(Rank.HARDCODED, ohi.default), \"default: \", indent)\n if not ohi.value_history:\n # Should never happen, but this keeps mypy happy.\n raise ValueError(\"No value history - options not parsed.\")\n+\n final_val = ohi.value_history.final_value\n curr_value_lines = format_value(final_val, \"current value: \", indent)\n \n@@ -118,18 +132,14 @@\n for rv in interesting_ranked_values\n for line in format_value(rv, \"overrode: \", f\"{indent} \")\n ]\n- description_lines = hard_wrap(ohi.help, indent=len(indent), width=self._width)\n+ description_lines = wrap(ohi.help)\n lines = [\n *arg_lines,\n *choices_lines,\n *default_lines,\n *curr_value_lines,\n *value_derivation_lines,\n+ *deprecated_lines,\n *description_lines,\n ]\n- if ohi.deprecated_message:\n- maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow\n- lines.append(maybe_colorize(f\"{indent}{ohi.deprecated_message}\"))\n- if ohi.removal_hint:\n- lines.append(maybe_colorize(f\"{indent}{ohi.removal_hint}\"))\n return lines\n", "issue": "Deprecated option hints aren't tabbed in `help` goal(s)\n**Describe the bug**\r\nDeprecated option hints don't appear to be tabbed\r\n\r\nWhen running `./pants help-advanced`\r\n```\r\nGlobal deprecated options\r\n-------------------------\r\n \r\n --[no-]use-deprecated-python-macros\r\n PANTS_USE_DEPRECATED_PYTHON_MACROS\r\n use_deprecated_python_macros\r\n default: False\r\n current value: False\r\n If true, use Pants's deprecated macro system for `python_requirements`, `poetry_requirements`, and `pipenv_requirements` rather than target generation.\r\n \r\n The address for macros is different. Rather than `3rdparty/python#Django`, the address will look like `3rdparty/python:Django`. The macro (`python_requirements` et\r\n al) also was not a proper target, meaning that you could not give it a `name`. In contrast, if the target generator sets its `name`, e.g. 
to `reqs`, generated\r\n targets will have an address like `3rdparty/python:reqs#Django`.\r\n Deprecated, will be removed in version: 2.12.0.dev0.\r\n In Pants 2.12, the deprecated Python macros like `python_requirements` will be replaced with improved target generators, which are now enabled by default.\r\n\r\nIf you already migrated by setting `use_deprecated_python_macros = false`, simply delete the option.\r\n\r\nOtherwise, when you are ready to upgrade, follow these steps:\r\n\r\n 1. Run `./pants update-build-files --fix-python-macros`\r\n 2. Check the logs for an ERROR log to see if you have to manually add `name=` anywhere.\r\n 3. Remove `use_deprecated_python_macros = true` from `[GLOBAL]` in pants.toml.\r\n\r\n(Why upgrade from the old macro mechanism to target generation? Among other benefits, it makes sure that the Pants daemon is properly invalidated when you change `requirements.txt` and `pyproject.toml`.)\r\n```\r\n\r\n**Pants version**\r\n`main`\r\n\r\n**OS**\r\nLinux\r\n\r\n**Additional info**\r\nN/A\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport json\nimport textwrap\nfrom enum import Enum\nfrom typing import List, Optional\n\nfrom pants.help.help_info_extracter import OptionHelpInfo, OptionScopeHelpInfo, to_help_str\nfrom pants.help.maybe_color import MaybeColor\nfrom pants.option.ranked_value import Rank, RankedValue\nfrom pants.util.docutil import bin_name, terminal_width\nfrom pants.util.strutil import hard_wrap\n\n\nclass HelpFormatter(MaybeColor):\n def __init__(self, *, show_advanced: bool, show_deprecated: bool, color: bool) -> None:\n super().__init__(color=color)\n self._show_advanced = show_advanced\n self._show_deprecated = show_deprecated\n self._width = terminal_width()\n\n def format_options(self, oshi: OptionScopeHelpInfo):\n \"\"\"Return a help message for the specified options.\"\"\"\n lines = []\n\n def add_option(ohis, *, category=None):\n lines.append(\"\")\n goal_or_subsystem = \"goal\" if oshi.is_goal else \"subsystem\"\n display_scope = f\"`{oshi.scope}` {goal_or_subsystem}\" if oshi.scope else \"Global\"\n if category:\n title = f\"{display_scope} {category} options\"\n lines.append(self.maybe_green(f\"{title}\\n{'-' * len(title)}\"))\n else:\n # The basic options section gets the description and options scope info.\n # No need to repeat those in the advanced section.\n title = f\"{display_scope} options\"\n lines.append(self.maybe_green(f\"{title}\\n{'-' * len(title)}\\n\"))\n lines.extend(hard_wrap(oshi.description, width=self._width))\n lines.append(\" \")\n lines.append(f\"Activated by {self.maybe_magenta(oshi.provider)}\")\n config_section = f\"[{oshi.scope or 'GLOBAL'}]\"\n lines.append(f\"Config section: {self.maybe_magenta(config_section)}\")\n lines.append(\" \")\n if not ohis:\n lines.append(\"None available.\")\n return\n for ohi in ohis:\n lines.extend([*self.format_option(ohi), \"\"])\n\n add_option(oshi.basic)\n if self._show_advanced:\n add_option(oshi.advanced, category=\"advanced\")\n if self._show_deprecated:\n add_option(oshi.deprecated, category=\"deprecated\")\n if oshi.advanced and not self._show_advanced:\n lines.append(\n self.maybe_green(\n f\"Advanced options available. 
You can list them by running \"\n f\"{bin_name()} help-advanced {oshi.scope}.\"\n )\n )\n return [*lines, \"\"]\n\n def format_option(self, ohi: OptionHelpInfo) -> List[str]:\n \"\"\"Format the help output for a single option.\n\n :param ohi: Extracted information for option to print\n :return: Formatted help text for this option\n \"\"\"\n\n def maybe_parens(s: Optional[str]) -> str:\n return f\" ({s})\" if s else \"\"\n\n def format_value(ranked_val: RankedValue, prefix: str, left_padding: str) -> List[str]:\n if isinstance(ranked_val.value, (list, dict)):\n is_enum_list = (\n isinstance(ranked_val.value, list)\n and len(ranked_val.value) > 0\n and isinstance(ranked_val.value[0], Enum)\n )\n normalized_val = (\n [enum_elmt.value for enum_elmt in ranked_val.value]\n if is_enum_list\n else ranked_val.value\n )\n val_lines = json.dumps(normalized_val, sort_keys=True, indent=4).split(\"\\n\")\n else:\n val_lines = [to_help_str(ranked_val.value)]\n val_lines[0] = f\"{prefix}{val_lines[0]}\"\n val_lines[-1] = f\"{val_lines[-1]}{maybe_parens(ranked_val.details)}\"\n val_lines = [self.maybe_cyan(f\"{left_padding}{line}\") for line in val_lines]\n return val_lines\n\n indent = \" \"\n arg_lines = [f\" {self.maybe_magenta(args)}\" for args in ohi.display_args]\n arg_lines.append(self.maybe_magenta(f\" {ohi.env_var}\"))\n arg_lines.append(self.maybe_magenta(f\" {ohi.config_key}\"))\n choices = \"\" if ohi.choices is None else f\"one of: [{', '.join(ohi.choices)}]\"\n choices_lines = [\n f\"{indent}{' ' if i != 0 else ''}{self.maybe_cyan(s)}\"\n for i, s in enumerate(textwrap.wrap(f\"{choices}\", self._width))\n ]\n default_lines = format_value(RankedValue(Rank.HARDCODED, ohi.default), \"default: \", indent)\n if not ohi.value_history:\n # Should never happen, but this keeps mypy happy.\n raise ValueError(\"No value history - options not parsed.\")\n final_val = ohi.value_history.final_value\n curr_value_lines = format_value(final_val, \"current value: \", indent)\n\n interesting_ranked_values = [\n rv\n for rv in reversed(ohi.value_history.ranked_values)\n if rv.rank not in (Rank.NONE, Rank.HARDCODED, final_val.rank)\n ]\n value_derivation_lines = [\n line\n for rv in interesting_ranked_values\n for line in format_value(rv, \"overrode: \", f\"{indent} \")\n ]\n description_lines = hard_wrap(ohi.help, indent=len(indent), width=self._width)\n lines = [\n *arg_lines,\n *choices_lines,\n *default_lines,\n *curr_value_lines,\n *value_derivation_lines,\n *description_lines,\n ]\n if ohi.deprecated_message:\n maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow\n lines.append(maybe_colorize(f\"{indent}{ohi.deprecated_message}\"))\n if ohi.removal_hint:\n lines.append(maybe_colorize(f\"{indent}{ohi.removal_hint}\"))\n return lines\n", "path": "src/python/pants/help/help_formatter.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport json\nimport textwrap\nfrom enum import Enum\nfrom typing import List, Optional\n\nfrom pants.help.help_info_extracter import OptionHelpInfo, OptionScopeHelpInfo, to_help_str\nfrom pants.help.maybe_color import MaybeColor\nfrom pants.option.ranked_value import Rank, RankedValue\nfrom pants.util.docutil import bin_name, terminal_width\nfrom pants.util.strutil import hard_wrap\n\n\nclass HelpFormatter(MaybeColor):\n def __init__(self, *, show_advanced: bool, show_deprecated: bool, color: bool) -> None:\n 
super().__init__(color=color)\n self._show_advanced = show_advanced\n self._show_deprecated = show_deprecated\n self._width = terminal_width()\n\n def format_options(self, oshi: OptionScopeHelpInfo):\n \"\"\"Return a help message for the specified options.\"\"\"\n lines = []\n\n def add_option(ohis, *, category=None):\n lines.append(\"\")\n goal_or_subsystem = \"goal\" if oshi.is_goal else \"subsystem\"\n display_scope = f\"`{oshi.scope}` {goal_or_subsystem}\" if oshi.scope else \"Global\"\n if category:\n title = f\"{display_scope} {category} options\"\n lines.append(self.maybe_green(f\"{title}\\n{'-' * len(title)}\"))\n else:\n # The basic options section gets the description and options scope info.\n # No need to repeat those in the advanced section.\n title = f\"{display_scope} options\"\n lines.append(self.maybe_green(f\"{title}\\n{'-' * len(title)}\\n\"))\n lines.extend(hard_wrap(oshi.description, width=self._width))\n lines.append(\" \")\n lines.append(f\"Activated by {self.maybe_magenta(oshi.provider)}\")\n config_section = f\"[{oshi.scope or 'GLOBAL'}]\"\n lines.append(f\"Config section: {self.maybe_magenta(config_section)}\")\n lines.append(\" \")\n if not ohis:\n lines.append(\"None available.\")\n return\n for ohi in ohis:\n lines.extend([*self.format_option(ohi), \"\"])\n\n add_option(oshi.basic)\n if self._show_advanced:\n add_option(oshi.advanced, category=\"advanced\")\n if self._show_deprecated:\n add_option(oshi.deprecated, category=\"deprecated\")\n if oshi.advanced and not self._show_advanced:\n lines.append(\n self.maybe_green(\n f\"Advanced options available. You can list them by running \"\n f\"{bin_name()} help-advanced {oshi.scope}.\"\n )\n )\n return [*lines, \"\"]\n\n def format_option(self, ohi: OptionHelpInfo) -> List[str]:\n \"\"\"Format the help output for a single option.\n\n :param ohi: Extracted information for option to print\n :return: Formatted help text for this option\n \"\"\"\n\n def maybe_parens(s: Optional[str]) -> str:\n return f\" ({s})\" if s else \"\"\n\n def format_value(ranked_val: RankedValue, prefix: str, left_padding: str) -> List[str]:\n if isinstance(ranked_val.value, (list, dict)):\n is_enum_list = (\n isinstance(ranked_val.value, list)\n and len(ranked_val.value) > 0\n and isinstance(ranked_val.value[0], Enum)\n )\n normalized_val = (\n [enum_elmt.value for enum_elmt in ranked_val.value]\n if is_enum_list\n else ranked_val.value\n )\n val_lines = json.dumps(normalized_val, sort_keys=True, indent=4).split(\"\\n\")\n else:\n val_lines = [to_help_str(ranked_val.value)]\n val_lines[0] = f\"{prefix}{val_lines[0]}\"\n val_lines[-1] = f\"{val_lines[-1]}{maybe_parens(ranked_val.details)}\"\n val_lines = [self.maybe_cyan(f\"{left_padding}{line}\") for line in val_lines]\n return val_lines\n\n def wrap(s: str) -> List[str]:\n return hard_wrap(s, indent=len(indent), width=self._width)\n\n indent = \" \"\n\n arg_lines = [f\" {self.maybe_magenta(args)}\" for args in ohi.display_args]\n arg_lines.append(self.maybe_magenta(f\" {ohi.env_var}\"))\n arg_lines.append(self.maybe_magenta(f\" {ohi.config_key}\"))\n\n choices = \"\" if ohi.choices is None else f\"one of: [{', '.join(ohi.choices)}]\"\n choices_lines = [\n f\"{indent}{' ' if i != 0 else ''}{self.maybe_cyan(s)}\"\n for i, s in enumerate(textwrap.wrap(f\"{choices}\", self._width))\n ]\n\n deprecated_lines = []\n if ohi.deprecated_message:\n maybe_colorize = self.maybe_red if ohi.deprecation_active else self.maybe_yellow\n deprecated_lines.extend(wrap(maybe_colorize(ohi.deprecated_message)))\n if 
ohi.removal_hint:\n deprecated_lines.extend(wrap(maybe_colorize(ohi.removal_hint)))\n\n default_lines = format_value(RankedValue(Rank.HARDCODED, ohi.default), \"default: \", indent)\n if not ohi.value_history:\n # Should never happen, but this keeps mypy happy.\n raise ValueError(\"No value history - options not parsed.\")\n\n final_val = ohi.value_history.final_value\n curr_value_lines = format_value(final_val, \"current value: \", indent)\n\n interesting_ranked_values = [\n rv\n for rv in reversed(ohi.value_history.ranked_values)\n if rv.rank not in (Rank.NONE, Rank.HARDCODED, final_val.rank)\n ]\n value_derivation_lines = [\n line\n for rv in interesting_ranked_values\n for line in format_value(rv, \"overrode: \", f\"{indent} \")\n ]\n description_lines = wrap(ohi.help)\n lines = [\n *arg_lines,\n *choices_lines,\n *default_lines,\n *curr_value_lines,\n *value_derivation_lines,\n *deprecated_lines,\n *description_lines,\n ]\n return lines\n", "path": "src/python/pants/help/help_formatter.py"}]} | 2,361 | 644 |
gh_patches_debug_596 | rasdani/github-patches | git_diff | pex-tool__pex-1673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.72
On the docket:
+ [x] Fix Locker to prune un-downloaded entries. (#1666)
+ [x] Fix venv creation to ignore ambient PEX env vars. #1669
+ [x] Lockfiles: requirement might not be compatible with requested interpreter constraints #1667
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.71"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.71"
+__version__ = "2.1.72"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.71\"\n+__version__ = \"2.1.72\"\n", "issue": "Release 2.1.72\nOn the docket:\r\n+ [x] Fix Locker to prune un-downloaded entries. (#1666)\r\n+ [x] Fix venv creation to ignore ambient PEX env vars. #1669\r\n+ [x] Lockfiles: requirement might not be compatible with requested interpreter constraints #1667 \r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.71\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.72\"\n", "path": "pex/version.py"}]} | 386 | 96 |
gh_patches_debug_26261 | rasdani/github-patches | git_diff | kornia__kornia-2303 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`NotImplementedError` for elastic transformation with probability p < 1
### Describe the bug
With the newest kornia release (0.6.11), the random elastic transformation fails if it is not applied to every image in the batch.
The problem is that the `apply_non_transform_mask()` method in `_AugmentationBase` per default raises an `NotImplementedError` and since this method is not overwritten in `RandomElasticTransform`, the error is raised. I see that for the other `apply_non*` methods the default is to just return the input.
I see two different solutions:
1. Change the default for `apply_non_transform_mask` to return the input in `_AugmentationBase`.
2. Overwrite the method in `RandomElasticTransform` and just return the input there.
There might be good reasons to keep the `NotImplementedError` in the base class, therefore I wanted to ask first what solution you prefer. I could make a PR for this.
### Reproduction steps
```python
import torch
import kornia.augmentation as K
features = torch.rand(5, 100, 480, 640, dtype=torch.float32, device="cuda")
labels = torch.randint(0, 10, (5, 1, 480, 640), dtype=torch.int64, device="cuda")
torch.manual_seed(0)
aug = K.AugmentationSequential(
K.RandomElasticTransform(alpha=(0.7, 0.7), sigma=(16, 16), padding_mode="reflection", p=0.2)
)
features_transformed, labels_transformed = aug(features, labels.float(), data_keys=["input", "mask"])
```
### Expected behavior
No `NotImplementedError`.
### Environment
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 2.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.10.9
- CUDA/cuDNN version: 11.8
- GPU models and configuration: 3090
- Any other relevant information:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/augmentation/_2d/geometric/elastic_transform.py`
Content:
```
1 from typing import Any, Dict, Optional, Tuple, Union
2
3 import torch
4
5 from kornia.augmentation._2d.base import AugmentationBase2D
6 from kornia.constants import Resample
7 from kornia.core import Tensor
8 from kornia.geometry.boxes import Boxes
9 from kornia.geometry.transform import elastic_transform2d
10
11
12 class RandomElasticTransform(AugmentationBase2D):
13 r"""Add random elastic transformation to a tensor image.
14
15 .. image:: _static/img/RandomElasticTransform.png
16
17 Args:
18 kernel_size: the size of the Gaussian kernel.
19 sigma: The standard deviation of the Gaussian in the y and x directions,
20 respectively. Larger sigma results in smaller pixel displacements.
21 alpha: The scaling factor that controls the intensity of the deformation
22 in the y and x directions, respectively.
23 align_corners: Interpolation flag used by `grid_sample`.
24 resample: Interpolation mode used by `grid_sample`. Either 'nearest' (0) or 'bilinear' (1).
25 mode: Deprecated: Interpolation mode used by `grid_sample`. Either 'bilinear' or 'nearest'.
26 Please use the `resample` argument instead.
27 padding_mode: The padding used by ```grid_sample```. Either 'zeros', 'border' or 'refection'.
28 same_on_batch: apply the same transformation across the batch.
29 p: probability of applying the transformation.
30 keepdim: whether to keep the output shape the same as input (True) or broadcast it
31 to the batch form (False).
32
33 .. note::
34 This function internally uses :func:`kornia.geometry.transform.elastic_transform2d`.
35
36 Examples:
37 >>> import torch
38 >>> img = torch.ones(1, 1, 2, 2)
39 >>> out = RandomElasticTransform()(img)
40 >>> out.shape
41 torch.Size([1, 1, 2, 2])
42
43 To apply the exact augmenation again, you may take the advantage of the previous parameter state:
44 >>> input = torch.randn(1, 3, 32, 32)
45 >>> aug = RandomElasticTransform(p=1.)
46 >>> (aug(input) == aug(input, params=aug._params)).all()
47 tensor(True)
48 """
49
50 def __init__(
51 self,
52 kernel_size: Tuple[int, int] = (63, 63),
53 sigma: Tuple[float, float] = (32.0, 32.0),
54 alpha: Tuple[float, float] = (1.0, 1.0),
55 align_corners: bool = False,
56 resample: Union[str, int, Resample] = Resample.BILINEAR.name,
57 padding_mode: str = "zeros",
58 same_on_batch: bool = False,
59 p: float = 0.5,
60 keepdim: bool = False,
61 ) -> None:
62 super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)
63
64 self.flags = dict(
65 kernel_size=kernel_size,
66 sigma=sigma,
67 alpha=alpha,
68 align_corners=align_corners,
69 resample=Resample.get(resample),
70 padding_mode=padding_mode,
71 )
72
73 def generate_parameters(self, shape: Tuple[int, ...]) -> Dict[str, Tensor]:
74 B, _, H, W = shape
75 if self.same_on_batch:
76 noise = torch.rand(1, 2, H, W, device=self.device, dtype=self.dtype).expand(B, 2, H, W)
77 else:
78 noise = torch.rand(B, 2, H, W, device=self.device, dtype=self.dtype)
79 return dict(noise=noise * 2 - 1)
80
81 def apply_transform(
82 self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
83 ) -> Tensor:
84 return elastic_transform2d(
85 input,
86 params["noise"].to(input),
87 flags["kernel_size"],
88 flags["sigma"],
89 flags["alpha"],
90 flags["align_corners"],
91 flags["resample"].name.lower(),
92 flags["padding_mode"],
93 )
94
95 def apply_transform_mask(
96 self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
97 ) -> Tensor:
98 """Process masks corresponding to the inputs that are transformed."""
99 return self.apply_transform(input, params=params, flags=flags, transform=transform)
100
101 def apply_transform_box(
102 self, input: Boxes, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
103 ) -> Boxes:
104 """Process masks corresponding to the inputs that are transformed."""
105 # We assume that boxes may not be affected too much by the deformation.
106 return input
107
108 def apply_transform_class(
109 self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
110 ) -> Tensor:
111 """Process class tags corresponding to the inputs that are transformed."""
112 return input
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/augmentation/_2d/geometric/elastic_transform.py b/kornia/augmentation/_2d/geometric/elastic_transform.py
--- a/kornia/augmentation/_2d/geometric/elastic_transform.py
+++ b/kornia/augmentation/_2d/geometric/elastic_transform.py
@@ -22,8 +22,6 @@
in the y and x directions, respectively.
align_corners: Interpolation flag used by `grid_sample`.
resample: Interpolation mode used by `grid_sample`. Either 'nearest' (0) or 'bilinear' (1).
- mode: Deprecated: Interpolation mode used by `grid_sample`. Either 'bilinear' or 'nearest'.
- Please use the `resample` argument instead.
padding_mode: The padding used by ```grid_sample```. Either 'zeros', 'border' or 'refection'.
same_on_batch: apply the same transformation across the batch.
p: probability of applying the transformation.
@@ -92,6 +90,11 @@
flags["padding_mode"],
)
+ def apply_non_transform_mask(
+ self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
+ ) -> Tensor:
+ return input
+
def apply_transform_mask(
self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
) -> Tensor:
| {"golden_diff": "diff --git a/kornia/augmentation/_2d/geometric/elastic_transform.py b/kornia/augmentation/_2d/geometric/elastic_transform.py\n--- a/kornia/augmentation/_2d/geometric/elastic_transform.py\n+++ b/kornia/augmentation/_2d/geometric/elastic_transform.py\n@@ -22,8 +22,6 @@\n in the y and x directions, respectively.\n align_corners: Interpolation flag used by `grid_sample`.\n resample: Interpolation mode used by `grid_sample`. Either 'nearest' (0) or 'bilinear' (1).\n- mode: Deprecated: Interpolation mode used by `grid_sample`. Either 'bilinear' or 'nearest'.\n- Please use the `resample` argument instead.\n padding_mode: The padding used by ```grid_sample```. Either 'zeros', 'border' or 'refection'.\n same_on_batch: apply the same transformation across the batch.\n p: probability of applying the transformation.\n@@ -92,6 +90,11 @@\n flags[\"padding_mode\"],\n )\n \n+ def apply_non_transform_mask(\n+ self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n+ ) -> Tensor:\n+ return input\n+\n def apply_transform_mask(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n", "issue": "`NotImplementedError` for elastic transformation with probability p < 1\n### Describe the bug\r\n\r\nWith the newest kornia release (0.6.11), the random elastic transformation fails if it is not applied to every image in the batch.\r\n\r\nThe problem is that the `apply_non_transform_mask()` method in `_AugmentationBase` per default raises an `NotImplementedError` and since this method is not overwritten in `RandomElasticTransform`, the error is raised. I see that for the other `apply_non*` methods the default is to just return the input.\r\n\r\nI see two different solutions:\r\n1. Change the default for `apply_non_transform_mask` to return the input in `_AugmentationBase`.\r\n2. Overwrite the method in `RandomElasticTransform` and just return the input there.\r\n\r\nThere might be good reasons to keep the `NotImplementedError` in the base class, therefore I wanted to ask first what solution you prefer. 
I could make a PR for this.\r\n\r\n### Reproduction steps\r\n\r\n```python\r\nimport torch\r\nimport kornia.augmentation as K\r\n\r\nfeatures = torch.rand(5, 100, 480, 640, dtype=torch.float32, device=\"cuda\")\r\nlabels = torch.randint(0, 10, (5, 1, 480, 640), dtype=torch.int64, device=\"cuda\")\r\n\r\ntorch.manual_seed(0)\r\naug = K.AugmentationSequential(\r\n K.RandomElasticTransform(alpha=(0.7, 0.7), sigma=(16, 16), padding_mode=\"reflection\", p=0.2)\r\n)\r\n\r\nfeatures_transformed, labels_transformed = aug(features, labels.float(), data_keys=[\"input\", \"mask\"])\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nNo `NotImplementedError`.\r\n\r\n### Environment\r\n\r\n```shell\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n- PyTorch Version (e.g., 1.0): 2.0\r\n- OS (e.g., Linux): Linux\r\n- How you installed PyTorch (`conda`, `pip`, source): pip\r\n- Build command you used (if compiling from source):\r\n- Python version: 3.10.9\r\n- CUDA/cuDNN version: 11.8\r\n- GPU models and configuration: 3090\r\n- Any other relevant information:\r\n```\n", "before_files": [{"content": "from typing import Any, Dict, Optional, Tuple, Union\n\nimport torch\n\nfrom kornia.augmentation._2d.base import AugmentationBase2D\nfrom kornia.constants import Resample\nfrom kornia.core import Tensor\nfrom kornia.geometry.boxes import Boxes\nfrom kornia.geometry.transform import elastic_transform2d\n\n\nclass RandomElasticTransform(AugmentationBase2D):\n r\"\"\"Add random elastic transformation to a tensor image.\n\n .. image:: _static/img/RandomElasticTransform.png\n\n Args:\n kernel_size: the size of the Gaussian kernel.\n sigma: The standard deviation of the Gaussian in the y and x directions,\n respectively. Larger sigma results in smaller pixel displacements.\n alpha: The scaling factor that controls the intensity of the deformation\n in the y and x directions, respectively.\n align_corners: Interpolation flag used by `grid_sample`.\n resample: Interpolation mode used by `grid_sample`. Either 'nearest' (0) or 'bilinear' (1).\n mode: Deprecated: Interpolation mode used by `grid_sample`. Either 'bilinear' or 'nearest'.\n Please use the `resample` argument instead.\n padding_mode: The padding used by ```grid_sample```. Either 'zeros', 'border' or 'refection'.\n same_on_batch: apply the same transformation across the batch.\n p: probability of applying the transformation.\n keepdim: whether to keep the output shape the same as input (True) or broadcast it\n to the batch form (False).\n\n .. 
note::\n This function internally uses :func:`kornia.geometry.transform.elastic_transform2d`.\n\n Examples:\n >>> import torch\n >>> img = torch.ones(1, 1, 2, 2)\n >>> out = RandomElasticTransform()(img)\n >>> out.shape\n torch.Size([1, 1, 2, 2])\n\n To apply the exact augmenation again, you may take the advantage of the previous parameter state:\n >>> input = torch.randn(1, 3, 32, 32)\n >>> aug = RandomElasticTransform(p=1.)\n >>> (aug(input) == aug(input, params=aug._params)).all()\n tensor(True)\n \"\"\"\n\n def __init__(\n self,\n kernel_size: Tuple[int, int] = (63, 63),\n sigma: Tuple[float, float] = (32.0, 32.0),\n alpha: Tuple[float, float] = (1.0, 1.0),\n align_corners: bool = False,\n resample: Union[str, int, Resample] = Resample.BILINEAR.name,\n padding_mode: str = \"zeros\",\n same_on_batch: bool = False,\n p: float = 0.5,\n keepdim: bool = False,\n ) -> None:\n super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)\n\n self.flags = dict(\n kernel_size=kernel_size,\n sigma=sigma,\n alpha=alpha,\n align_corners=align_corners,\n resample=Resample.get(resample),\n padding_mode=padding_mode,\n )\n\n def generate_parameters(self, shape: Tuple[int, ...]) -> Dict[str, Tensor]:\n B, _, H, W = shape\n if self.same_on_batch:\n noise = torch.rand(1, 2, H, W, device=self.device, dtype=self.dtype).expand(B, 2, H, W)\n else:\n noise = torch.rand(B, 2, H, W, device=self.device, dtype=self.dtype)\n return dict(noise=noise * 2 - 1)\n\n def apply_transform(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n return elastic_transform2d(\n input,\n params[\"noise\"].to(input),\n flags[\"kernel_size\"],\n flags[\"sigma\"],\n flags[\"alpha\"],\n flags[\"align_corners\"],\n flags[\"resample\"].name.lower(),\n flags[\"padding_mode\"],\n )\n\n def apply_transform_mask(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n \"\"\"Process masks corresponding to the inputs that are transformed.\"\"\"\n return self.apply_transform(input, params=params, flags=flags, transform=transform)\n\n def apply_transform_box(\n self, input: Boxes, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Boxes:\n \"\"\"Process masks corresponding to the inputs that are transformed.\"\"\"\n # We assume that boxes may not be affected too much by the deformation.\n return input\n\n def apply_transform_class(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n \"\"\"Process class tags corresponding to the inputs that are transformed.\"\"\"\n return input\n", "path": "kornia/augmentation/_2d/geometric/elastic_transform.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional, Tuple, Union\n\nimport torch\n\nfrom kornia.augmentation._2d.base import AugmentationBase2D\nfrom kornia.constants import Resample\nfrom kornia.core import Tensor\nfrom kornia.geometry.boxes import Boxes\nfrom kornia.geometry.transform import elastic_transform2d\n\n\nclass RandomElasticTransform(AugmentationBase2D):\n r\"\"\"Add random elastic transformation to a tensor image.\n\n .. image:: _static/img/RandomElasticTransform.png\n\n Args:\n kernel_size: the size of the Gaussian kernel.\n sigma: The standard deviation of the Gaussian in the y and x directions,\n respectively. 
Larger sigma results in smaller pixel displacements.\n alpha: The scaling factor that controls the intensity of the deformation\n in the y and x directions, respectively.\n align_corners: Interpolation flag used by `grid_sample`.\n resample: Interpolation mode used by `grid_sample`. Either 'nearest' (0) or 'bilinear' (1).\n padding_mode: The padding used by ```grid_sample```. Either 'zeros', 'border' or 'refection'.\n same_on_batch: apply the same transformation across the batch.\n p: probability of applying the transformation.\n keepdim: whether to keep the output shape the same as input (True) or broadcast it\n to the batch form (False).\n\n .. note::\n This function internally uses :func:`kornia.geometry.transform.elastic_transform2d`.\n\n Examples:\n >>> import torch\n >>> img = torch.ones(1, 1, 2, 2)\n >>> out = RandomElasticTransform()(img)\n >>> out.shape\n torch.Size([1, 1, 2, 2])\n\n To apply the exact augmenation again, you may take the advantage of the previous parameter state:\n >>> input = torch.randn(1, 3, 32, 32)\n >>> aug = RandomElasticTransform(p=1.)\n >>> (aug(input) == aug(input, params=aug._params)).all()\n tensor(True)\n \"\"\"\n\n def __init__(\n self,\n kernel_size: Tuple[int, int] = (63, 63),\n sigma: Tuple[float, float] = (32.0, 32.0),\n alpha: Tuple[float, float] = (1.0, 1.0),\n align_corners: bool = False,\n resample: Union[str, int, Resample] = Resample.BILINEAR.name,\n padding_mode: str = \"zeros\",\n same_on_batch: bool = False,\n p: float = 0.5,\n keepdim: bool = False,\n ) -> None:\n super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)\n\n self.flags = dict(\n kernel_size=kernel_size,\n sigma=sigma,\n alpha=alpha,\n align_corners=align_corners,\n resample=Resample.get(resample),\n padding_mode=padding_mode,\n )\n\n def generate_parameters(self, shape: Tuple[int, ...]) -> Dict[str, Tensor]:\n B, _, H, W = shape\n if self.same_on_batch:\n noise = torch.rand(1, 2, H, W, device=self.device, dtype=self.dtype).expand(B, 2, H, W)\n else:\n noise = torch.rand(B, 2, H, W, device=self.device, dtype=self.dtype)\n return dict(noise=noise * 2 - 1)\n\n def apply_transform(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n return elastic_transform2d(\n input,\n params[\"noise\"].to(input),\n flags[\"kernel_size\"],\n flags[\"sigma\"],\n flags[\"alpha\"],\n flags[\"align_corners\"],\n flags[\"resample\"].name.lower(),\n flags[\"padding_mode\"],\n )\n\n def apply_non_transform_mask(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n return input\n\n def apply_transform_mask(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n \"\"\"Process masks corresponding to the inputs that are transformed.\"\"\"\n return self.apply_transform(input, params=params, flags=flags, transform=transform)\n\n def apply_transform_box(\n self, input: Boxes, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Boxes:\n \"\"\"Process masks corresponding to the inputs that are transformed.\"\"\"\n # We assume that boxes may not be affected too much by the deformation.\n return input\n\n def apply_transform_class(\n self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None\n ) -> Tensor:\n \"\"\"Process class tags corresponding to the inputs that are 
transformed.\"\"\"\n return input\n", "path": "kornia/augmentation/_2d/geometric/elastic_transform.py"}]} | 2,154 | 332 |
gh_patches_debug_361 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-1314 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Command "nvflare" not found
Unfortunately I can't run nvflare on Ubuntu because it tells me that the command (`nvflare simulator -h`) is not found. Even when I use the Docker solution, the command is not found inside the container either, so I am unable to run local experiments.
I tried the MWE from https://nvflare.readthedocs.io/en/2.2.1/getting_started.html
Used OS: Ubuntu 18.04.5 LTS
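
As a first sanity check (not the confirmed root cause), it can help to see whether the package is installed in the active Python environment and whether its console script is actually on `PATH`; with `pip install --user`, scripts often land in `~/.local/bin`, which is not always on `PATH`. The snippet below is a generic, standard-library-only sketch, assumes Python 3.8+, and does not rely on any NVFlare internals:

```python
# Hypothetical diagnostic sketch: is the package installed, and is the
# "nvflare" console script reachable on PATH?
import shutil
from importlib import metadata

try:
    print("nvflare version:", metadata.version("nvflare"))
except metadata.PackageNotFoundError:
    print("nvflare is not installed in this Python environment")

script = shutil.which("nvflare")
print("console script on PATH:", script or "not found (check ~/.local/bin or the venv's bin/)")
```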
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Configuration file for the Sphinx documentation builder.
16 #
17 # This file only contains a selection of the most common options. For a full
18 # list see the documentation:
19 # http://www.sphinx-doc.org/en/master/config
20
21 # -- Path setup --------------------------------------------------------------
22
23 # If extensions (or modules to document with autodoc) are in another directory,
24 # add these directories to sys.path here. If the directory is relative to the
25 # documentation root, use os.path.abspath to make it absolute, like shown here.
26 #
27 import sphinx_rtd_theme
28 import os
29 import sys
30 from sphinx.domains.python import PythonDomain
31 import subprocess
32
33
34 class PatchedPythonDomain(PythonDomain):
35 def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
36 if "refspecific" in node:
37 del node["refspecific"]
38 return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)
39
40
41 sys.path.insert(0, os.path.abspath(".."))
42 print(sys.path)
43
44 # -- Project information -----------------------------------------------------
45
46 project = "NVIDIA FLARE"
47 copyright = "2022, NVIDIA"
48 author = "NVIDIA"
49
50 # The full version, including alpha/beta/rc tags
51 release = "2.2.1"
52 version = "2.2.1"
53
54
55 # -- General configuration ---------------------------------------------------
56
57 # Add any Sphinx extension module names here, as strings. They can be
58 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
59 # ones.
60 # Add napoleon to the extensions list
61 # source_parsers = {'.md': CommonMarkParser}
62
63 templates_path = ["templates"]
64
65 source_suffix = {
66 ".rst": "restructuredtext",
67 ".txt": "restructuredtext",
68 ".md": "markdown",
69 }
70
71 extensions = [
72 "recommonmark",
73 "sphinx.ext.intersphinx",
74 "sphinx.ext.mathjax",
75 "sphinx.ext.napoleon",
76 "sphinx.ext.autodoc",
77 "sphinx.ext.viewcode",
78 "sphinx.ext.autosectionlabel",
79 ]
80
81 autoclass_content = "both"
82 add_module_names = False
83 autosectionlabel_prefix_document = True
84
85 # Add any paths that contain templates here, relative to this directory.
86 # templates_path = ['_templates']
87
88 # List of patterns, relative to source directory, that match files and
89 # directories to ignore when looking for source files.
90 # This pattern also affects html_static_path and html_extra_path.
91 exclude_patterns = []
92
93
94 # -- Options for HTML output -------------------------------------------------
95
96 # The theme to use for HTML and HTML Help pages. See the documentation for
97 # a list of builtin themes.
98 #
99 html_theme = "sphinx_rtd_theme"
100 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
101 html_theme_options = {
102 "collapse_navigation": True,
103 "display_version": True,
104 "navigation_depth": 5,
105 "sticky_navigation": True, # Set to False to disable the sticky nav while scrolling.
106 # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text
107 }
108 html_scaled_image_link = False
109 html_show_sourcelink = True
110 html_favicon = "favicon.ico"
111
112 # Add any paths that contain custom static files (such as style sheets) here,
113 # relative to this directory. They are copied after the builtin static files,
114 # so a file named "default.css" will overwrite the builtin "default.css".
115 html_static_path = ["_static"]
116
117
118 def generate_apidocs(*args):
119 """Generate API docs automatically by trawling the available modules"""
120 module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "nvflare"))
121 output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "apidocs"))
122 print(f"output_path {output_path}")
123 print(f"module_path {module_path}")
124 subprocess.check_call(
125 [sys.executable, "-m", "sphinx.ext.apidoc", "-f", "-e"]
126 + ["-o", output_path]
127 + [module_path]
128 + [os.path.join(module_path, p) for p in exclude_patterns]
129 )
130
131
132 def setup(app):
133 app.connect("builder-inited", generate_apidocs)
134 app.add_domain(PatchedPythonDomain, override=True)
135 app.add_css_file("css/additions.css")
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -76,6 +76,7 @@
"sphinx.ext.autodoc",
"sphinx.ext.viewcode",
"sphinx.ext.autosectionlabel",
+ "sphinx_copybutton",
]
autoclass_content = "both"
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -76,6 +76,7 @@\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n+ \"sphinx_copybutton\",\n ]\n \n autoclass_content = \"both\"\n", "issue": "[BUG] Command \"nvflare\" not found \nUnfortunately I can't run nvflare on Ubuntu because it tells me that the command (`nvflare simulator -h`) is not found. Even if I use the Docker solution. Inside the docker it also shows \"command not found\". Thus unable to run local experiments.\r\n\r\nI tried the MWE from https://nvflare.readthedocs.io/en/2.2.1/getting_started.html\r\n\r\nUsed OS: Ubuntu 18.04.5 LTS\r\n\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport sphinx_rtd_theme\nimport os\nimport sys\nfrom sphinx.domains.python import PythonDomain\nimport subprocess\n\n\nclass PatchedPythonDomain(PythonDomain):\n def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):\n if \"refspecific\" in node:\n del node[\"refspecific\"]\n return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)\n\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nprint(sys.path)\n\n# -- Project information -----------------------------------------------------\n\nproject = \"NVIDIA FLARE\"\ncopyright = \"2022, NVIDIA\"\nauthor = \"NVIDIA\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"2.2.1\"\nversion = \"2.2.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\n# Add napoleon to the extensions list\n# source_parsers = {'.md': CommonMarkParser}\n\ntemplates_path = [\"templates\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\nextensions = [\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n]\n\nautoclass_content = \"both\"\nadd_module_names = False\nautosectionlabel_prefix_document = True\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_theme_options = {\n \"collapse_navigation\": True,\n \"display_version\": True,\n \"navigation_depth\": 5,\n \"sticky_navigation\": True, # Set to False to disable the sticky nav while scrolling.\n # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text\n}\nhtml_scaled_image_link = False\nhtml_show_sourcelink = True\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef generate_apidocs(*args):\n \"\"\"Generate API docs automatically by trawling the available modules\"\"\"\n module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"nvflare\"))\n output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"apidocs\"))\n print(f\"output_path {output_path}\")\n print(f\"module_path {module_path}\")\n subprocess.check_call(\n [sys.executable, \"-m\", \"sphinx.ext.apidoc\", \"-f\", \"-e\"]\n + [\"-o\", output_path]\n + [module_path]\n + [os.path.join(module_path, p) for p in exclude_patterns]\n )\n\n\ndef setup(app):\n app.connect(\"builder-inited\", generate_apidocs)\n app.add_domain(PatchedPythonDomain, override=True)\n app.add_css_file(\"css/additions.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. 
For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport sphinx_rtd_theme\nimport os\nimport sys\nfrom sphinx.domains.python import PythonDomain\nimport subprocess\n\n\nclass PatchedPythonDomain(PythonDomain):\n def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):\n if \"refspecific\" in node:\n del node[\"refspecific\"]\n return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)\n\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nprint(sys.path)\n\n# -- Project information -----------------------------------------------------\n\nproject = \"NVIDIA FLARE\"\ncopyright = \"2022, NVIDIA\"\nauthor = \"NVIDIA\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"2.2.1\"\nversion = \"2.2.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\n# Add napoleon to the extensions list\n# source_parsers = {'.md': CommonMarkParser}\n\ntemplates_path = [\"templates\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\nextensions = [\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx_copybutton\",\n]\n\nautoclass_content = \"both\"\nadd_module_names = False\nautosectionlabel_prefix_document = True\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_theme_options = {\n \"collapse_navigation\": True,\n \"display_version\": True,\n \"navigation_depth\": 5,\n \"sticky_navigation\": True, # Set to False to disable the sticky nav while scrolling.\n # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text\n}\nhtml_scaled_image_link = False\nhtml_show_sourcelink = True\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef generate_apidocs(*args):\n \"\"\"Generate API docs automatically by trawling the available modules\"\"\"\n module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"nvflare\"))\n output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"apidocs\"))\n print(f\"output_path {output_path}\")\n print(f\"module_path {module_path}\")\n subprocess.check_call(\n [sys.executable, \"-m\", \"sphinx.ext.apidoc\", \"-f\", \"-e\"]\n + [\"-o\", output_path]\n + [module_path]\n + [os.path.join(module_path, p) for p in exclude_patterns]\n )\n\n\ndef setup(app):\n app.connect(\"builder-inited\", generate_apidocs)\n app.add_domain(PatchedPythonDomain, override=True)\n app.add_css_file(\"css/additions.css\")\n", "path": "docs/conf.py"}]} | 1,760 | 80 |
gh_patches_debug_28379 | rasdani/github-patches | git_diff | python-trio__trio-855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
serve_tcp() fails out-of-the-box on systems with disabled IPv6
I recently had occasion to run a Trio server on a Linux system that had `ipv6.disable=1` in the kernel command line. I don't know why it was configured this way, but some Googling suggests that this is not such a very uncommon configuration. In this configuration, `getaddrinfo(None, ...)` still returns an IPv6 option, but `socket(AF_INET6, ...)` fails with `[Errno 97] Address family not supported by protocol`. This also affects several tests in trio's test suite.
I can work around this by explicitly serving on `"0.0.0.0"`, but it feels like a rough edge. As an alternative, maybe trio could suppress `EAFNOSUPPORT` errors from IPv6 socket creation in `open_tcp_listeners` if the IPv4 socket was created OK. Thoughts?
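
To make that suggestion concrete, here is a minimal sketch of the idea using plain blocking sockets rather than trio's real async internals; the helper name, port, and backlog are made up for illustration. The only point is the error handling: tolerate `EAFNOSUPPORT` per address family, and fail only if no family could be used at all.

```python
# Sketch only: skip address families the kernel refuses (e.g. AF_INET6 when
# booted with ipv6.disable=1), but still fail if nothing could be opened.
import errno
import socket


def open_listen_sockets(addresses, backlog=128):
    socks, errors = [], []
    for family, type_, proto, _, sockaddr in addresses:
        try:
            sock = socket.socket(family, type_, proto)
        except OSError as exc:
            if exc.errno == errno.EAFNOSUPPORT:
                errors.append(exc)  # remember it, in case *nothing* works
                continue
            raise
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if family == socket.AF_INET6:
            # keep the IPv6 wildcard socket from also claiming the IPv4 port
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        sock.bind(sockaddr)
        sock.listen(backlog)
        socks.append(sock)
    if errors and not socks:
        raise errors[0]
    return socks


addresses = socket.getaddrinfo(None, 8080, type=socket.SOCK_STREAM, flags=socket.AI_PASSIVE)
listeners = open_listen_sockets(addresses)
```

On a host with IPv6 disabled this would return only the IPv4 listener instead of blowing up on `socket(AF_INET6, ...)`.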
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `trio/_highlevel_open_tcp_listeners.py`
Content:
```
1 import sys
2 from math import inf
3
4 import trio
5 from . import socket as tsocket
6
7 __all__ = ["open_tcp_listeners", "serve_tcp"]
8
9
10 # Default backlog size:
11 #
12 # Having the backlog too low can cause practical problems (a perfectly healthy
13 # service that starts failing to accept connections if they arrive in a
14 # burst).
15 #
16 # Having it too high doesn't really cause any problems. Like any buffer, you
17 # want backlog queue to be zero usually, and it won't save you if you're
18 # getting connection attempts faster than you can call accept() on an ongoing
19 # basis. But unlike other buffers, this one doesn't really provide any
20 # backpressure. If a connection gets stuck waiting in the backlog queue, then
21 # from the peer's point of view the connection succeeded but then their
22 # send/recv will stall until we get to it, possibly for a long time. OTOH if
23 # there isn't room in the backlog queue... then their connect stalls, possibly
24 # for a long time, which is pretty much the same thing.
25 #
26 # A large backlog can also use a bit more kernel memory, but this seems fairly
27 # negligible these days.
28 #
29 # So this suggests we should make the backlog as large as possible. This also
30 # matches what Golang does. However, they do it in a weird way, where they
31 # have a bunch of code to sniff out the configured upper limit for backlog on
32 # different operating systems. But on every system, passing in a too-large
33 # backlog just causes it to be silently truncated to the configured maximum,
34 # so this is unnecessary -- we can just pass in "infinity" and get the maximum
35 # that way. (Verified on Windows, Linux, macOS using
36 # notes-to-self/measure-listen-backlog.py)
37 def _compute_backlog(backlog):
38 if backlog is None:
39 backlog = inf
40 # Many systems (Linux, BSDs, ...) store the backlog in a uint16 and are
41 # missing overflow protection, so we apply our own overflow protection.
42 # https://github.com/golang/go/issues/5030
43 return min(backlog, 0xffff)
44
45
46 async def open_tcp_listeners(port, *, host=None, backlog=None):
47 """Create :class:`SocketListener` objects to listen for TCP connections.
48
49 Args:
50
51 port (int): The port to listen on.
52
53 If you use 0 as your port, then the kernel will automatically pick
54 an arbitrary open port. But be careful: if you use this feature when
55 binding to multiple IP addresses, then each IP address will get its
56 own random port, and the returned listeners will probably be
57 listening on different ports. In particular, this will happen if you
58 use ``host=None`` – which is the default – because in this case
59 :func:`open_tcp_listeners` will bind to both the IPv4 wildcard
60 address (``0.0.0.0``) and also the IPv6 wildcard address (``::``).
61
62 host (str, bytes-like, or None): The local interface to bind to. This is
63 passed to :func:`~socket.getaddrinfo` with the ``AI_PASSIVE`` flag
64 set.
65
66 If you want to bind to the wildcard address on both IPv4 and IPv6,
67 in order to accept connections on all available interfaces, then
68 pass ``None``. This is the default.
69
70 If you have a specific interface you want to bind to, pass its IP
71 address or hostname here. If a hostname resolves to multiple IP
72 addresses, this function will open one listener on each of them.
73
74 If you want to use only IPv4, or only IPv6, but want to accept on
75 all interfaces, pass the family-specific wildcard address:
76 ``"0.0.0.0"`` for IPv4-only and ``"::"`` for IPv6-only.
77
78 backlog (int or None): The listen backlog to use. If you leave this as
79 ``None`` then Trio will pick a good default. (Currently: whatever
80 your system has configured as the maximum backlog.)
81
82 Returns:
83 list of :class:`SocketListener`
84
85 """
86 # getaddrinfo sometimes allows port=None, sometimes not (depending on
87 # whether host=None). And on some systems it treats "" as 0, others it
88 # doesn't:
89 # http://klickverbot.at/blog/2012/01/getaddrinfo-edge-case-behavior-on-windows-linux-and-osx/
90 if not isinstance(port, int):
91 raise TypeError("port must be an int or str, not {!r}".format(port))
92
93 backlog = _compute_backlog(backlog)
94
95 addresses = await tsocket.getaddrinfo(
96 host,
97 port,
98 type=tsocket.SOCK_STREAM,
99 flags=tsocket.AI_PASSIVE,
100 )
101
102 listeners = []
103 try:
104 for family, type, proto, _, sockaddr in addresses:
105 sock = tsocket.socket(family, type, proto)
106 try:
107 # See https://github.com/python-trio/trio/issues/39
108 if sys.platform == "win32":
109 sock.setsockopt(
110 tsocket.SOL_SOCKET, tsocket.SO_EXCLUSIVEADDRUSE, 1
111 )
112 else:
113 sock.setsockopt(
114 tsocket.SOL_SOCKET, tsocket.SO_REUSEADDR, 1
115 )
116
117 if family == tsocket.AF_INET6:
118 sock.setsockopt(
119 tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, 1
120 )
121
122 await sock.bind(sockaddr)
123 sock.listen(backlog)
124
125 listeners.append(trio.SocketListener(sock))
126 except:
127 sock.close()
128 raise
129 except:
130 for listener in listeners:
131 listener.socket.close()
132 raise
133
134 return listeners
135
136
137 async def serve_tcp(
138 handler,
139 port,
140 *,
141 host=None,
142 backlog=None,
143 handler_nursery=None,
144 task_status=trio.TASK_STATUS_IGNORED
145 ):
146 """Listen for incoming TCP connections, and for each one start a task
147 running ``handler(stream)``.
148
149 This is a thin convenience wrapper around :func:`open_tcp_listeners` and
150 :func:`serve_listeners` – see them for full details.
151
152 .. warning::
153
154 If ``handler`` raises an exception, then this function doesn't do
155 anything special to catch it – so by default the exception will
156 propagate out and crash your server. If you don't want this, then catch
157 exceptions inside your ``handler``, or use a ``handler_nursery`` object
158 that responds to exceptions in some other way.
159
160 When used with ``nursery.start`` you get back the newly opened listeners.
161 So, for example, if you want to start a server in your test suite and then
162 connect to it to check that it's working properly, you can use something
163 like::
164
165 from trio.testing import open_stream_to_socket_listener
166
167 async with trio.open_nursery() as nursery:
168 listeners = await nursery.start(serve_tcp, handler, 0)
169 client_stream = await open_stream_to_socket_listener(listeners[0])
170
171 # Then send and receive data on 'client_stream', for example:
172 await client_stream.send_all(b"GET / HTTP/1.0\\r\\n\\r\\n")
173
174 This avoids several common pitfalls:
175
176 1. It lets the kernel pick a random open port, so your test suite doesn't
177 depend on any particular port being open.
178
179 2. It waits for the server to be accepting connections on that port before
180 ``start`` returns, so there's no race condition where the incoming
181 connection arrives before the server is ready.
182
183 3. It uses the Listener object to find out which port was picked, so it
184 can connect to the right place.
185
186 Args:
187 handler: The handler to start for each incoming connection. Passed to
188 :func:`serve_listeners`.
189
190 port: The port to listen on. Use 0 to let the kernel pick an open port.
191 Passed to :func:`open_tcp_listeners`.
192
193 host (str, bytes, or None): The host interface to listen on; use
194 ``None`` to bind to the wildcard address. Passed to
195 :func:`open_tcp_listeners`.
196
197 backlog: The listen backlog, or None to have a good default picked.
198 Passed to :func:`open_tcp_listeners`.
199
200 handler_nursery: The nursery to start handlers in, or None to use an
201 internal nursery. Passed to :func:`serve_listeners`.
202
203 task_status: This function can be used with ``nursery.start``.
204
205 Returns:
206 This function only returns when cancelled.
207
208 """
209 listeners = await trio.open_tcp_listeners(port, host=host, backlog=backlog)
210 await trio.serve_listeners(
211 handler,
212 listeners,
213 handler_nursery=handler_nursery,
214 task_status=task_status
215 )
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/trio/_highlevel_open_tcp_listeners.py b/trio/_highlevel_open_tcp_listeners.py
--- a/trio/_highlevel_open_tcp_listeners.py
+++ b/trio/_highlevel_open_tcp_listeners.py
@@ -1,3 +1,4 @@
+import errno
import sys
from math import inf
@@ -100,9 +101,23 @@
)
listeners = []
+ unsupported_address_families = []
try:
for family, type, proto, _, sockaddr in addresses:
- sock = tsocket.socket(family, type, proto)
+ try:
+ sock = tsocket.socket(family, type, proto)
+ except OSError as ex:
+ if ex.errno == errno.EAFNOSUPPORT:
+ # If a system only supports IPv4, or only IPv6, it
+ # is still likely that getaddrinfo will return
+ # both an IPv4 and an IPv6 address. As long as at
+ # least one of the returned addresses can be
+ # turned into a socket, we won't complain about a
+ # failure to create the other.
+ unsupported_address_families.append(ex)
+ continue
+ else:
+ raise
try:
# See https://github.com/python-trio/trio/issues/39
if sys.platform == "win32":
@@ -131,6 +146,13 @@
listener.socket.close()
raise
+ if unsupported_address_families and not listeners:
+ raise OSError(
+ errno.EAFNOSUPPORT,
+ "This system doesn't support any of the kinds of "
+ "socket that that address could use"
+ ) from trio.MultiError(unsupported_address_families)
+
return listeners
| {"golden_diff": "diff --git a/trio/_highlevel_open_tcp_listeners.py b/trio/_highlevel_open_tcp_listeners.py\n--- a/trio/_highlevel_open_tcp_listeners.py\n+++ b/trio/_highlevel_open_tcp_listeners.py\n@@ -1,3 +1,4 @@\n+import errno\n import sys\n from math import inf\n \n@@ -100,9 +101,23 @@\n )\n \n listeners = []\n+ unsupported_address_families = []\n try:\n for family, type, proto, _, sockaddr in addresses:\n- sock = tsocket.socket(family, type, proto)\n+ try:\n+ sock = tsocket.socket(family, type, proto)\n+ except OSError as ex:\n+ if ex.errno == errno.EAFNOSUPPORT:\n+ # If a system only supports IPv4, or only IPv6, it\n+ # is still likely that getaddrinfo will return\n+ # both an IPv4 and an IPv6 address. As long as at\n+ # least one of the returned addresses can be\n+ # turned into a socket, we won't complain about a\n+ # failure to create the other.\n+ unsupported_address_families.append(ex)\n+ continue\n+ else:\n+ raise\n try:\n # See https://github.com/python-trio/trio/issues/39\n if sys.platform == \"win32\":\n@@ -131,6 +146,13 @@\n listener.socket.close()\n raise\n \n+ if unsupported_address_families and not listeners:\n+ raise OSError(\n+ errno.EAFNOSUPPORT,\n+ \"This system doesn't support any of the kinds of \"\n+ \"socket that that address could use\"\n+ ) from trio.MultiError(unsupported_address_families)\n+\n return listeners\n", "issue": "serve_tcp() fails out-of-the-box on systems with disabled IPv6\nI recently had occasion to run a Trio server on a Linux system that had `ipv6.disable=1` in the kernel command line. I don't know why it was configured this way, but some Googling suggests that this is not such a very uncommon configuration. In this configuration, `getaddrinfo(None, ...)` still returns an IPv6 option, but `socket(AF_INET6, ...)` fails with `[Errno 97] Address family not supported by protocol`. This also affects several tests in trio's test suite.\r\n\r\nI can work around this by explicitly serving on `\"0.0.0.0\"`, but it feels like a rough edge. As an alternative, maybe trio could suppress `EAFNOSUPPORT` errors from IPv6 socket creation in `open_tcp_listeners` if the IPv4 socket was created OK. Thoughts?\n", "before_files": [{"content": "import sys\nfrom math import inf\n\nimport trio\nfrom . import socket as tsocket\n\n__all__ = [\"open_tcp_listeners\", \"serve_tcp\"]\n\n\n# Default backlog size:\n#\n# Having the backlog too low can cause practical problems (a perfectly healthy\n# service that starts failing to accept connections if they arrive in a\n# burst).\n#\n# Having it too high doesn't really cause any problems. Like any buffer, you\n# want backlog queue to be zero usually, and it won't save you if you're\n# getting connection attempts faster than you can call accept() on an ongoing\n# basis. But unlike other buffers, this one doesn't really provide any\n# backpressure. If a connection gets stuck waiting in the backlog queue, then\n# from the peer's point of view the connection succeeded but then their\n# send/recv will stall until we get to it, possibly for a long time. OTOH if\n# there isn't room in the backlog queue... then their connect stalls, possibly\n# for a long time, which is pretty much the same thing.\n#\n# A large backlog can also use a bit more kernel memory, but this seems fairly\n# negligible these days.\n#\n# So this suggests we should make the backlog as large as possible. This also\n# matches what Golang does. 
However, they do it in a weird way, where they\n# have a bunch of code to sniff out the configured upper limit for backlog on\n# different operating systems. But on every system, passing in a too-large\n# backlog just causes it to be silently truncated to the configured maximum,\n# so this is unnecessary -- we can just pass in \"infinity\" and get the maximum\n# that way. (Verified on Windows, Linux, macOS using\n# notes-to-self/measure-listen-backlog.py)\ndef _compute_backlog(backlog):\n if backlog is None:\n backlog = inf\n # Many systems (Linux, BSDs, ...) store the backlog in a uint16 and are\n # missing overflow protection, so we apply our own overflow protection.\n # https://github.com/golang/go/issues/5030\n return min(backlog, 0xffff)\n\n\nasync def open_tcp_listeners(port, *, host=None, backlog=None):\n \"\"\"Create :class:`SocketListener` objects to listen for TCP connections.\n\n Args:\n\n port (int): The port to listen on.\n\n If you use 0 as your port, then the kernel will automatically pick\n an arbitrary open port. But be careful: if you use this feature when\n binding to multiple IP addresses, then each IP address will get its\n own random port, and the returned listeners will probably be\n listening on different ports. In particular, this will happen if you\n use ``host=None`` \u2013 which is the default \u2013 because in this case\n :func:`open_tcp_listeners` will bind to both the IPv4 wildcard\n address (``0.0.0.0``) and also the IPv6 wildcard address (``::``).\n\n host (str, bytes-like, or None): The local interface to bind to. This is\n passed to :func:`~socket.getaddrinfo` with the ``AI_PASSIVE`` flag\n set.\n\n If you want to bind to the wildcard address on both IPv4 and IPv6,\n in order to accept connections on all available interfaces, then\n pass ``None``. This is the default.\n\n If you have a specific interface you want to bind to, pass its IP\n address or hostname here. If a hostname resolves to multiple IP\n addresses, this function will open one listener on each of them.\n\n If you want to use only IPv4, or only IPv6, but want to accept on\n all interfaces, pass the family-specific wildcard address:\n ``\"0.0.0.0\"`` for IPv4-only and ``\"::\"`` for IPv6-only.\n\n backlog (int or None): The listen backlog to use. If you leave this as\n ``None`` then Trio will pick a good default. (Currently: whatever\n your system has configured as the maximum backlog.)\n\n Returns:\n list of :class:`SocketListener`\n\n \"\"\"\n # getaddrinfo sometimes allows port=None, sometimes not (depending on\n # whether host=None). 
And on some systems it treats \"\" as 0, others it\n # doesn't:\n # http://klickverbot.at/blog/2012/01/getaddrinfo-edge-case-behavior-on-windows-linux-and-osx/\n if not isinstance(port, int):\n raise TypeError(\"port must be an int or str, not {!r}\".format(port))\n\n backlog = _compute_backlog(backlog)\n\n addresses = await tsocket.getaddrinfo(\n host,\n port,\n type=tsocket.SOCK_STREAM,\n flags=tsocket.AI_PASSIVE,\n )\n\n listeners = []\n try:\n for family, type, proto, _, sockaddr in addresses:\n sock = tsocket.socket(family, type, proto)\n try:\n # See https://github.com/python-trio/trio/issues/39\n if sys.platform == \"win32\":\n sock.setsockopt(\n tsocket.SOL_SOCKET, tsocket.SO_EXCLUSIVEADDRUSE, 1\n )\n else:\n sock.setsockopt(\n tsocket.SOL_SOCKET, tsocket.SO_REUSEADDR, 1\n )\n\n if family == tsocket.AF_INET6:\n sock.setsockopt(\n tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, 1\n )\n\n await sock.bind(sockaddr)\n sock.listen(backlog)\n\n listeners.append(trio.SocketListener(sock))\n except:\n sock.close()\n raise\n except:\n for listener in listeners:\n listener.socket.close()\n raise\n\n return listeners\n\n\nasync def serve_tcp(\n handler,\n port,\n *,\n host=None,\n backlog=None,\n handler_nursery=None,\n task_status=trio.TASK_STATUS_IGNORED\n):\n \"\"\"Listen for incoming TCP connections, and for each one start a task\n running ``handler(stream)``.\n\n This is a thin convenience wrapper around :func:`open_tcp_listeners` and\n :func:`serve_listeners` \u2013 see them for full details.\n\n .. warning::\n\n If ``handler`` raises an exception, then this function doesn't do\n anything special to catch it \u2013 so by default the exception will\n propagate out and crash your server. If you don't want this, then catch\n exceptions inside your ``handler``, or use a ``handler_nursery`` object\n that responds to exceptions in some other way.\n\n When used with ``nursery.start`` you get back the newly opened listeners.\n So, for example, if you want to start a server in your test suite and then\n connect to it to check that it's working properly, you can use something\n like::\n\n from trio.testing import open_stream_to_socket_listener\n\n async with trio.open_nursery() as nursery:\n listeners = await nursery.start(serve_tcp, handler, 0)\n client_stream = await open_stream_to_socket_listener(listeners[0])\n\n # Then send and receive data on 'client_stream', for example:\n await client_stream.send_all(b\"GET / HTTP/1.0\\\\r\\\\n\\\\r\\\\n\")\n\n This avoids several common pitfalls:\n\n 1. It lets the kernel pick a random open port, so your test suite doesn't\n depend on any particular port being open.\n\n 2. It waits for the server to be accepting connections on that port before\n ``start`` returns, so there's no race condition where the incoming\n connection arrives before the server is ready.\n\n 3. It uses the Listener object to find out which port was picked, so it\n can connect to the right place.\n\n Args:\n handler: The handler to start for each incoming connection. Passed to\n :func:`serve_listeners`.\n\n port: The port to listen on. Use 0 to let the kernel pick an open port.\n Passed to :func:`open_tcp_listeners`.\n\n host (str, bytes, or None): The host interface to listen on; use\n ``None`` to bind to the wildcard address. Passed to\n :func:`open_tcp_listeners`.\n\n backlog: The listen backlog, or None to have a good default picked.\n Passed to :func:`open_tcp_listeners`.\n\n handler_nursery: The nursery to start handlers in, or None to use an\n internal nursery. 
Passed to :func:`serve_listeners`.\n\n task_status: This function can be used with ``nursery.start``.\n\n Returns:\n This function only returns when cancelled.\n\n \"\"\"\n listeners = await trio.open_tcp_listeners(port, host=host, backlog=backlog)\n await trio.serve_listeners(\n handler,\n listeners,\n handler_nursery=handler_nursery,\n task_status=task_status\n )\n", "path": "trio/_highlevel_open_tcp_listeners.py"}], "after_files": [{"content": "import errno\nimport sys\nfrom math import inf\n\nimport trio\nfrom . import socket as tsocket\n\n__all__ = [\"open_tcp_listeners\", \"serve_tcp\"]\n\n\n# Default backlog size:\n#\n# Having the backlog too low can cause practical problems (a perfectly healthy\n# service that starts failing to accept connections if they arrive in a\n# burst).\n#\n# Having it too high doesn't really cause any problems. Like any buffer, you\n# want backlog queue to be zero usually, and it won't save you if you're\n# getting connection attempts faster than you can call accept() on an ongoing\n# basis. But unlike other buffers, this one doesn't really provide any\n# backpressure. If a connection gets stuck waiting in the backlog queue, then\n# from the peer's point of view the connection succeeded but then their\n# send/recv will stall until we get to it, possibly for a long time. OTOH if\n# there isn't room in the backlog queue... then their connect stalls, possibly\n# for a long time, which is pretty much the same thing.\n#\n# A large backlog can also use a bit more kernel memory, but this seems fairly\n# negligible these days.\n#\n# So this suggests we should make the backlog as large as possible. This also\n# matches what Golang does. However, they do it in a weird way, where they\n# have a bunch of code to sniff out the configured upper limit for backlog on\n# different operating systems. But on every system, passing in a too-large\n# backlog just causes it to be silently truncated to the configured maximum,\n# so this is unnecessary -- we can just pass in \"infinity\" and get the maximum\n# that way. (Verified on Windows, Linux, macOS using\n# notes-to-self/measure-listen-backlog.py)\ndef _compute_backlog(backlog):\n if backlog is None:\n backlog = inf\n # Many systems (Linux, BSDs, ...) store the backlog in a uint16 and are\n # missing overflow protection, so we apply our own overflow protection.\n # https://github.com/golang/go/issues/5030\n return min(backlog, 0xffff)\n\n\nasync def open_tcp_listeners(port, *, host=None, backlog=None):\n \"\"\"Create :class:`SocketListener` objects to listen for TCP connections.\n\n Args:\n\n port (int): The port to listen on.\n\n If you use 0 as your port, then the kernel will automatically pick\n an arbitrary open port. But be careful: if you use this feature when\n binding to multiple IP addresses, then each IP address will get its\n own random port, and the returned listeners will probably be\n listening on different ports. In particular, this will happen if you\n use ``host=None`` \u2013 which is the default \u2013 because in this case\n :func:`open_tcp_listeners` will bind to both the IPv4 wildcard\n address (``0.0.0.0``) and also the IPv6 wildcard address (``::``).\n\n host (str, bytes-like, or None): The local interface to bind to. This is\n passed to :func:`~socket.getaddrinfo` with the ``AI_PASSIVE`` flag\n set.\n\n If you want to bind to the wildcard address on both IPv4 and IPv6,\n in order to accept connections on all available interfaces, then\n pass ``None``. 
This is the default.\n\n If you have a specific interface you want to bind to, pass its IP\n address or hostname here. If a hostname resolves to multiple IP\n addresses, this function will open one listener on each of them.\n\n If you want to use only IPv4, or only IPv6, but want to accept on\n all interfaces, pass the family-specific wildcard address:\n ``\"0.0.0.0\"`` for IPv4-only and ``\"::\"`` for IPv6-only.\n\n backlog (int or None): The listen backlog to use. If you leave this as\n ``None`` then Trio will pick a good default. (Currently: whatever\n your system has configured as the maximum backlog.)\n\n Returns:\n list of :class:`SocketListener`\n\n \"\"\"\n # getaddrinfo sometimes allows port=None, sometimes not (depending on\n # whether host=None). And on some systems it treats \"\" as 0, others it\n # doesn't:\n # http://klickverbot.at/blog/2012/01/getaddrinfo-edge-case-behavior-on-windows-linux-and-osx/\n if not isinstance(port, int):\n raise TypeError(\"port must be an int or str, not {!r}\".format(port))\n\n backlog = _compute_backlog(backlog)\n\n addresses = await tsocket.getaddrinfo(\n host,\n port,\n type=tsocket.SOCK_STREAM,\n flags=tsocket.AI_PASSIVE,\n )\n\n listeners = []\n unsupported_address_families = []\n try:\n for family, type, proto, _, sockaddr in addresses:\n try:\n sock = tsocket.socket(family, type, proto)\n except OSError as ex:\n if ex.errno == errno.EAFNOSUPPORT:\n # If a system only supports IPv4, or only IPv6, it\n # is still likely that getaddrinfo will return\n # both an IPv4 and an IPv6 address. As long as at\n # least one of the returned addresses can be\n # turned into a socket, we won't complain about a\n # failure to create the other.\n unsupported_address_families.append(ex)\n continue\n else:\n raise\n try:\n # See https://github.com/python-trio/trio/issues/39\n if sys.platform == \"win32\":\n sock.setsockopt(\n tsocket.SOL_SOCKET, tsocket.SO_EXCLUSIVEADDRUSE, 1\n )\n else:\n sock.setsockopt(\n tsocket.SOL_SOCKET, tsocket.SO_REUSEADDR, 1\n )\n\n if family == tsocket.AF_INET6:\n sock.setsockopt(\n tsocket.IPPROTO_IPV6, tsocket.IPV6_V6ONLY, 1\n )\n\n await sock.bind(sockaddr)\n sock.listen(backlog)\n\n listeners.append(trio.SocketListener(sock))\n except:\n sock.close()\n raise\n except:\n for listener in listeners:\n listener.socket.close()\n raise\n\n if unsupported_address_families and not listeners:\n raise OSError(\n errno.EAFNOSUPPORT,\n \"This system doesn't support any of the kinds of \"\n \"socket that that address could use\"\n ) from trio.MultiError(unsupported_address_families)\n\n return listeners\n\n\nasync def serve_tcp(\n handler,\n port,\n *,\n host=None,\n backlog=None,\n handler_nursery=None,\n task_status=trio.TASK_STATUS_IGNORED\n):\n \"\"\"Listen for incoming TCP connections, and for each one start a task\n running ``handler(stream)``.\n\n This is a thin convenience wrapper around :func:`open_tcp_listeners` and\n :func:`serve_listeners` \u2013 see them for full details.\n\n .. warning::\n\n If ``handler`` raises an exception, then this function doesn't do\n anything special to catch it \u2013 so by default the exception will\n propagate out and crash your server. 
If you don't want this, then catch\n exceptions inside your ``handler``, or use a ``handler_nursery`` object\n that responds to exceptions in some other way.\n\n When used with ``nursery.start`` you get back the newly opened listeners.\n So, for example, if you want to start a server in your test suite and then\n connect to it to check that it's working properly, you can use something\n like::\n\n from trio.testing import open_stream_to_socket_listener\n\n async with trio.open_nursery() as nursery:\n listeners = await nursery.start(serve_tcp, handler, 0)\n client_stream = await open_stream_to_socket_listener(listeners[0])\n\n # Then send and receive data on 'client_stream', for example:\n await client_stream.send_all(b\"GET / HTTP/1.0\\\\r\\\\n\\\\r\\\\n\")\n\n This avoids several common pitfalls:\n\n 1. It lets the kernel pick a random open port, so your test suite doesn't\n depend on any particular port being open.\n\n 2. It waits for the server to be accepting connections on that port before\n ``start`` returns, so there's no race condition where the incoming\n connection arrives before the server is ready.\n\n 3. It uses the Listener object to find out which port was picked, so it\n can connect to the right place.\n\n Args:\n handler: The handler to start for each incoming connection. Passed to\n :func:`serve_listeners`.\n\n port: The port to listen on. Use 0 to let the kernel pick an open port.\n Passed to :func:`open_tcp_listeners`.\n\n host (str, bytes, or None): The host interface to listen on; use\n ``None`` to bind to the wildcard address. Passed to\n :func:`open_tcp_listeners`.\n\n backlog: The listen backlog, or None to have a good default picked.\n Passed to :func:`open_tcp_listeners`.\n\n handler_nursery: The nursery to start handlers in, or None to use an\n internal nursery. Passed to :func:`serve_listeners`.\n\n task_status: This function can be used with ``nursery.start``.\n\n Returns:\n This function only returns when cancelled.\n\n \"\"\"\n listeners = await trio.open_tcp_listeners(port, host=host, backlog=backlog)\n await trio.serve_listeners(\n handler,\n listeners,\n handler_nursery=handler_nursery,\n task_status=task_status\n )\n", "path": "trio/_highlevel_open_tcp_listeners.py"}]} | 2,960 | 399 |
gh_patches_debug_906 | rasdani/github-patches | git_diff | dotkom__onlineweb4-501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UserResource in API should not display last login date publicly
Somewhat sensitive information...
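
A minimal sketch of the straightforward mitigation, assuming the whitelist approach this resource already uses, is simply to leave `last_login` out of the exposed fields (Django + tastypie project code, so it only runs inside the project, not standalone):

```python
from tastypie.resources import ModelResource

from apps.authentication.models import OnlineUser as User


class UserResource(ModelResource):

    class Meta:
        queryset = User.objects.all()
        resource_name = 'user'
        # 'last_login' deliberately omitted so login timestamps stay private.
        fields = ['username', 'first_name', 'last_name', 'email', ]
```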
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/api/v0/authentication.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from tastypie import fields
4 from tastypie.resources import ModelResource
5 from tastypie.authorization import Authorization
6
7 from apps.authentication.models import OnlineUser as User
8
9 class UserResource(ModelResource):
10
11 class Meta:
12 queryset = User.objects.all()
13 resource_name = 'user'
14 fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/api/v0/authentication.py b/apps/api/v0/authentication.py
--- a/apps/api/v0/authentication.py
+++ b/apps/api/v0/authentication.py
@@ -11,4 +11,4 @@
class Meta:
queryset = User.objects.all()
resource_name = 'user'
- fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]
+ fields = ['username', 'first_name', 'last_name', 'email', ]
| {"golden_diff": "diff --git a/apps/api/v0/authentication.py b/apps/api/v0/authentication.py\n--- a/apps/api/v0/authentication.py\n+++ b/apps/api/v0/authentication.py\n@@ -11,4 +11,4 @@\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n- fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]\n+ fields = ['username', 'first_name', 'last_name', 'email', ]\n", "issue": "UserResource in API should not display last login date publicly\nSomewhat sensitive information...\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]\n", "path": "apps/api/v0/authentication.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'email', ]\n", "path": "apps/api/v0/authentication.py"}]} | 390 | 108 |
gh_patches_debug_11661 | rasdani/github-patches | git_diff | projectmesa__mesa-1439 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Seed and random should not be class attributes
**Describe the bug**
<!-- A clear and concise description of the bug -->
Every time a model instance is created, it updates the seed and random attribute of every other instance of the same class.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen -->
I would expect each instance to have their own attribute. This way, the behavior of each instance is independent and predictable.
**To Reproduce**
<!-- Steps to reproduce the bug, or a link to a project where the bug is visible -->
```pycon
>>> from mesa import Model
>>> class Ex(Model):
... def __init__(self, seed=2):
... pass
...
>>> a = Ex(seed=1)
>>> print(a._seed, a.random)
1 <random.Random object at 0x282a8d0>
>>> b = Ex(seed=2)
>>> print(a._seed, a.random)
2 <random.Random object at 0x282b2d0>
```
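
One way to see what per-instance behaviour would look like is to assign the seed and RNG onto the newly created object inside `__new__` instead of onto the class. The sketch below is illustrative only, trimmed down from the real `Model` class:

```python
# Sketch: store _seed and random on the instance, so constructing a second
# model no longer rebinds the first model's attributes.
import random
from typing import Any


class Model:
    def __new__(cls, *args: Any, **kwargs: Any) -> Any:
        obj = object.__new__(cls)
        obj._seed = kwargs.get("seed", None)
        obj.random = random.Random(obj._seed)
        return obj


class Ex(Model):
    def __init__(self, seed=2):
        pass


a = Ex(seed=1)
b = Ex(seed=2)
assert a._seed == 1 and b._seed == 2   # each instance keeps its own seed
assert a.random is not b.random        # and its own generator
```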
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/model.py`
Content:
```
1 """
2 The model class for Mesa framework.
3
4 Core Objects: Model
5
6 """
7 # Mypy; for the `|` operator purpose
8 # Remove this __future__ import once the oldest supported Python is 3.10
9 from __future__ import annotations
10
11 import random
12
13 from mesa.datacollection import DataCollector
14
15 # mypy
16 from typing import Any
17
18
19 class Model:
20 """Base class for models."""
21
22 def __new__(cls, *args: Any, **kwargs: Any) -> Any:
23 """Create a new model object and instantiate its RNG automatically."""
24 cls._seed = kwargs.get("seed", None)
25 cls.random = random.Random(cls._seed)
26 return object.__new__(cls)
27
28 def __init__(self, *args: Any, **kwargs: Any) -> None:
29 """Create a new model. Overload this method with the actual code to
30 start the model.
31
32 Attributes:
33 schedule: schedule object
34 running: a bool indicating if the model should continue running
35
36 """
37
38 self.running = True
39 self.schedule = None
40 self.current_id = 0
41
42 def run_model(self) -> None:
43 """Run the model until the end condition is reached. Overload as
44 needed.
45
46 """
47 while self.running:
48 self.step()
49
50 def step(self) -> None:
51 """A single step. Fill in here."""
52 pass
53
54 def next_id(self) -> int:
55 """Return the next unique ID for agents, increment current_id"""
56 self.current_id += 1
57 return self.current_id
58
59 def reset_randomizer(self, seed: int | None = None) -> None:
60 """Reset the model random number generator.
61
62 Args:
63 seed: A new seed for the RNG; if None, reset using the current seed
64 """
65
66 if seed is None:
67 seed = self._seed
68 self.random.seed(seed)
69 self._seed = seed
70
71 def initialize_data_collector(
72 self, model_reporters=None, agent_reporters=None, tables=None
73 ) -> None:
74 if not hasattr(self, "schedule") or self.schedule is None:
75 raise RuntimeError(
76 "You must initialize the scheduler (self.schedule) before initializing the data collector."
77 )
78 if self.schedule.get_agent_count() == 0:
79 raise RuntimeError(
80 "You must add agents to the scheduler before initializing the data collector."
81 )
82 self.datacollector = DataCollector(
83 model_reporters=model_reporters,
84 agent_reporters=agent_reporters,
85 tables=tables,
86 )
87 # Collect data for the first time during initialization.
88 self.datacollector.collect(self)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mesa/model.py b/mesa/model.py
--- a/mesa/model.py
+++ b/mesa/model.py
@@ -21,9 +21,10 @@
def __new__(cls, *args: Any, **kwargs: Any) -> Any:
"""Create a new model object and instantiate its RNG automatically."""
- cls._seed = kwargs.get("seed", None)
- cls.random = random.Random(cls._seed)
- return object.__new__(cls)
+ obj = object.__new__(cls)
+ obj._seed = kwargs.get("seed", None)
+ obj.random = random.Random(obj._seed)
+ return obj
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Create a new model. Overload this method with the actual code to
| {"golden_diff": "diff --git a/mesa/model.py b/mesa/model.py\n--- a/mesa/model.py\n+++ b/mesa/model.py\n@@ -21,9 +21,10 @@\n \n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n- cls._seed = kwargs.get(\"seed\", None)\n- cls.random = random.Random(cls._seed)\n- return object.__new__(cls)\n+ obj = object.__new__(cls)\n+ obj._seed = kwargs.get(\"seed\", None)\n+ obj.random = random.Random(obj._seed)\n+ return obj\n \n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n", "issue": "Seed and random should not be class attributes\n**Describe the bug**\r\n<!-- A clear and concise description the bug -->\r\n\r\nEvery time a model instance is created, it updates the seed and random attribute of every other instance of the same class.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen -->\r\n\r\nI would expect each instance to have their own attribute. This way, the behavior of each instance is independent and predictable.\r\n\r\n**To Reproduce**\r\n<!-- Steps to reproduce the bug, or a link to a project where the bug is visible -->\r\n\r\n```pycon\r\n>>> from mesa import Model\r\n>>> class Ex(Model):\r\n... def __init__(self, seed=2):\r\n... pass\r\n...\r\n>>> a = Ex(seed=1)\r\n>>> print(a._seed, a.random)\r\n1 <random.Random object at 0x282a8d0>\r\n>>> b = Ex(seed=2)\r\n>>> print(a._seed, a.random)\r\n2 <random.Random object at 0x282b2d0>\r\n```\n", "before_files": [{"content": "\"\"\"\nThe model class for Mesa framework.\n\nCore Objects: Model\n\n\"\"\"\n# Mypy; for the `|` operator purpose\n# Remove this __future__ import once the oldest supported Python is 3.10\nfrom __future__ import annotations\n\nimport random\n\nfrom mesa.datacollection import DataCollector\n\n# mypy\nfrom typing import Any\n\n\nclass Model:\n \"\"\"Base class for models.\"\"\"\n\n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n cls._seed = kwargs.get(\"seed\", None)\n cls.random = random.Random(cls._seed)\n return object.__new__(cls)\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n start the model.\n\n Attributes:\n schedule: schedule object\n running: a bool indicating if the model should continue running\n\n \"\"\"\n\n self.running = True\n self.schedule = None\n self.current_id = 0\n\n def run_model(self) -> None:\n \"\"\"Run the model until the end condition is reached. Overload as\n needed.\n\n \"\"\"\n while self.running:\n self.step()\n\n def step(self) -> None:\n \"\"\"A single step. 
Fill in here.\"\"\"\n pass\n\n def next_id(self) -> int:\n \"\"\"Return the next unique ID for agents, increment current_id\"\"\"\n self.current_id += 1\n return self.current_id\n\n def reset_randomizer(self, seed: int | None = None) -> None:\n \"\"\"Reset the model random number generator.\n\n Args:\n seed: A new seed for the RNG; if None, reset using the current seed\n \"\"\"\n\n if seed is None:\n seed = self._seed\n self.random.seed(seed)\n self._seed = seed\n\n def initialize_data_collector(\n self, model_reporters=None, agent_reporters=None, tables=None\n ) -> None:\n if not hasattr(self, \"schedule\") or self.schedule is None:\n raise RuntimeError(\n \"You must initialize the scheduler (self.schedule) before initializing the data collector.\"\n )\n if self.schedule.get_agent_count() == 0:\n raise RuntimeError(\n \"You must add agents to the scheduler before initializing the data collector.\"\n )\n self.datacollector = DataCollector(\n model_reporters=model_reporters,\n agent_reporters=agent_reporters,\n tables=tables,\n )\n # Collect data for the first time during initialization.\n self.datacollector.collect(self)\n", "path": "mesa/model.py"}], "after_files": [{"content": "\"\"\"\nThe model class for Mesa framework.\n\nCore Objects: Model\n\n\"\"\"\n# Mypy; for the `|` operator purpose\n# Remove this __future__ import once the oldest supported Python is 3.10\nfrom __future__ import annotations\n\nimport random\n\nfrom mesa.datacollection import DataCollector\n\n# mypy\nfrom typing import Any\n\n\nclass Model:\n \"\"\"Base class for models.\"\"\"\n\n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n obj = object.__new__(cls)\n obj._seed = kwargs.get(\"seed\", None)\n obj.random = random.Random(obj._seed)\n return obj\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n start the model.\n\n Attributes:\n schedule: schedule object\n running: a bool indicating if the model should continue running\n\n \"\"\"\n\n self.running = True\n self.schedule = None\n self.current_id = 0\n\n def run_model(self) -> None:\n \"\"\"Run the model until the end condition is reached. Overload as\n needed.\n\n \"\"\"\n while self.running:\n self.step()\n\n def step(self) -> None:\n \"\"\"A single step. 
Fill in here.\"\"\"\n pass\n\n def next_id(self) -> int:\n \"\"\"Return the next unique ID for agents, increment current_id\"\"\"\n self.current_id += 1\n return self.current_id\n\n def reset_randomizer(self, seed: int | None = None) -> None:\n \"\"\"Reset the model random number generator.\n\n Args:\n seed: A new seed for the RNG; if None, reset using the current seed\n \"\"\"\n\n if seed is None:\n seed = self._seed\n self.random.seed(seed)\n self._seed = seed\n\n def initialize_data_collector(\n self, model_reporters=None, agent_reporters=None, tables=None\n ) -> None:\n if not hasattr(self, \"schedule\") or self.schedule is None:\n raise RuntimeError(\n \"You must initialize the scheduler (self.schedule) before initializing the data collector.\"\n )\n if self.schedule.get_agent_count() == 0:\n raise RuntimeError(\n \"You must add agents to the scheduler before initializing the data collector.\"\n )\n self.datacollector = DataCollector(\n model_reporters=model_reporters,\n agent_reporters=agent_reporters,\n tables=tables,\n )\n # Collect data for the first time during initialization.\n self.datacollector.collect(self)\n", "path": "mesa/model.py"}]} | 1,215 | 184 |
gh_patches_debug_25285 | rasdani/github-patches | git_diff | DataDog__dd-agent-2139 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot find Docker cgroup directory. Be sure your system is supported.
Running docker 1.8.3 on Ubuntu 14.04, with dd-agent running under Kubernetes. I kept getting the error message in the subject until I made a patch to 5.6.2: https://gist.github.com/bakins/00cc12f26e2947af6679
I am not running the packaged version of docker (i.e., the deb), just the binary (I had issues with Kubernetes when using the packaged version).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/dockerutil.py`
Content:
```
1 # stdlib
2 import os
3
4 # 3rd party
5 from docker import Client
6 from docker import tls
7
8
9 class MountException(Exception):
10 pass
11
12 # Default docker client settings
13 DEFAULT_TIMEOUT = 5
14 DEFAULT_VERSION = 'auto'
15
16 _docker_client_settings = {"version": DEFAULT_VERSION}
17
18
19 def get_docker_settings():
20 global _docker_client_settings
21 return _docker_client_settings
22
23
24 def reset_docker_settings():
25 global _docker_client_settings
26 _docker_client_settings = {"version": DEFAULT_VERSION}
27
28
29 def set_docker_settings(init_config, instance):
30 global _docker_client_settings
31 _docker_client_settings = {
32 "version": init_config.get('api_version', DEFAULT_VERSION),
33 "base_url": instance.get("url"),
34 "timeout": int(init_config.get('timeout', DEFAULT_TIMEOUT)),
35 }
36
37 if init_config.get('tls', False):
38 client_cert_path = init_config.get('tls_client_cert')
39 client_key_path = init_config.get('tls_client_key')
40 cacert = init_config.get('tls_cacert')
41 verify = init_config.get('tls_verify')
42
43 client_cert = None
44 if client_cert_path is not None and client_key_path is not None:
45 client_cert = (client_cert_path, client_key_path)
46
47 verify = verify if verify is not None else cacert
48 tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)
49 _docker_client_settings["tls"] = tls_config
50
51
52 def get_client():
53 return Client(**_docker_client_settings)
54
55
56 def find_cgroup(hierarchy, docker_root):
57 """Find the mount point for a specified cgroup hierarchy.
58
59 Works with old style and new style mounts.
60 """
61 with open(os.path.join(docker_root, "/proc/mounts"), 'r') as fp:
62 mounts = map(lambda x: x.split(), fp.read().splitlines())
63 cgroup_mounts = filter(lambda x: x[2] == "cgroup", mounts)
64 if len(cgroup_mounts) == 0:
65 raise Exception(
66 "Can't find mounted cgroups. If you run the Agent inside a container,"
67 " please refer to the documentation.")
68 # Old cgroup style
69 if len(cgroup_mounts) == 1:
70 return os.path.join(docker_root, cgroup_mounts[0][1])
71
72 candidate = None
73 for _, mountpoint, _, opts, _, _ in cgroup_mounts:
74 if hierarchy in opts:
75 if mountpoint.startswith("/host/"):
76 return os.path.join(docker_root, mountpoint)
77 candidate = mountpoint
78
79 if candidate is not None:
80 return os.path.join(docker_root, candidate)
81 raise Exception("Can't find mounted %s cgroups." % hierarchy)
82
83
84 def find_cgroup_filename_pattern(mountpoints, container_id):
85 # We try with different cgroups so that it works even if only one is properly working
86 for mountpoint in mountpoints.itervalues():
87 stat_file_path_lxc = os.path.join(mountpoint, "lxc")
88 stat_file_path_docker = os.path.join(mountpoint, "docker")
89 stat_file_path_coreos = os.path.join(mountpoint, "system.slice")
90 stat_file_path_kubernetes = os.path.join(mountpoint, container_id)
91 stat_file_path_kubernetes_docker = os.path.join(mountpoint, "system", "docker", container_id)
92
93 if os.path.exists(stat_file_path_lxc):
94 return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')
95 elif os.path.exists(stat_file_path_docker):
96 return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')
97 elif os.path.exists(stat_file_path_coreos):
98 return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')
99 elif os.path.exists(stat_file_path_kubernetes):
100 return os.path.join('%(mountpoint)s/%(id)s/%(file)s')
101 elif os.path.exists(stat_file_path_kubernetes_docker):
102 return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')
103
104 raise MountException("Cannot find Docker cgroup directory. Be sure your system is supported.")
105
106
107 def image_tag_extractor(entity, key):
108 if "Image" in entity:
109 split = entity["Image"].split(":")
110 if len(split) <= key:
111 return None
112 elif len(split) > 2:
113 # if the repo is in the image name and has the form 'docker.clearbit:5000'
114 # the split will be like [repo_url, repo_port/image_name, image_tag]. Let's avoid that
115 split = [':'.join(split[:-1]), split[-1]]
116 return [split[key]]
117 if "RepoTags" in entity:
118 splits = [el.split(":") for el in entity["RepoTags"]]
119 tags = set()
120 for split in splits:
121 if len(split) > 2:
122 split = [':'.join(split[:-1]), split[-1]]
123 if len(split) > key:
124 tags.add(split[key])
125 if len(tags) > 0:
126 return list(tags)
127 return None
128
129
130 def container_name_extractor(co):
131 names = co.get('Names', [])
132 if names is not None:
133 # we sort the list to make sure that a docker API update introducing
134 # new names with a single "/" won't make us report dups.
135 names = sorted(names)
136 for name in names:
137 # the leading "/" is legit, if there's another one it means the name is actually an alias
138 if name.count('/') <= 1:
139 return [str(name).lstrip('/')]
140 return co.get('Id')[:11]
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/dockerutil.py b/utils/dockerutil.py
--- a/utils/dockerutil.py
+++ b/utils/dockerutil.py
@@ -89,6 +89,7 @@
stat_file_path_coreos = os.path.join(mountpoint, "system.slice")
stat_file_path_kubernetes = os.path.join(mountpoint, container_id)
stat_file_path_kubernetes_docker = os.path.join(mountpoint, "system", "docker", container_id)
+ stat_file_path_docker_daemon = os.path.join(mountpoint, "docker-daemon", "docker", container_id)
if os.path.exists(stat_file_path_lxc):
return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')
@@ -100,6 +101,9 @@
return os.path.join('%(mountpoint)s/%(id)s/%(file)s')
elif os.path.exists(stat_file_path_kubernetes_docker):
return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')
+ elif os.path.exists(stat_file_path_docker_daemon):
+ return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')
+
raise MountException("Cannot find Docker cgroup directory. Be sure your system is supported.")
| {"golden_diff": "diff --git a/utils/dockerutil.py b/utils/dockerutil.py\n--- a/utils/dockerutil.py\n+++ b/utils/dockerutil.py\n@@ -89,6 +89,7 @@\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n+ stat_file_path_docker_daemon = os.path.join(mountpoint, \"docker-daemon\", \"docker\", container_id)\n \n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n@@ -100,6 +101,9 @@\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n+ elif os.path.exists(stat_file_path_docker_daemon):\n+ return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')\n+\n \n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n", "issue": "Cannot find Docker cgroup directory. Be sure your system is supported.\nRunning docker 1.8.3 on Ubuntu 14.04. Running dd-agent under Kubernetes. Kept getting the error message in the subject until I made a patch to 5.6.2 https://gist.github.com/bakins/00cc12f26e2947af6679\n\nI am not running the packaged version of docker (ie, the deb) - just the binary (issues with Kubernetes for me using packaged version).\n\n", "before_files": [{"content": "# stdlib\nimport os\n\n# 3rd party\nfrom docker import Client\nfrom docker import tls\n\n\nclass MountException(Exception):\n pass\n\n# Default docker client settings\nDEFAULT_TIMEOUT = 5\nDEFAULT_VERSION = 'auto'\n\n_docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef get_docker_settings():\n global _docker_client_settings\n return _docker_client_settings\n\n\ndef reset_docker_settings():\n global _docker_client_settings\n _docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef set_docker_settings(init_config, instance):\n global _docker_client_settings\n _docker_client_settings = {\n \"version\": init_config.get('api_version', DEFAULT_VERSION),\n \"base_url\": instance.get(\"url\"),\n \"timeout\": int(init_config.get('timeout', DEFAULT_TIMEOUT)),\n }\n\n if init_config.get('tls', False):\n client_cert_path = init_config.get('tls_client_cert')\n client_key_path = init_config.get('tls_client_key')\n cacert = init_config.get('tls_cacert')\n verify = init_config.get('tls_verify')\n\n client_cert = None\n if client_cert_path is not None and client_key_path is not None:\n client_cert = (client_cert_path, client_key_path)\n\n verify = verify if verify is not None else cacert\n tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)\n _docker_client_settings[\"tls\"] = tls_config\n\n\ndef get_client():\n return Client(**_docker_client_settings)\n\n\ndef find_cgroup(hierarchy, docker_root):\n \"\"\"Find the mount point for a specified cgroup hierarchy.\n\n Works with old style and new style mounts.\n \"\"\"\n with open(os.path.join(docker_root, \"/proc/mounts\"), 'r') as fp:\n mounts = map(lambda x: x.split(), fp.read().splitlines())\n cgroup_mounts = filter(lambda x: x[2] == \"cgroup\", mounts)\n if len(cgroup_mounts) == 0:\n raise Exception(\n \"Can't find mounted cgroups. 
If you run the Agent inside a container,\"\n \" please refer to the documentation.\")\n # Old cgroup style\n if len(cgroup_mounts) == 1:\n return os.path.join(docker_root, cgroup_mounts[0][1])\n\n candidate = None\n for _, mountpoint, _, opts, _, _ in cgroup_mounts:\n if hierarchy in opts:\n if mountpoint.startswith(\"/host/\"):\n return os.path.join(docker_root, mountpoint)\n candidate = mountpoint\n\n if candidate is not None:\n return os.path.join(docker_root, candidate)\n raise Exception(\"Can't find mounted %s cgroups.\" % hierarchy)\n\n\ndef find_cgroup_filename_pattern(mountpoints, container_id):\n # We try with different cgroups so that it works even if only one is properly working\n for mountpoint in mountpoints.itervalues():\n stat_file_path_lxc = os.path.join(mountpoint, \"lxc\")\n stat_file_path_docker = os.path.join(mountpoint, \"docker\")\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n\n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker):\n return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_coreos):\n return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes):\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n\n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n\n\ndef image_tag_extractor(entity, key):\n if \"Image\" in entity:\n split = entity[\"Image\"].split(\":\")\n if len(split) <= key:\n return None\n elif len(split) > 2:\n # if the repo is in the image name and has the form 'docker.clearbit:5000'\n # the split will be like [repo_url, repo_port/image_name, image_tag]. 
Let's avoid that\n split = [':'.join(split[:-1]), split[-1]]\n return [split[key]]\n if \"RepoTags\" in entity:\n splits = [el.split(\":\") for el in entity[\"RepoTags\"]]\n tags = set()\n for split in splits:\n if len(split) > 2:\n split = [':'.join(split[:-1]), split[-1]]\n if len(split) > key:\n tags.add(split[key])\n if len(tags) > 0:\n return list(tags)\n return None\n\n\ndef container_name_extractor(co):\n names = co.get('Names', [])\n if names is not None:\n # we sort the list to make sure that a docker API update introducing\n # new names with a single \"/\" won't make us report dups.\n names = sorted(names)\n for name in names:\n # the leading \"/\" is legit, if there's another one it means the name is actually an alias\n if name.count('/') <= 1:\n return [str(name).lstrip('/')]\n return co.get('Id')[:11]\n", "path": "utils/dockerutil.py"}], "after_files": [{"content": "# stdlib\nimport os\n\n# 3rd party\nfrom docker import Client\nfrom docker import tls\n\n\nclass MountException(Exception):\n pass\n\n# Default docker client settings\nDEFAULT_TIMEOUT = 5\nDEFAULT_VERSION = 'auto'\n\n_docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef get_docker_settings():\n global _docker_client_settings\n return _docker_client_settings\n\n\ndef reset_docker_settings():\n global _docker_client_settings\n _docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef set_docker_settings(init_config, instance):\n global _docker_client_settings\n _docker_client_settings = {\n \"version\": init_config.get('api_version', DEFAULT_VERSION),\n \"base_url\": instance.get(\"url\"),\n \"timeout\": int(init_config.get('timeout', DEFAULT_TIMEOUT)),\n }\n\n if init_config.get('tls', False):\n client_cert_path = init_config.get('tls_client_cert')\n client_key_path = init_config.get('tls_client_key')\n cacert = init_config.get('tls_cacert')\n verify = init_config.get('tls_verify')\n\n client_cert = None\n if client_cert_path is not None and client_key_path is not None:\n client_cert = (client_cert_path, client_key_path)\n\n verify = verify if verify is not None else cacert\n tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)\n _docker_client_settings[\"tls\"] = tls_config\n\n\ndef get_client():\n return Client(**_docker_client_settings)\n\n\ndef find_cgroup(hierarchy, docker_root):\n \"\"\"Find the mount point for a specified cgroup hierarchy.\n\n Works with old style and new style mounts.\n \"\"\"\n with open(os.path.join(docker_root, \"/proc/mounts\"), 'r') as fp:\n mounts = map(lambda x: x.split(), fp.read().splitlines())\n cgroup_mounts = filter(lambda x: x[2] == \"cgroup\", mounts)\n if len(cgroup_mounts) == 0:\n raise Exception(\n \"Can't find mounted cgroups. 
If you run the Agent inside a container,\"\n \" please refer to the documentation.\")\n # Old cgroup style\n if len(cgroup_mounts) == 1:\n return os.path.join(docker_root, cgroup_mounts[0][1])\n\n candidate = None\n for _, mountpoint, _, opts, _, _ in cgroup_mounts:\n if hierarchy in opts:\n if mountpoint.startswith(\"/host/\"):\n return os.path.join(docker_root, mountpoint)\n candidate = mountpoint\n\n if candidate is not None:\n return os.path.join(docker_root, candidate)\n raise Exception(\"Can't find mounted %s cgroups.\" % hierarchy)\n\n\ndef find_cgroup_filename_pattern(mountpoints, container_id):\n # We try with different cgroups so that it works even if only one is properly working\n for mountpoint in mountpoints.itervalues():\n stat_file_path_lxc = os.path.join(mountpoint, \"lxc\")\n stat_file_path_docker = os.path.join(mountpoint, \"docker\")\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n stat_file_path_docker_daemon = os.path.join(mountpoint, \"docker-daemon\", \"docker\", container_id)\n\n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker):\n return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_coreos):\n return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes):\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker_daemon):\n return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')\n\n\n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n\n\ndef image_tag_extractor(entity, key):\n if \"Image\" in entity:\n split = entity[\"Image\"].split(\":\")\n if len(split) <= key:\n return None\n elif len(split) > 2:\n # if the repo is in the image name and has the form 'docker.clearbit:5000'\n # the split will be like [repo_url, repo_port/image_name, image_tag]. Let's avoid that\n split = [':'.join(split[:-1]), split[-1]]\n return [split[key]]\n if \"RepoTags\" in entity:\n splits = [el.split(\":\") for el in entity[\"RepoTags\"]]\n tags = set()\n for split in splits:\n if len(split) > 2:\n split = [':'.join(split[:-1]), split[-1]]\n if len(split) > key:\n tags.add(split[key])\n if len(tags) > 0:\n return list(tags)\n return None\n\n\ndef container_name_extractor(co):\n names = co.get('Names', [])\n if names is not None:\n # we sort the list to make sure that a docker API update introducing\n # new names with a single \"/\" won't make us report dups.\n names = sorted(names)\n for name in names:\n # the leading \"/\" is legit, if there's another one it means the name is actually an alias\n if name.count('/') <= 1:\n return [str(name).lstrip('/')]\n return co.get('Id')[:11]\n", "path": "utils/dockerutil.py"}]} | 1,944 | 287 |
gh_patches_debug_40452 | rasdani/github-patches | git_diff | encode__httpx-2382 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for streaming multipart/form-data
(After initial discussion in https://github.com/encode/httpx/discussions/2227.)
As far as I can see, there's no way to stream `multipart/form-data` uploads when the `Content-Length` is unknown, since HTTPX tries to determine the body size one way or another (the last resort is to read it all into memory and count it). Please let me know if I overlooked something!
A use case would be streaming something whose size is unknown beforehand, such as fetching a CSV export from a Postgres database and uploading it in parallel.
HTTP/1.1 has chunked encoding for that. As you know, that streams the _message_ itself, not the resource. This encoding is supported for "direct file uploads" in HTTPX, but not for `multipart/form-data`. This is strange, since chunked encoding is independent of `Content-Type`. Indeed, if both chunking and compression are in place, you first compress, and then you chunk the compressed bytes.
In HTTP/2, chunked encoding does not exist; the byte size is given per DATA frame. As you know, a message is allowed to have several. So, in this case the API would support uploading with multiple DATA frames, coming from a generator or iterator, for example.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/_multipart.py`
Content:
```
1 import binascii
2 import io
3 import os
4 import typing
5 from pathlib import Path
6
7 from ._types import (
8 AsyncByteStream,
9 FileContent,
10 FileTypes,
11 RequestData,
12 RequestFiles,
13 SyncByteStream,
14 )
15 from ._utils import (
16 format_form_param,
17 guess_content_type,
18 peek_filelike_length,
19 primitive_value_to_str,
20 to_bytes,
21 )
22
23
24 def get_multipart_boundary_from_content_type(
25 content_type: typing.Optional[bytes],
26 ) -> typing.Optional[bytes]:
27 if not content_type or not content_type.startswith(b"multipart/form-data"):
28 return None
29 # parse boundary according to
30 # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1
31 if b";" in content_type:
32 for section in content_type.split(b";"):
33 if section.strip().lower().startswith(b"boundary="):
34 return section.strip()[len(b"boundary=") :].strip(b'"')
35 return None
36
37
38 class DataField:
39 """
40 A single form field item, within a multipart form field.
41 """
42
43 def __init__(
44 self, name: str, value: typing.Union[str, bytes, int, float, None]
45 ) -> None:
46 if not isinstance(name, str):
47 raise TypeError(
48 f"Invalid type for name. Expected str, got {type(name)}: {name!r}"
49 )
50 if value is not None and not isinstance(value, (str, bytes, int, float)):
51 raise TypeError(
52 f"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}"
53 )
54 self.name = name
55 self.value: typing.Union[str, bytes] = (
56 value if isinstance(value, bytes) else primitive_value_to_str(value)
57 )
58
59 def render_headers(self) -> bytes:
60 if not hasattr(self, "_headers"):
61 name = format_form_param("name", self.name)
62 self._headers = b"".join(
63 [b"Content-Disposition: form-data; ", name, b"\r\n\r\n"]
64 )
65
66 return self._headers
67
68 def render_data(self) -> bytes:
69 if not hasattr(self, "_data"):
70 self._data = to_bytes(self.value)
71
72 return self._data
73
74 def get_length(self) -> int:
75 headers = self.render_headers()
76 data = self.render_data()
77 return len(headers) + len(data)
78
79 def render(self) -> typing.Iterator[bytes]:
80 yield self.render_headers()
81 yield self.render_data()
82
83
84 class FileField:
85 """
86 A single file field item, within a multipart form field.
87 """
88
89 CHUNK_SIZE = 64 * 1024
90
91 def __init__(self, name: str, value: FileTypes) -> None:
92 self.name = name
93
94 fileobj: FileContent
95
96 headers: typing.Dict[str, str] = {}
97 content_type: typing.Optional[str] = None
98
99 # This large tuple based API largely mirror's requests' API
100 # It would be good to think of better APIs for this that we could include in httpx 2.0
101 # since variable length tuples (especially of 4 elements) are quite unwieldly
102 if isinstance(value, tuple):
103 if len(value) == 2:
104 # neither the 3rd parameter (content_type) nor the 4th (headers) was included
105 filename, fileobj = value # type: ignore
106 elif len(value) == 3:
107 filename, fileobj, content_type = value # type: ignore
108 else:
109 # all 4 parameters included
110 filename, fileobj, content_type, headers = value # type: ignore
111 else:
112 filename = Path(str(getattr(value, "name", "upload"))).name
113 fileobj = value
114
115 if content_type is None:
116 content_type = guess_content_type(filename)
117
118 has_content_type_header = any("content-type" in key.lower() for key in headers)
119 if content_type is not None and not has_content_type_header:
120 # note that unlike requests, we ignore the content_type
121 # provided in the 3rd tuple element if it is also included in the headers
122 # requests does the opposite (it overwrites the header with the 3rd tuple element)
123 headers["Content-Type"] = content_type
124
125 if "b" not in getattr(fileobj, "mode", "b"):
126 raise TypeError(
127 "Multipart file uploads must be opened in binary mode, not text mode."
128 )
129 if isinstance(fileobj, io.StringIO):
130 raise TypeError(
131 "Multipart file uploads require 'io.BytesIO', not 'io.StringIO'."
132 )
133
134 self.filename = filename
135 self.file = fileobj
136 self.headers = headers
137
138 def get_length(self) -> int:
139 headers = self.render_headers()
140
141 if isinstance(self.file, (str, bytes)):
142 return len(headers) + len(to_bytes(self.file))
143
144 # Let's do our best not to read `file` into memory.
145 file_length = peek_filelike_length(self.file)
146 if file_length is None:
147 # As a last resort, read file and cache contents for later.
148 assert not hasattr(self, "_data")
149 self._data = to_bytes(self.file.read())
150 file_length = len(self._data)
151
152 return len(headers) + file_length
153
154 def render_headers(self) -> bytes:
155 if not hasattr(self, "_headers"):
156 parts = [
157 b"Content-Disposition: form-data; ",
158 format_form_param("name", self.name),
159 ]
160 if self.filename:
161 filename = format_form_param("filename", self.filename)
162 parts.extend([b"; ", filename])
163 for header_name, header_value in self.headers.items():
164 key, val = f"\r\n{header_name}: ".encode(), header_value.encode()
165 parts.extend([key, val])
166 parts.append(b"\r\n\r\n")
167 self._headers = b"".join(parts)
168
169 return self._headers
170
171 def render_data(self) -> typing.Iterator[bytes]:
172 if isinstance(self.file, (str, bytes)):
173 yield to_bytes(self.file)
174 return
175
176 if hasattr(self, "_data"):
177 # Already rendered.
178 yield self._data
179 return
180
181 if hasattr(self.file, "seek"):
182 self.file.seek(0)
183
184 chunk = self.file.read(self.CHUNK_SIZE)
185 while chunk:
186 yield to_bytes(chunk)
187 chunk = self.file.read(self.CHUNK_SIZE)
188
189 def render(self) -> typing.Iterator[bytes]:
190 yield self.render_headers()
191 yield from self.render_data()
192
193
194 class MultipartStream(SyncByteStream, AsyncByteStream):
195 """
196 Request content as streaming multipart encoded form data.
197 """
198
199 def __init__(
200 self,
201 data: RequestData,
202 files: RequestFiles,
203 boundary: typing.Optional[bytes] = None,
204 ) -> None:
205 if boundary is None:
206 boundary = binascii.hexlify(os.urandom(16))
207
208 self.boundary = boundary
209 self.content_type = "multipart/form-data; boundary=%s" % boundary.decode(
210 "ascii"
211 )
212 self.fields = list(self._iter_fields(data, files))
213
214 def _iter_fields(
215 self, data: RequestData, files: RequestFiles
216 ) -> typing.Iterator[typing.Union[FileField, DataField]]:
217 for name, value in data.items():
218 if isinstance(value, (tuple, list)):
219 for item in value:
220 yield DataField(name=name, value=item)
221 else:
222 yield DataField(name=name, value=value)
223
224 file_items = files.items() if isinstance(files, typing.Mapping) else files
225 for name, value in file_items:
226 yield FileField(name=name, value=value)
227
228 def iter_chunks(self) -> typing.Iterator[bytes]:
229 for field in self.fields:
230 yield b"--%s\r\n" % self.boundary
231 yield from field.render()
232 yield b"\r\n"
233 yield b"--%s--\r\n" % self.boundary
234
235 def iter_chunks_lengths(self) -> typing.Iterator[int]:
236 boundary_length = len(self.boundary)
237 # Follow closely what `.iter_chunks()` does.
238 for field in self.fields:
239 yield 2 + boundary_length + 2
240 yield field.get_length()
241 yield 2
242 yield 2 + boundary_length + 4
243
244 def get_content_length(self) -> int:
245 return sum(self.iter_chunks_lengths())
246
247 # Content stream interface.
248
249 def get_headers(self) -> typing.Dict[str, str]:
250 content_length = str(self.get_content_length())
251 content_type = self.content_type
252 return {"Content-Length": content_length, "Content-Type": content_type}
253
254 def __iter__(self) -> typing.Iterator[bytes]:
255 for chunk in self.iter_chunks():
256 yield chunk
257
258 async def __aiter__(self) -> typing.AsyncIterator[bytes]:
259 for chunk in self.iter_chunks():
260 yield chunk
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/_multipart.py b/httpx/_multipart.py
--- a/httpx/_multipart.py
+++ b/httpx/_multipart.py
@@ -135,19 +135,18 @@
self.file = fileobj
self.headers = headers
- def get_length(self) -> int:
+ def get_length(self) -> typing.Optional[int]:
headers = self.render_headers()
if isinstance(self.file, (str, bytes)):
return len(headers) + len(to_bytes(self.file))
- # Let's do our best not to read `file` into memory.
file_length = peek_filelike_length(self.file)
+
+ # If we can't determine the filesize without reading it into memory,
+ # then return `None` here, to indicate an unknown file length.
if file_length is None:
- # As a last resort, read file and cache contents for later.
- assert not hasattr(self, "_data")
- self._data = to_bytes(self.file.read())
- file_length = len(self._data)
+ return None
return len(headers) + file_length
@@ -173,13 +172,11 @@
yield to_bytes(self.file)
return
- if hasattr(self, "_data"):
- # Already rendered.
- yield self._data
- return
-
if hasattr(self.file, "seek"):
- self.file.seek(0)
+ try:
+ self.file.seek(0)
+ except io.UnsupportedOperation:
+ pass
chunk = self.file.read(self.CHUNK_SIZE)
while chunk:
@@ -232,24 +229,34 @@
yield b"\r\n"
yield b"--%s--\r\n" % self.boundary
- def iter_chunks_lengths(self) -> typing.Iterator[int]:
+ def get_content_length(self) -> typing.Optional[int]:
+ """
+ Return the length of the multipart encoded content, or `None` if
+ any of the files have a length that cannot be determined upfront.
+ """
boundary_length = len(self.boundary)
- # Follow closely what `.iter_chunks()` does.
+ length = 0
+
for field in self.fields:
- yield 2 + boundary_length + 2
- yield field.get_length()
- yield 2
- yield 2 + boundary_length + 4
+ field_length = field.get_length()
+ if field_length is None:
+ return None
+
+ length += 2 + boundary_length + 2 # b"--{boundary}\r\n"
+ length += field_length
+ length += 2 # b"\r\n"
- def get_content_length(self) -> int:
- return sum(self.iter_chunks_lengths())
+ length += 2 + boundary_length + 4 # b"--{boundary}--\r\n"
+ return length
# Content stream interface.
def get_headers(self) -> typing.Dict[str, str]:
- content_length = str(self.get_content_length())
+ content_length = self.get_content_length()
content_type = self.content_type
- return {"Content-Length": content_length, "Content-Type": content_type}
+ if content_length is None:
+ return {"Transfer-Encoding": "chunked", "Content-Type": content_type}
+ return {"Content-Length": str(content_length), "Content-Type": content_type}
def __iter__(self) -> typing.Iterator[bytes]:
for chunk in self.iter_chunks():
| {"golden_diff": "diff --git a/httpx/_multipart.py b/httpx/_multipart.py\n--- a/httpx/_multipart.py\n+++ b/httpx/_multipart.py\n@@ -135,19 +135,18 @@\n self.file = fileobj\n self.headers = headers\n \n- def get_length(self) -> int:\n+ def get_length(self) -> typing.Optional[int]:\n headers = self.render_headers()\n \n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n \n- # Let's do our best not to read `file` into memory.\n file_length = peek_filelike_length(self.file)\n+\n+ # If we can't determine the filesize without reading it into memory,\n+ # then return `None` here, to indicate an unknown file length.\n if file_length is None:\n- # As a last resort, read file and cache contents for later.\n- assert not hasattr(self, \"_data\")\n- self._data = to_bytes(self.file.read())\n- file_length = len(self._data)\n+ return None\n \n return len(headers) + file_length\n \n@@ -173,13 +172,11 @@\n yield to_bytes(self.file)\n return\n \n- if hasattr(self, \"_data\"):\n- # Already rendered.\n- yield self._data\n- return\n-\n if hasattr(self.file, \"seek\"):\n- self.file.seek(0)\n+ try:\n+ self.file.seek(0)\n+ except io.UnsupportedOperation:\n+ pass\n \n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n@@ -232,24 +229,34 @@\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n \n- def iter_chunks_lengths(self) -> typing.Iterator[int]:\n+ def get_content_length(self) -> typing.Optional[int]:\n+ \"\"\"\n+ Return the length of the multipart encoded content, or `None` if\n+ any of the files have a length that cannot be determined upfront.\n+ \"\"\"\n boundary_length = len(self.boundary)\n- # Follow closely what `.iter_chunks()` does.\n+ length = 0\n+\n for field in self.fields:\n- yield 2 + boundary_length + 2\n- yield field.get_length()\n- yield 2\n- yield 2 + boundary_length + 4\n+ field_length = field.get_length()\n+ if field_length is None:\n+ return None\n+\n+ length += 2 + boundary_length + 2 # b\"--{boundary}\\r\\n\"\n+ length += field_length\n+ length += 2 # b\"\\r\\n\"\n \n- def get_content_length(self) -> int:\n- return sum(self.iter_chunks_lengths())\n+ length += 2 + boundary_length + 4 # b\"--{boundary}--\\r\\n\"\n+ return length\n \n # Content stream interface.\n \n def get_headers(self) -> typing.Dict[str, str]:\n- content_length = str(self.get_content_length())\n+ content_length = self.get_content_length()\n content_type = self.content_type\n- return {\"Content-Length\": content_length, \"Content-Type\": content_type}\n+ if content_length is None:\n+ return {\"Transfer-Encoding\": \"chunked\", \"Content-Type\": content_type}\n+ return {\"Content-Length\": str(content_length), \"Content-Type\": content_type}\n \n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n", "issue": "Add support for streaming multipart/form-data\n(After initial discussion in https://github.com/encode/httpx/discussions/2227.)\r\n\r\nAs far as I can see, there's no way to stream `multipart/form-data` uploads when the `Content-Length` is unknown, since HTTPX tries to get body size some way or another (last resort is to read it all in memory and count). Please, let me know if I overlooked something!\r\n\r\nUse case would be to stream something whose size is unknown beforehand, like fetching and in parallel uploading a CSV export from a Postgres database, say.\r\n\r\nHTTP/1.1 has chunked encoding for that. As you know, that streams the _message_ itself, not the resource. 
This encoding is supported for \"direct file uploads\" in HTTPX, but not for `multipart/form-data`. This is strange, since chunked encoding is independent of `Content-Type`. Indeed, if both chunking and compression are in place, you first compress, and then you chunk the compressed bytes.\r\n\r\nIn HTTP/2 chunked encoding does not exist, byte size is per DATA frame. As you know, a message is allowed to have several. So, in this case the API would support uploading with multiple DATA frames, coming from a generator or iterator, for example.\r\n\n", "before_files": [{"content": "import binascii\nimport io\nimport os\nimport typing\nfrom pathlib import Path\n\nfrom ._types import (\n AsyncByteStream,\n FileContent,\n FileTypes,\n RequestData,\n RequestFiles,\n SyncByteStream,\n)\nfrom ._utils import (\n format_form_param,\n guess_content_type,\n peek_filelike_length,\n primitive_value_to_str,\n to_bytes,\n)\n\n\ndef get_multipart_boundary_from_content_type(\n content_type: typing.Optional[bytes],\n) -> typing.Optional[bytes]:\n if not content_type or not content_type.startswith(b\"multipart/form-data\"):\n return None\n # parse boundary according to\n # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1\n if b\";\" in content_type:\n for section in content_type.split(b\";\"):\n if section.strip().lower().startswith(b\"boundary=\"):\n return section.strip()[len(b\"boundary=\") :].strip(b'\"')\n return None\n\n\nclass DataField:\n \"\"\"\n A single form field item, within a multipart form field.\n \"\"\"\n\n def __init__(\n self, name: str, value: typing.Union[str, bytes, int, float, None]\n ) -> None:\n if not isinstance(name, str):\n raise TypeError(\n f\"Invalid type for name. Expected str, got {type(name)}: {name!r}\"\n )\n if value is not None and not isinstance(value, (str, bytes, int, float)):\n raise TypeError(\n f\"Invalid type for value. 
Expected primitive type, got {type(value)}: {value!r}\"\n )\n self.name = name\n self.value: typing.Union[str, bytes] = (\n value if isinstance(value, bytes) else primitive_value_to_str(value)\n )\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n name = format_form_param(\"name\", self.name)\n self._headers = b\"\".join(\n [b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"]\n )\n\n return self._headers\n\n def render_data(self) -> bytes:\n if not hasattr(self, \"_data\"):\n self._data = to_bytes(self.value)\n\n return self._data\n\n def get_length(self) -> int:\n headers = self.render_headers()\n data = self.render_data()\n return len(headers) + len(data)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield self.render_data()\n\n\nclass FileField:\n \"\"\"\n A single file field item, within a multipart form field.\n \"\"\"\n\n CHUNK_SIZE = 64 * 1024\n\n def __init__(self, name: str, value: FileTypes) -> None:\n self.name = name\n\n fileobj: FileContent\n\n headers: typing.Dict[str, str] = {}\n content_type: typing.Optional[str] = None\n\n # This large tuple based API largely mirror's requests' API\n # It would be good to think of better APIs for this that we could include in httpx 2.0\n # since variable length tuples (especially of 4 elements) are quite unwieldly\n if isinstance(value, tuple):\n if len(value) == 2:\n # neither the 3rd parameter (content_type) nor the 4th (headers) was included\n filename, fileobj = value # type: ignore\n elif len(value) == 3:\n filename, fileobj, content_type = value # type: ignore\n else:\n # all 4 parameters included\n filename, fileobj, content_type, headers = value # type: ignore\n else:\n filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n fileobj = value\n\n if content_type is None:\n content_type = guess_content_type(filename)\n\n has_content_type_header = any(\"content-type\" in key.lower() for key in headers)\n if content_type is not None and not has_content_type_header:\n # note that unlike requests, we ignore the content_type\n # provided in the 3rd tuple element if it is also included in the headers\n # requests does the opposite (it overwrites the header with the 3rd tuple element)\n headers[\"Content-Type\"] = content_type\n\n if \"b\" not in getattr(fileobj, \"mode\", \"b\"):\n raise TypeError(\n \"Multipart file uploads must be opened in binary mode, not text mode.\"\n )\n if isinstance(fileobj, io.StringIO):\n raise TypeError(\n \"Multipart file uploads require 'io.BytesIO', not 'io.StringIO'.\"\n )\n\n self.filename = filename\n self.file = fileobj\n self.headers = headers\n\n def get_length(self) -> int:\n headers = self.render_headers()\n\n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n\n # Let's do our best not to read `file` into memory.\n file_length = peek_filelike_length(self.file)\n if file_length is None:\n # As a last resort, read file and cache contents for later.\n assert not hasattr(self, \"_data\")\n self._data = to_bytes(self.file.read())\n file_length = len(self._data)\n\n return len(headers) + file_length\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n parts = [\n b\"Content-Disposition: form-data; \",\n format_form_param(\"name\", self.name),\n ]\n if self.filename:\n filename = format_form_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n for header_name, header_value in self.headers.items():\n key, val = f\"\\r\\n{header_name}: 
\".encode(), header_value.encode()\n parts.extend([key, val])\n parts.append(b\"\\r\\n\\r\\n\")\n self._headers = b\"\".join(parts)\n\n return self._headers\n\n def render_data(self) -> typing.Iterator[bytes]:\n if isinstance(self.file, (str, bytes)):\n yield to_bytes(self.file)\n return\n\n if hasattr(self, \"_data\"):\n # Already rendered.\n yield self._data\n return\n\n if hasattr(self.file, \"seek\"):\n self.file.seek(0)\n\n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n yield to_bytes(chunk)\n chunk = self.file.read(self.CHUNK_SIZE)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield from self.render_data()\n\n\nclass MultipartStream(SyncByteStream, AsyncByteStream):\n \"\"\"\n Request content as streaming multipart encoded form data.\n \"\"\"\n\n def __init__(\n self,\n data: RequestData,\n files: RequestFiles,\n boundary: typing.Optional[bytes] = None,\n ) -> None:\n if boundary is None:\n boundary = binascii.hexlify(os.urandom(16))\n\n self.boundary = boundary\n self.content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\n \"ascii\"\n )\n self.fields = list(self._iter_fields(data, files))\n\n def _iter_fields(\n self, data: RequestData, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n if isinstance(value, (tuple, list)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n file_items = files.items() if isinstance(files, typing.Mapping) else files\n for name, value in file_items:\n yield FileField(name=name, value=value)\n\n def iter_chunks(self) -> typing.Iterator[bytes]:\n for field in self.fields:\n yield b\"--%s\\r\\n\" % self.boundary\n yield from field.render()\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n\n def iter_chunks_lengths(self) -> typing.Iterator[int]:\n boundary_length = len(self.boundary)\n # Follow closely what `.iter_chunks()` does.\n for field in self.fields:\n yield 2 + boundary_length + 2\n yield field.get_length()\n yield 2\n yield 2 + boundary_length + 4\n\n def get_content_length(self) -> int:\n return sum(self.iter_chunks_lengths())\n\n # Content stream interface.\n\n def get_headers(self) -> typing.Dict[str, str]:\n content_length = str(self.get_content_length())\n content_type = self.content_type\n return {\"Content-Length\": content_length, \"Content-Type\": content_type}\n\n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n", "path": "httpx/_multipart.py"}], "after_files": [{"content": "import binascii\nimport io\nimport os\nimport typing\nfrom pathlib import Path\n\nfrom ._types import (\n AsyncByteStream,\n FileContent,\n FileTypes,\n RequestData,\n RequestFiles,\n SyncByteStream,\n)\nfrom ._utils import (\n format_form_param,\n guess_content_type,\n peek_filelike_length,\n primitive_value_to_str,\n to_bytes,\n)\n\n\ndef get_multipart_boundary_from_content_type(\n content_type: typing.Optional[bytes],\n) -> typing.Optional[bytes]:\n if not content_type or not content_type.startswith(b\"multipart/form-data\"):\n return None\n # parse boundary according to\n # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1\n if b\";\" in content_type:\n for section in content_type.split(b\";\"):\n if section.strip().lower().startswith(b\"boundary=\"):\n return section.strip()[len(b\"boundary=\") 
:].strip(b'\"')\n return None\n\n\nclass DataField:\n \"\"\"\n A single form field item, within a multipart form field.\n \"\"\"\n\n def __init__(\n self, name: str, value: typing.Union[str, bytes, int, float, None]\n ) -> None:\n if not isinstance(name, str):\n raise TypeError(\n f\"Invalid type for name. Expected str, got {type(name)}: {name!r}\"\n )\n if value is not None and not isinstance(value, (str, bytes, int, float)):\n raise TypeError(\n f\"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}\"\n )\n self.name = name\n self.value: typing.Union[str, bytes] = (\n value if isinstance(value, bytes) else primitive_value_to_str(value)\n )\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n name = format_form_param(\"name\", self.name)\n self._headers = b\"\".join(\n [b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"]\n )\n\n return self._headers\n\n def render_data(self) -> bytes:\n if not hasattr(self, \"_data\"):\n self._data = to_bytes(self.value)\n\n return self._data\n\n def get_length(self) -> int:\n headers = self.render_headers()\n data = self.render_data()\n return len(headers) + len(data)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield self.render_data()\n\n\nclass FileField:\n \"\"\"\n A single file field item, within a multipart form field.\n \"\"\"\n\n CHUNK_SIZE = 64 * 1024\n\n def __init__(self, name: str, value: FileTypes) -> None:\n self.name = name\n\n fileobj: FileContent\n\n headers: typing.Dict[str, str] = {}\n content_type: typing.Optional[str] = None\n\n # This large tuple based API largely mirror's requests' API\n # It would be good to think of better APIs for this that we could include in httpx 2.0\n # since variable length tuples (especially of 4 elements) are quite unwieldly\n if isinstance(value, tuple):\n if len(value) == 2:\n # neither the 3rd parameter (content_type) nor the 4th (headers) was included\n filename, fileobj = value # type: ignore\n elif len(value) == 3:\n filename, fileobj, content_type = value # type: ignore\n else:\n # all 4 parameters included\n filename, fileobj, content_type, headers = value # type: ignore\n else:\n filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n fileobj = value\n\n if content_type is None:\n content_type = guess_content_type(filename)\n\n has_content_type_header = any(\"content-type\" in key.lower() for key in headers)\n if content_type is not None and not has_content_type_header:\n # note that unlike requests, we ignore the content_type\n # provided in the 3rd tuple element if it is also included in the headers\n # requests does the opposite (it overwrites the header with the 3rd tuple element)\n headers[\"Content-Type\"] = content_type\n\n if \"b\" not in getattr(fileobj, \"mode\", \"b\"):\n raise TypeError(\n \"Multipart file uploads must be opened in binary mode, not text mode.\"\n )\n if isinstance(fileobj, io.StringIO):\n raise TypeError(\n \"Multipart file uploads require 'io.BytesIO', not 'io.StringIO'.\"\n )\n\n self.filename = filename\n self.file = fileobj\n self.headers = headers\n\n def get_length(self) -> typing.Optional[int]:\n headers = self.render_headers()\n\n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n\n file_length = peek_filelike_length(self.file)\n\n # If we can't determine the filesize without reading it into memory,\n # then return `None` here, to indicate an unknown file length.\n if file_length is None:\n return None\n\n return 
len(headers) + file_length\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n parts = [\n b\"Content-Disposition: form-data; \",\n format_form_param(\"name\", self.name),\n ]\n if self.filename:\n filename = format_form_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n for header_name, header_value in self.headers.items():\n key, val = f\"\\r\\n{header_name}: \".encode(), header_value.encode()\n parts.extend([key, val])\n parts.append(b\"\\r\\n\\r\\n\")\n self._headers = b\"\".join(parts)\n\n return self._headers\n\n def render_data(self) -> typing.Iterator[bytes]:\n if isinstance(self.file, (str, bytes)):\n yield to_bytes(self.file)\n return\n\n if hasattr(self.file, \"seek\"):\n try:\n self.file.seek(0)\n except io.UnsupportedOperation:\n pass\n\n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n yield to_bytes(chunk)\n chunk = self.file.read(self.CHUNK_SIZE)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield from self.render_data()\n\n\nclass MultipartStream(SyncByteStream, AsyncByteStream):\n \"\"\"\n Request content as streaming multipart encoded form data.\n \"\"\"\n\n def __init__(\n self,\n data: RequestData,\n files: RequestFiles,\n boundary: typing.Optional[bytes] = None,\n ) -> None:\n if boundary is None:\n boundary = binascii.hexlify(os.urandom(16))\n\n self.boundary = boundary\n self.content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\n \"ascii\"\n )\n self.fields = list(self._iter_fields(data, files))\n\n def _iter_fields(\n self, data: RequestData, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n if isinstance(value, (tuple, list)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n file_items = files.items() if isinstance(files, typing.Mapping) else files\n for name, value in file_items:\n yield FileField(name=name, value=value)\n\n def iter_chunks(self) -> typing.Iterator[bytes]:\n for field in self.fields:\n yield b\"--%s\\r\\n\" % self.boundary\n yield from field.render()\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n\n def get_content_length(self) -> typing.Optional[int]:\n \"\"\"\n Return the length of the multipart encoded content, or `None` if\n any of the files have a length that cannot be determined upfront.\n \"\"\"\n boundary_length = len(self.boundary)\n length = 0\n\n for field in self.fields:\n field_length = field.get_length()\n if field_length is None:\n return None\n\n length += 2 + boundary_length + 2 # b\"--{boundary}\\r\\n\"\n length += field_length\n length += 2 # b\"\\r\\n\"\n\n length += 2 + boundary_length + 4 # b\"--{boundary}--\\r\\n\"\n return length\n\n # Content stream interface.\n\n def get_headers(self) -> typing.Dict[str, str]:\n content_length = self.get_content_length()\n content_type = self.content_type\n if content_length is None:\n return {\"Transfer-Encoding\": \"chunked\", \"Content-Type\": content_type}\n return {\"Content-Length\": str(content_length), \"Content-Type\": content_type}\n\n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n", "path": "httpx/_multipart.py"}]} | 3,171 | 777 |
gh_patches_debug_11715 | rasdani/github-patches | git_diff | safe-global__safe-config-service-361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gas price editable from the Chain model
Please make the Gas Price editable directly via Chain configs.
Like we did for Features and Wallets.
Thanks!
--- END ISSUE ---
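For orientation, a minimal sketch of what this request amounts to in Django admin terms — the names mirror those in `src/chains/admin.py` shown below, and wiring the inline into `ChainAdmin` is only indicated in a comment rather than shown as a full registration:

```python
# Sketch only: expose GasPrice rows inline on the Chain admin page,
# the same way Feature and Wallet are already exposed.
from django.contrib import admin
from django.db.models import Model

from .models import GasPrice


class GasPriceInline(admin.TabularInline[Model]):
    # GasPrice appears to reference Chain via a foreign key (its admin lists
    # chain_id), so the model itself is used here -- no .through table,
    # unlike the many-to-many Feature and Wallet inlines.
    model = GasPrice
    extra = 0
    verbose_name_plural = "Gas prices set for this chain"

# ChainAdmin would then list it alongside the existing inlines:
#     inlines = [FeatureInline, GasPriceInline, WalletInline]
```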
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/chains/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.db.models import Model
3
4 from .models import Chain, Feature, GasPrice, Wallet
5
6
7 class FeatureInline(admin.TabularInline[Model]):
8 model = Feature.chains.through
9 extra = 0
10 verbose_name_plural = "Features enabled for this chain"
11
12
13 class WalletInline(admin.TabularInline[Model]):
14 model = Wallet.chains.through
15 extra = 0
16 verbose_name_plural = "Wallets enabled for this chain"
17
18
19 @admin.register(Chain)
20 class ChainAdmin(admin.ModelAdmin[Chain]):
21 list_display = (
22 "id",
23 "name",
24 "rpc_uri",
25 "safe_apps_rpc_uri",
26 "relevance",
27 )
28 search_fields = ("name", "id")
29 ordering = (
30 "relevance",
31 "name",
32 )
33 inlines = [FeatureInline, WalletInline]
34
35
36 @admin.register(GasPrice)
37 class GasPriceAdmin(admin.ModelAdmin[GasPrice]):
38 list_display = (
39 "chain_id",
40 "oracle_uri",
41 "fixed_wei_value",
42 "rank",
43 )
44 search_fields = ("chain_id", "oracle_uri")
45 ordering = ("rank",)
46
47
48 @admin.register(Wallet)
49 class WalletAdmin(admin.ModelAdmin[Wallet]):
50 list_display = ("key",)
51
52
53 @admin.register(Feature)
54 class FeatureAdmin(admin.ModelAdmin[Feature]):
55 list_display = ("key",)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/chains/admin.py b/src/chains/admin.py
--- a/src/chains/admin.py
+++ b/src/chains/admin.py
@@ -4,6 +4,12 @@
from .models import Chain, Feature, GasPrice, Wallet
+class GasPriceInline(admin.TabularInline[Model]):
+ model = GasPrice
+ extra = 0
+ verbose_name_plural = "Gas prices set for this chain"
+
+
class FeatureInline(admin.TabularInline[Model]):
model = Feature.chains.through
extra = 0
@@ -30,7 +36,7 @@
"relevance",
"name",
)
- inlines = [FeatureInline, WalletInline]
+ inlines = [FeatureInline, GasPriceInline, WalletInline]
@admin.register(GasPrice)
| {"golden_diff": "diff --git a/src/chains/admin.py b/src/chains/admin.py\n--- a/src/chains/admin.py\n+++ b/src/chains/admin.py\n@@ -4,6 +4,12 @@\n from .models import Chain, Feature, GasPrice, Wallet\n \n \n+class GasPriceInline(admin.TabularInline[Model]):\n+ model = GasPrice\n+ extra = 0\n+ verbose_name_plural = \"Gas prices set for this chain\"\n+\n+\n class FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n@@ -30,7 +36,7 @@\n \"relevance\",\n \"name\",\n )\n- inlines = [FeatureInline, WalletInline]\n+ inlines = [FeatureInline, GasPriceInline, WalletInline]\n \n \n @admin.register(GasPrice)\n", "issue": "Gas price editable from the Chain model\nPlease make the Gas Price editable directly via Chain configs.\r\nLike we did for Features and Wallets.\r\nThanks!\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.db.models import Model\n\nfrom .models import Chain, Feature, GasPrice, Wallet\n\n\nclass FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n verbose_name_plural = \"Features enabled for this chain\"\n\n\nclass WalletInline(admin.TabularInline[Model]):\n model = Wallet.chains.through\n extra = 0\n verbose_name_plural = \"Wallets enabled for this chain\"\n\n\[email protected](Chain)\nclass ChainAdmin(admin.ModelAdmin[Chain]):\n list_display = (\n \"id\",\n \"name\",\n \"rpc_uri\",\n \"safe_apps_rpc_uri\",\n \"relevance\",\n )\n search_fields = (\"name\", \"id\")\n ordering = (\n \"relevance\",\n \"name\",\n )\n inlines = [FeatureInline, WalletInline]\n\n\[email protected](GasPrice)\nclass GasPriceAdmin(admin.ModelAdmin[GasPrice]):\n list_display = (\n \"chain_id\",\n \"oracle_uri\",\n \"fixed_wei_value\",\n \"rank\",\n )\n search_fields = (\"chain_id\", \"oracle_uri\")\n ordering = (\"rank\",)\n\n\[email protected](Wallet)\nclass WalletAdmin(admin.ModelAdmin[Wallet]):\n list_display = (\"key\",)\n\n\[email protected](Feature)\nclass FeatureAdmin(admin.ModelAdmin[Feature]):\n list_display = (\"key\",)\n", "path": "src/chains/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.db.models import Model\n\nfrom .models import Chain, Feature, GasPrice, Wallet\n\n\nclass GasPriceInline(admin.TabularInline[Model]):\n model = GasPrice\n extra = 0\n verbose_name_plural = \"Gas prices set for this chain\"\n\n\nclass FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n verbose_name_plural = \"Features enabled for this chain\"\n\n\nclass WalletInline(admin.TabularInline[Model]):\n model = Wallet.chains.through\n extra = 0\n verbose_name_plural = \"Wallets enabled for this chain\"\n\n\[email protected](Chain)\nclass ChainAdmin(admin.ModelAdmin[Chain]):\n list_display = (\n \"id\",\n \"name\",\n \"rpc_uri\",\n \"safe_apps_rpc_uri\",\n \"relevance\",\n )\n search_fields = (\"name\", \"id\")\n ordering = (\n \"relevance\",\n \"name\",\n )\n inlines = [FeatureInline, GasPriceInline, WalletInline]\n\n\[email protected](GasPrice)\nclass GasPriceAdmin(admin.ModelAdmin[GasPrice]):\n list_display = (\n \"chain_id\",\n \"oracle_uri\",\n \"fixed_wei_value\",\n \"rank\",\n )\n search_fields = (\"chain_id\", \"oracle_uri\")\n ordering = (\"rank\",)\n\n\[email protected](Wallet)\nclass WalletAdmin(admin.ModelAdmin[Wallet]):\n list_display = (\"key\",)\n\n\[email protected](Feature)\nclass FeatureAdmin(admin.ModelAdmin[Feature]):\n list_display = (\"key\",)\n", "path": "src/chains/admin.py"}]} | 696 | 183 |
gh_patches_debug_38671 | rasdani/github-patches | git_diff | scikit-hep__pyhf-435 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
json2xml needs to add support for the HiFa DTD
# Description
json2xml does not include the DTD statement in the XML outputs that it generates.
# Expected Behavior
Should include DTD statement.
# Actual Behavior
Does not include DTD statement.
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
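A minimal sketch of the expected behavior, assuming the ElementTree-based writer that appears in `pyhf/writexml.py` below: `ET.tostring` has no DOCTYPE support, so the DTD reference has to be prepended to the serialized output by hand.

```python
# Sketch only: add a HistFactory DTD declaration in front of the serialized
# <Combination> element, since ElementTree cannot emit a DOCTYPE by itself.
import xml.etree.ElementTree as ET

combination = ET.Element("Combination", OutputFilePrefix="./results")
doctype = "<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\n\n"
xml_text = doctype + ET.tostring(combination, encoding="unicode")
print(xml_text)
```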
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyhf/writexml.py`
Content:
```
1 import logging
2
3 import os
4 import xml.etree.cElementTree as ET
5 import numpy as np
6 import uproot
7 from uproot_methods.classes import TH1
8
9 _ROOT_DATA_FILE = None
10
11 log = logging.getLogger(__name__)
12
13
14 def _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''):
15 return "{prefix}{middle}{suffix}".format(
16 prefix=prefix,
17 suffix=suffix,
18 middle='_'.join(filter(lambda x: x, [channel, sample, modifier])),
19 )
20
21
22 def _export_root_histogram(histname, data):
23 h = TH1.from_numpy((np.asarray(data), np.arange(len(data) + 1)))
24 h._fName = histname
25 # NB: uproot crashes for some reason, figure out why later
26 # if histname in _ROOT_DATA_FILE:
27 # raise KeyError('Duplicate key {0} being written.'.format(histname))
28 _ROOT_DATA_FILE[histname] = h
29
30
31 # https://stackoverflow.com/a/4590052
32 def indent(elem, level=0):
33 i = "\n" + level * " "
34 if elem:
35 if not elem.text or not elem.text.strip():
36 elem.text = i + " "
37 if not elem.tail or not elem.tail.strip():
38 elem.tail = i
39 for elem in elem:
40 indent(elem, level + 1)
41 if not elem.tail or not elem.tail.strip():
42 elem.tail = i
43 else:
44 if level and (not elem.tail or not elem.tail.strip()):
45 elem.tail = i
46
47
48 def build_measurement(measurementspec):
49 config = measurementspec['config']
50 name = measurementspec['name']
51 poi = config['poi']
52
53 # we want to know which parameters are fixed (constant)
54 # and to additionally extract the luminosity information
55 fixed_params = []
56 lumi = 1.0
57 lumierr = 0.0
58 for parameter in config['parameters']:
59 if parameter.get('fixed', False):
60 pname = parameter['name']
61 if pname == 'lumi':
62 fixed_params.append('Lumi')
63 else:
64 fixed_params.append(pname)
65 # we found luminosity, so handle it
66 if parameter['name'] == 'lumi':
67 lumi = parameter['auxdata'][0]
68 lumierr = parameter['sigmas'][0]
69
70 # define measurement
71 meas = ET.Element("Measurement", Name=name, Lumi=str(lumi), LumiRelErr=str(lumierr))
72 poiel = ET.Element('POI')
73 poiel.text = poi
74 meas.append(poiel)
75
76 # add fixed parameters (constant)
77 if fixed_params:
78 se = ET.Element('ParamSetting', Const='True')
79 se.text = ' '.join(fixed_params)
80 meas.append(se)
81 return meas
82
83
84 def build_modifier(modifierspec, channelname, samplename, sampledata):
85 if modifierspec['name'] == 'lumi':
86 return None
87 mod_map = {
88 'histosys': 'HistoSys',
89 'staterror': 'StatError',
90 'normsys': 'OverallSys',
91 'shapesys': 'ShapeSys',
92 'normfactor': 'NormFactor',
93 'shapefactor': 'ShapeFactor',
94 }
95
96 attrs = {'Name': modifierspec['name']}
97 if modifierspec['type'] == 'histosys':
98 attrs['HistoNameLow'] = _make_hist_name(
99 channelname, samplename, modifierspec['name'], suffix='Low'
100 )
101 attrs['HistoNameHigh'] = _make_hist_name(
102 channelname, samplename, modifierspec['name'], suffix='High'
103 )
104 _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data'])
105 _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data'])
106 elif modifierspec['type'] == 'normsys':
107 attrs['High'] = str(modifierspec['data']['hi'])
108 attrs['Low'] = str(modifierspec['data']['lo'])
109 elif modifierspec['type'] == 'normfactor':
110 attrs['Val'] = '1'
111 attrs['High'] = '10'
112 attrs['Low'] = '0'
113 elif modifierspec['type'] == 'staterror':
114 attrs['Activate'] = 'True'
115 attrs['HistoName'] = _make_hist_name(
116 channelname, samplename, modifierspec['name']
117 )
118 # need to make this a relative uncertainty stored in ROOT file
119 _export_root_histogram(
120 attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()
121 )
122 elif modifierspec['type'] == 'shapesys':
123 attrs['ConstraintType'] = 'Poisson'
124 attrs['HistoName'] = _make_hist_name(
125 channelname, samplename, modifierspec['name']
126 )
127 # need to make this a relative uncertainty stored in ROOT file
128 _export_root_histogram(
129 attrs['HistoName'],
130 [np.divide(a, b) for a, b in zip(modifierspec['data'], sampledata)],
131 )
132 else:
133 log.warning(
134 'Skipping {0}({1}) for now'.format(
135 modifierspec['name'], modifierspec['type']
136 )
137 )
138
139 modifier = ET.Element(mod_map[modifierspec['type']], **attrs)
140 return modifier
141
142
143 def build_sample(samplespec, channelname):
144 histname = _make_hist_name(channelname, samplespec['name'])
145 attrs = {
146 'Name': samplespec['name'],
147 'HistoName': histname,
148 'InputFile': _ROOT_DATA_FILE._path,
149 'NormalizeByTheory': 'False',
150 }
151 sample = ET.Element('Sample', **attrs)
152 for modspec in samplespec['modifiers']:
153 # if lumi modifier added for this sample, need to set NormalizeByTheory
154 if modspec['type'] == 'lumi':
155 sample.attrib.update({'NormalizeByTheory': 'True'})
156 modifier = build_modifier(
157 modspec, channelname, samplespec['name'], samplespec['data']
158 )
159 if modifier is not None:
160 sample.append(modifier)
161 _export_root_histogram(histname, samplespec['data'])
162 return sample
163
164
165 def build_data(dataspec, channelname):
166 histname = _make_hist_name(channelname, 'data')
167 data = ET.Element('Data', HistoName=histname, InputFile=_ROOT_DATA_FILE._path)
168 _export_root_histogram(histname, dataspec[channelname])
169 return data
170
171
172 def build_channel(channelspec, dataspec):
173 channel = ET.Element(
174 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE._path
175 )
176 if dataspec:
177 data = build_data(dataspec, channelspec['name'])
178 channel.append(data)
179 for samplespec in channelspec['samples']:
180 channel.append(build_sample(samplespec, channelspec['name']))
181 return channel
182
183
184 def writexml(spec, specdir, data_rootdir, resultprefix):
185 global _ROOT_DATA_FILE
186
187 combination = ET.Element(
188 "Combination", OutputFilePrefix=os.path.join('.', specdir, resultprefix)
189 )
190
191 with uproot.recreate(os.path.join(data_rootdir, 'data.root')) as _ROOT_DATA_FILE:
192 for channelspec in spec['channels']:
193 channelfilename = os.path.join(
194 specdir, '{0:s}_{1:s}.xml'.format(resultprefix, channelspec['name'])
195 )
196 with open(channelfilename, 'w') as channelfile:
197 channel = build_channel(channelspec, spec.get('data'))
198 indent(channel)
199 channelfile.write(
200 ET.tostring(channel, encoding='utf-8').decode('utf-8')
201 )
202
203 inp = ET.Element("Input")
204 inp.text = channelfilename
205 combination.append(inp)
206
207 for measurement in spec['toplvl']['measurements']:
208 combination.append(build_measurement(measurement))
209 indent(combination)
210 return ET.tostring(combination, encoding='utf-8')
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyhf/writexml.py b/pyhf/writexml.py
--- a/pyhf/writexml.py
+++ b/pyhf/writexml.py
@@ -1,6 +1,8 @@
import logging
import os
+import shutil
+import pkg_resources
import xml.etree.cElementTree as ET
import numpy as np
import uproot
@@ -68,7 +70,13 @@
lumierr = parameter['sigmas'][0]
# define measurement
- meas = ET.Element("Measurement", Name=name, Lumi=str(lumi), LumiRelErr=str(lumierr))
+ meas = ET.Element(
+ "Measurement",
+ Name=name,
+ Lumi=str(lumi),
+ LumiRelErr=str(lumierr),
+ ExportOnly=str(True),
+ )
poiel = ET.Element('POI')
poiel.text = poi
meas.append(poiel)
@@ -115,6 +123,7 @@
attrs['HistoName'] = _make_hist_name(
channelname, samplename, modifierspec['name']
)
+ del attrs['Name']
# need to make this a relative uncertainty stored in ROOT file
_export_root_histogram(
attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()
@@ -184,6 +193,10 @@
def writexml(spec, specdir, data_rootdir, resultprefix):
global _ROOT_DATA_FILE
+ shutil.copyfile(
+ pkg_resources.resource_filename(__name__, 'data/HistFactorySchema.dtd'),
+ os.path.join(os.path.dirname(specdir), 'HistFactorySchema.dtd'),
+ )
combination = ET.Element(
"Combination", OutputFilePrefix=os.path.join('.', specdir, resultprefix)
)
@@ -196,6 +209,9 @@
with open(channelfilename, 'w') as channelfile:
channel = build_channel(channelspec, spec.get('data'))
indent(channel)
+ channelfile.write(
+ "<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\n\n"
+ )
channelfile.write(
ET.tostring(channel, encoding='utf-8').decode('utf-8')
)
@@ -207,4 +223,6 @@
for measurement in spec['toplvl']['measurements']:
combination.append(build_measurement(measurement))
indent(combination)
- return ET.tostring(combination, encoding='utf-8')
+ return "<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\n\n".encode(
+ "utf-8"
+ ) + ET.tostring(combination, encoding='utf-8')
| {"golden_diff": "diff --git a/pyhf/writexml.py b/pyhf/writexml.py\n--- a/pyhf/writexml.py\n+++ b/pyhf/writexml.py\n@@ -1,6 +1,8 @@\n import logging\n \n import os\n+import shutil\n+import pkg_resources\n import xml.etree.cElementTree as ET\n import numpy as np\n import uproot\n@@ -68,7 +70,13 @@\n lumierr = parameter['sigmas'][0]\n \n # define measurement\n- meas = ET.Element(\"Measurement\", Name=name, Lumi=str(lumi), LumiRelErr=str(lumierr))\n+ meas = ET.Element(\n+ \"Measurement\",\n+ Name=name,\n+ Lumi=str(lumi),\n+ LumiRelErr=str(lumierr),\n+ ExportOnly=str(True),\n+ )\n poiel = ET.Element('POI')\n poiel.text = poi\n meas.append(poiel)\n@@ -115,6 +123,7 @@\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n+ del attrs['Name']\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()\n@@ -184,6 +193,10 @@\n def writexml(spec, specdir, data_rootdir, resultprefix):\n global _ROOT_DATA_FILE\n \n+ shutil.copyfile(\n+ pkg_resources.resource_filename(__name__, 'data/HistFactorySchema.dtd'),\n+ os.path.join(os.path.dirname(specdir), 'HistFactorySchema.dtd'),\n+ )\n combination = ET.Element(\n \"Combination\", OutputFilePrefix=os.path.join('.', specdir, resultprefix)\n )\n@@ -196,6 +209,9 @@\n with open(channelfilename, 'w') as channelfile:\n channel = build_channel(channelspec, spec.get('data'))\n indent(channel)\n+ channelfile.write(\n+ \"<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\\n\\n\"\n+ )\n channelfile.write(\n ET.tostring(channel, encoding='utf-8').decode('utf-8')\n )\n@@ -207,4 +223,6 @@\n for measurement in spec['toplvl']['measurements']:\n combination.append(build_measurement(measurement))\n indent(combination)\n- return ET.tostring(combination, encoding='utf-8')\n+ return \"<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\\n\\n\".encode(\n+ \"utf-8\"\n+ ) + ET.tostring(combination, encoding='utf-8')\n", "issue": "json2xml needs to add support for the HiFa DTD\n# Description\r\n\r\njson2xml does not include the DTD statement in the XML outputs that it generates.\r\n\r\n# Expected Behavior\r\n\r\nShould include DTD statement.\r\n\r\n# Actual Behavior\r\n\r\nDoes not include DTD statement.\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "import logging\n\nimport os\nimport xml.etree.cElementTree as ET\nimport numpy as np\nimport uproot\nfrom uproot_methods.classes import TH1\n\n_ROOT_DATA_FILE = None\n\nlog = logging.getLogger(__name__)\n\n\ndef _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''):\n return \"{prefix}{middle}{suffix}\".format(\n prefix=prefix,\n suffix=suffix,\n middle='_'.join(filter(lambda x: x, [channel, sample, modifier])),\n )\n\n\ndef _export_root_histogram(histname, data):\n h = TH1.from_numpy((np.asarray(data), np.arange(len(data) + 1)))\n h._fName = histname\n # NB: uproot crashes for some reason, figure out why later\n # if histname in _ROOT_DATA_FILE:\n # raise KeyError('Duplicate key {0} being written.'.format(histname))\n _ROOT_DATA_FILE[histname] = h\n\n\n# https://stackoverflow.com/a/4590052\ndef indent(elem, level=0):\n i = 
\"\\n\" + level * \" \"\n if elem:\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n for elem in elem:\n indent(elem, level + 1)\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n else:\n if level and (not elem.tail or not elem.tail.strip()):\n elem.tail = i\n\n\ndef build_measurement(measurementspec):\n config = measurementspec['config']\n name = measurementspec['name']\n poi = config['poi']\n\n # we want to know which parameters are fixed (constant)\n # and to additionally extract the luminosity information\n fixed_params = []\n lumi = 1.0\n lumierr = 0.0\n for parameter in config['parameters']:\n if parameter.get('fixed', False):\n pname = parameter['name']\n if pname == 'lumi':\n fixed_params.append('Lumi')\n else:\n fixed_params.append(pname)\n # we found luminosity, so handle it\n if parameter['name'] == 'lumi':\n lumi = parameter['auxdata'][0]\n lumierr = parameter['sigmas'][0]\n\n # define measurement\n meas = ET.Element(\"Measurement\", Name=name, Lumi=str(lumi), LumiRelErr=str(lumierr))\n poiel = ET.Element('POI')\n poiel.text = poi\n meas.append(poiel)\n\n # add fixed parameters (constant)\n if fixed_params:\n se = ET.Element('ParamSetting', Const='True')\n se.text = ' '.join(fixed_params)\n meas.append(se)\n return meas\n\n\ndef build_modifier(modifierspec, channelname, samplename, sampledata):\n if modifierspec['name'] == 'lumi':\n return None\n mod_map = {\n 'histosys': 'HistoSys',\n 'staterror': 'StatError',\n 'normsys': 'OverallSys',\n 'shapesys': 'ShapeSys',\n 'normfactor': 'NormFactor',\n 'shapefactor': 'ShapeFactor',\n }\n\n attrs = {'Name': modifierspec['name']}\n if modifierspec['type'] == 'histosys':\n attrs['HistoNameLow'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='Low'\n )\n attrs['HistoNameHigh'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='High'\n )\n _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data'])\n _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data'])\n elif modifierspec['type'] == 'normsys':\n attrs['High'] = str(modifierspec['data']['hi'])\n attrs['Low'] = str(modifierspec['data']['lo'])\n elif modifierspec['type'] == 'normfactor':\n attrs['Val'] = '1'\n attrs['High'] = '10'\n attrs['Low'] = '0'\n elif modifierspec['type'] == 'staterror':\n attrs['Activate'] = 'True'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()\n )\n elif modifierspec['type'] == 'shapesys':\n attrs['ConstraintType'] = 'Poisson'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n [np.divide(a, b) for a, b in zip(modifierspec['data'], sampledata)],\n )\n else:\n log.warning(\n 'Skipping {0}({1}) for now'.format(\n modifierspec['name'], modifierspec['type']\n )\n )\n\n modifier = ET.Element(mod_map[modifierspec['type']], **attrs)\n return modifier\n\n\ndef build_sample(samplespec, channelname):\n histname = _make_hist_name(channelname, samplespec['name'])\n attrs = {\n 'Name': samplespec['name'],\n 'HistoName': histname,\n 'InputFile': _ROOT_DATA_FILE._path,\n 'NormalizeByTheory': 'False',\n }\n sample = ET.Element('Sample', 
**attrs)\n for modspec in samplespec['modifiers']:\n # if lumi modifier added for this sample, need to set NormalizeByTheory\n if modspec['type'] == 'lumi':\n sample.attrib.update({'NormalizeByTheory': 'True'})\n modifier = build_modifier(\n modspec, channelname, samplespec['name'], samplespec['data']\n )\n if modifier is not None:\n sample.append(modifier)\n _export_root_histogram(histname, samplespec['data'])\n return sample\n\n\ndef build_data(dataspec, channelname):\n histname = _make_hist_name(channelname, 'data')\n data = ET.Element('Data', HistoName=histname, InputFile=_ROOT_DATA_FILE._path)\n _export_root_histogram(histname, dataspec[channelname])\n return data\n\n\ndef build_channel(channelspec, dataspec):\n channel = ET.Element(\n 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE._path\n )\n if dataspec:\n data = build_data(dataspec, channelspec['name'])\n channel.append(data)\n for samplespec in channelspec['samples']:\n channel.append(build_sample(samplespec, channelspec['name']))\n return channel\n\n\ndef writexml(spec, specdir, data_rootdir, resultprefix):\n global _ROOT_DATA_FILE\n\n combination = ET.Element(\n \"Combination\", OutputFilePrefix=os.path.join('.', specdir, resultprefix)\n )\n\n with uproot.recreate(os.path.join(data_rootdir, 'data.root')) as _ROOT_DATA_FILE:\n for channelspec in spec['channels']:\n channelfilename = os.path.join(\n specdir, '{0:s}_{1:s}.xml'.format(resultprefix, channelspec['name'])\n )\n with open(channelfilename, 'w') as channelfile:\n channel = build_channel(channelspec, spec.get('data'))\n indent(channel)\n channelfile.write(\n ET.tostring(channel, encoding='utf-8').decode('utf-8')\n )\n\n inp = ET.Element(\"Input\")\n inp.text = channelfilename\n combination.append(inp)\n\n for measurement in spec['toplvl']['measurements']:\n combination.append(build_measurement(measurement))\n indent(combination)\n return ET.tostring(combination, encoding='utf-8')\n", "path": "pyhf/writexml.py"}], "after_files": [{"content": "import logging\n\nimport os\nimport shutil\nimport pkg_resources\nimport xml.etree.cElementTree as ET\nimport numpy as np\nimport uproot\nfrom uproot_methods.classes import TH1\n\n_ROOT_DATA_FILE = None\n\nlog = logging.getLogger(__name__)\n\n\ndef _make_hist_name(channel, sample, modifier='', prefix='hist', suffix=''):\n return \"{prefix}{middle}{suffix}\".format(\n prefix=prefix,\n suffix=suffix,\n middle='_'.join(filter(lambda x: x, [channel, sample, modifier])),\n )\n\n\ndef _export_root_histogram(histname, data):\n h = TH1.from_numpy((np.asarray(data), np.arange(len(data) + 1)))\n h._fName = histname\n # NB: uproot crashes for some reason, figure out why later\n # if histname in _ROOT_DATA_FILE:\n # raise KeyError('Duplicate key {0} being written.'.format(histname))\n _ROOT_DATA_FILE[histname] = h\n\n\n# https://stackoverflow.com/a/4590052\ndef indent(elem, level=0):\n i = \"\\n\" + level * \" \"\n if elem:\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n for elem in elem:\n indent(elem, level + 1)\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n else:\n if level and (not elem.tail or not elem.tail.strip()):\n elem.tail = i\n\n\ndef build_measurement(measurementspec):\n config = measurementspec['config']\n name = measurementspec['name']\n poi = config['poi']\n\n # we want to know which parameters are fixed (constant)\n # and to additionally extract the luminosity information\n fixed_params = []\n lumi = 1.0\n lumierr = 
0.0\n for parameter in config['parameters']:\n if parameter.get('fixed', False):\n pname = parameter['name']\n if pname == 'lumi':\n fixed_params.append('Lumi')\n else:\n fixed_params.append(pname)\n # we found luminosity, so handle it\n if parameter['name'] == 'lumi':\n lumi = parameter['auxdata'][0]\n lumierr = parameter['sigmas'][0]\n\n # define measurement\n meas = ET.Element(\n \"Measurement\",\n Name=name,\n Lumi=str(lumi),\n LumiRelErr=str(lumierr),\n ExportOnly=str(True),\n )\n poiel = ET.Element('POI')\n poiel.text = poi\n meas.append(poiel)\n\n # add fixed parameters (constant)\n if fixed_params:\n se = ET.Element('ParamSetting', Const='True')\n se.text = ' '.join(fixed_params)\n meas.append(se)\n return meas\n\n\ndef build_modifier(modifierspec, channelname, samplename, sampledata):\n if modifierspec['name'] == 'lumi':\n return None\n mod_map = {\n 'histosys': 'HistoSys',\n 'staterror': 'StatError',\n 'normsys': 'OverallSys',\n 'shapesys': 'ShapeSys',\n 'normfactor': 'NormFactor',\n 'shapefactor': 'ShapeFactor',\n }\n\n attrs = {'Name': modifierspec['name']}\n if modifierspec['type'] == 'histosys':\n attrs['HistoNameLow'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='Low'\n )\n attrs['HistoNameHigh'] = _make_hist_name(\n channelname, samplename, modifierspec['name'], suffix='High'\n )\n _export_root_histogram(attrs['HistoNameLow'], modifierspec['data']['lo_data'])\n _export_root_histogram(attrs['HistoNameHigh'], modifierspec['data']['hi_data'])\n elif modifierspec['type'] == 'normsys':\n attrs['High'] = str(modifierspec['data']['hi'])\n attrs['Low'] = str(modifierspec['data']['lo'])\n elif modifierspec['type'] == 'normfactor':\n attrs['Val'] = '1'\n attrs['High'] = '10'\n attrs['Low'] = '0'\n elif modifierspec['type'] == 'staterror':\n attrs['Activate'] = 'True'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n del attrs['Name']\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()\n )\n elif modifierspec['type'] == 'shapesys':\n attrs['ConstraintType'] = 'Poisson'\n attrs['HistoName'] = _make_hist_name(\n channelname, samplename, modifierspec['name']\n )\n # need to make this a relative uncertainty stored in ROOT file\n _export_root_histogram(\n attrs['HistoName'],\n [np.divide(a, b) for a, b in zip(modifierspec['data'], sampledata)],\n )\n else:\n log.warning(\n 'Skipping {0}({1}) for now'.format(\n modifierspec['name'], modifierspec['type']\n )\n )\n\n modifier = ET.Element(mod_map[modifierspec['type']], **attrs)\n return modifier\n\n\ndef build_sample(samplespec, channelname):\n histname = _make_hist_name(channelname, samplespec['name'])\n attrs = {\n 'Name': samplespec['name'],\n 'HistoName': histname,\n 'InputFile': _ROOT_DATA_FILE._path,\n 'NormalizeByTheory': 'False',\n }\n sample = ET.Element('Sample', **attrs)\n for modspec in samplespec['modifiers']:\n # if lumi modifier added for this sample, need to set NormalizeByTheory\n if modspec['type'] == 'lumi':\n sample.attrib.update({'NormalizeByTheory': 'True'})\n modifier = build_modifier(\n modspec, channelname, samplespec['name'], samplespec['data']\n )\n if modifier is not None:\n sample.append(modifier)\n _export_root_histogram(histname, samplespec['data'])\n return sample\n\n\ndef build_data(dataspec, channelname):\n histname = _make_hist_name(channelname, 'data')\n data = ET.Element('Data', HistoName=histname, 
InputFile=_ROOT_DATA_FILE._path)\n _export_root_histogram(histname, dataspec[channelname])\n return data\n\n\ndef build_channel(channelspec, dataspec):\n channel = ET.Element(\n 'Channel', Name=channelspec['name'], InputFile=_ROOT_DATA_FILE._path\n )\n if dataspec:\n data = build_data(dataspec, channelspec['name'])\n channel.append(data)\n for samplespec in channelspec['samples']:\n channel.append(build_sample(samplespec, channelspec['name']))\n return channel\n\n\ndef writexml(spec, specdir, data_rootdir, resultprefix):\n global _ROOT_DATA_FILE\n\n shutil.copyfile(\n pkg_resources.resource_filename(__name__, 'data/HistFactorySchema.dtd'),\n os.path.join(os.path.dirname(specdir), 'HistFactorySchema.dtd'),\n )\n combination = ET.Element(\n \"Combination\", OutputFilePrefix=os.path.join('.', specdir, resultprefix)\n )\n\n with uproot.recreate(os.path.join(data_rootdir, 'data.root')) as _ROOT_DATA_FILE:\n for channelspec in spec['channels']:\n channelfilename = os.path.join(\n specdir, '{0:s}_{1:s}.xml'.format(resultprefix, channelspec['name'])\n )\n with open(channelfilename, 'w') as channelfile:\n channel = build_channel(channelspec, spec.get('data'))\n indent(channel)\n channelfile.write(\n \"<!DOCTYPE Channel SYSTEM '../HistFactorySchema.dtd'>\\n\\n\"\n )\n channelfile.write(\n ET.tostring(channel, encoding='utf-8').decode('utf-8')\n )\n\n inp = ET.Element(\"Input\")\n inp.text = channelfilename\n combination.append(inp)\n\n for measurement in spec['toplvl']['measurements']:\n combination.append(build_measurement(measurement))\n indent(combination)\n return \"<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>\\n\\n\".encode(\n \"utf-8\"\n ) + ET.tostring(combination, encoding='utf-8')\n", "path": "pyhf/writexml.py"}]} | 2,700 | 610 |
gh_patches_debug_10702 | rasdani/github-patches | git_diff | horovod__horovod-2605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dependency links in setup.py causing issues with private registry
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) all
2. Framework version: n/a
3. Horovod version: 0.20.1+
4. MPI version: n/a
5. CUDA version: n/a
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. OS and version:
10. GCC version:
11. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
In Horovod 0.20.x, setup.py was updated to add dependency links. Although the accompanying comment says pip no longer uses them, they still cause issues in enterprise settings where a private registry is used, because pip nevertheless tries to connect to download.pytorch.org. This makes Horovod 0.20.x effectively uninstallable, as pip just waits a very long time for those connections to time out. Can we please remove these from setup.py?
--- END ISSUE ---
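As a sketch of the change being asked for (the full `setup.py` follows below), the fix is simply to stop passing `dependency_links` to `setup()`; installations that still need the `+cpu`/`+gpu` wheel indexes can pass them to pip at install time with `-f/--find-links` instead:

```python
# Sketch only: a trimmed setup() call with the dependency_links argument removed,
# so pip never tries to reach download.pytorch.org while resolving Horovod.
from setuptools import setup

setup(
    name='horovod',
    # ... all other arguments exactly as in the file below ...
    # dependency_links=[...],  # removed; pass `-f <wheel index URL>` to pip instead
    python_requires='>=3.6',
)
```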
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2019 Uber Technologies, Inc. All Rights Reserved.
2 # Modifications copyright Microsoft
3 # Modifications copyright (C) 2020, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # ==============================================================================
17
18 import os
19 import subprocess
20 import sys
21 import textwrap
22
23 from setuptools import setup, Extension, find_packages
24 from setuptools.command.build_ext import build_ext
25
26 from horovod import __version__
27
28
29 class CMakeExtension(Extension):
30 def __init__(self, name, cmake_lists_dir='.', sources=[], **kwa):
31 Extension.__init__(self, name, sources=sources, **kwa)
32 self.cmake_lists_dir = os.path.abspath(cmake_lists_dir)
33
34
35 tensorflow_mpi_lib = CMakeExtension('horovod.tensorflow.mpi_lib',
36 cmake_lists_dir='.', sources=[])
37 torch_mpi_lib_v2 = CMakeExtension('horovod.torch.mpi_lib_v2',
38 cmake_lists_dir='.', sources=[])
39 mxnet_mpi_lib = CMakeExtension('horovod.mxnet.mpi_lib',
40 cmake_lists_dir='.', sources=[])
41
42 def is_build_action():
43 if len(sys.argv) <= 1:
44 return False
45
46 if sys.argv[1].startswith('build'):
47 return True
48
49 if sys.argv[1].startswith('bdist'):
50 return True
51
52 if sys.argv[1].startswith('install'):
53 return True
54
55
56 def get_cmake_bin():
57 return os.environ.get('HOROVOD_CMAKE', 'cmake')
58
59
60 class custom_build_ext(build_ext):
61 def build_extensions(self):
62 cmake_bin = get_cmake_bin()
63
64 config = 'Debug' if self.debug else 'RelWithDebInfo'
65
66 ext_name = self.extensions[0].name
67 build_dir = self.get_ext_fullpath(ext_name).replace(self.get_ext_filename(ext_name), '')
68 build_dir = os.path.abspath(build_dir)
69
70 cmake_args = ['-DCMAKE_BUILD_TYPE=' + config,
71 '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(config.upper(), build_dir),
72 '-DPYTHON_EXECUTABLE:FILEPATH=' + sys.executable]
73
74 make_args = []
75 if self.verbose:
76 make_args.append('VERBOSE=1')
77
78 cmake_build_args = ['--config', config]
79 if make_args:
80 # -- specifies that these args are going to the native build tool: make
81 cmake_build_args += ['--'] + make_args
82
83 if not os.path.exists(self.build_temp):
84 os.makedirs(self.build_temp)
85
86 # Config and build the extension
87 try:
88 subprocess.check_call([cmake_bin, self.extensions[0].cmake_lists_dir] + cmake_args,
89 cwd=self.build_temp)
90 subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,
91 cwd=self.build_temp)
92 except OSError as e:
93 raise RuntimeError('CMake failed: {}'.format(str(e)))
94
95
96 # python packages required to use horovod in general
97 require_list = ['cloudpickle', 'psutil', 'pyyaml', 'dataclasses;python_version<"3.7"']
98
99 # framework dependencies
100 tensorflow_require_list = ['tensorflow']
101 tensorflow_cpu_require_list = ['tensorflow-cpu']
102 tensorflow_gpu_require_list = ['tensorflow-gpu']
103 keras_require_list = ['keras>=2.0.8,!=2.0.9,!=2.1.0,!=2.1.1']
104 pytorch_require_list = ['torch']
105 mxnet_require_list = ['mxnet>=1.4.1']
106 pyspark_require_list = ['pyspark>=2.3.2;python_version<"3.8"',
107 'pyspark>=3.0.0;python_version>="3.8"']
108 # Pin h5py: https://github.com/h5py/h5py/issues/1732
109 spark_require_list = ['h5py<3', 'numpy', 'petastorm>=0.9.8', 'pyarrow>=0.15.0'] + \
110 pyspark_require_list
111 ray_require_list = ['ray']
112
113 # all frameworks' dependencies
114 all_frameworks_require_list = tensorflow_require_list + \
115 tensorflow_gpu_require_list + \
116 keras_require_list + \
117 pytorch_require_list + \
118 mxnet_require_list + \
119 spark_require_list
120
121 # python packages required / recommended to develop horovod
122 # e.g., set of framework versions pinned for development, keep in sync with Dockerfile.test.cpu
123 # NOTE: do not use versions with +cpu or +gpu here as users would need to add --find-links to pip
124 dev_require_list = ['tensorflow-cpu==1.15.0',
125 'keras==2.2.4',
126 'torch==1.2.0',
127 'torchvision==0.4.0',
128 'mxnet==1.5.0',
129 'pyspark==2.4.7'] + spark_require_list
130
131 # python packages required only to run tests
132 # Pin h5py: https://github.com/h5py/h5py/issues/1732
133 test_require_list = ['mock', 'pytest', 'pytest-forked', 'parameterized', 'h5py<3']
134
135 # Skip cffi if pytorch extension explicitly disabled
136 if not os.environ.get('HOROVOD_WITHOUT_PYTORCH'):
137 require_list.append('cffi>=1.4.0')
138
139
140 def get_package_version():
141 return __version__ + "+" + os.environ['HOROVOD_LOCAL_VERSION'] if 'HOROVOD_LOCAL_VERSION' in os.environ else __version__
142
143
144 setup(name='horovod',
145 version=get_package_version(),
146 packages=find_packages(),
147 description='Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.',
148 author='The Horovod Authors',
149 long_description=textwrap.dedent('''\
150 Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
151 The goal of Horovod is to make distributed Deep Learning fast and easy to use.'''),
152 url='https://github.com/horovod/horovod',
153 keywords=['deep learning', 'tensorflow', 'keras', 'pytorch', 'mxnet', 'spark', 'AI'],
154 classifiers=[
155 'License :: OSI Approved :: Apache Software License',
156 'Development Status :: 4 - Beta',
157 'Intended Audience :: Developers',
158 'Topic :: Scientific/Engineering :: Artificial Intelligence',
159 ],
160 ext_modules=[tensorflow_mpi_lib, torch_mpi_lib_v2, mxnet_mpi_lib],
161 cmdclass={'build_ext': custom_build_ext},
162 # cffi is required for PyTorch
163 # If cffi is specified in setup_requires, it will need libffi to be installed on the machine,
164 # which is undesirable. Luckily, `install` action will install cffi before executing build,
165 # so it's only necessary for `build*` or `bdist*` actions.
166 setup_requires=require_list if is_build_action() else [],
167 install_requires=require_list,
168 tests_require=test_require_list,
169 extras_require={
170 'all-frameworks': all_frameworks_require_list,
171 'tensorflow': tensorflow_require_list,
172 'tensorflow-cpu': tensorflow_cpu_require_list,
173 'tensorflow-gpu': tensorflow_gpu_require_list,
174 'keras': keras_require_list,
175 'pytorch': pytorch_require_list,
176 'mxnet': mxnet_require_list,
177 'spark': spark_require_list,
178 'ray': ray_require_list,
179 'dev': dev_require_list,
180 'test': test_require_list,
181 },
182 # not used by pip since 19.0: https://github.com/pypa/pip/issues/4187#issuecomment-415067034
183 # here for completeness as pip install needs some of these via -f for versions with '+cpu'
184 # for examples, see Dockerfile.test.cpu and Dockerfile.test.gpu
185 dependency_links=[
186 'https://download.pytorch.org/whl/torch_stable.html',
187 'https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html'
188 ],
189 python_requires='>=3.6',
190 zip_safe=False,
191 entry_points={
192 'console_scripts': [
193 'horovodrun = horovod.runner.launch:run_commandline'
194 ]
195 })
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -179,13 +179,6 @@
'dev': dev_require_list,
'test': test_require_list,
},
- # not used by pip since 19.0: https://github.com/pypa/pip/issues/4187#issuecomment-415067034
- # here for completeness as pip install needs some of these via -f for versions with '+cpu'
- # for examples, see Dockerfile.test.cpu and Dockerfile.test.gpu
- dependency_links=[
- 'https://download.pytorch.org/whl/torch_stable.html',
- 'https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html'
- ],
python_requires='>=3.6',
zip_safe=False,
entry_points={
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -179,13 +179,6 @@\n 'dev': dev_require_list,\n 'test': test_require_list,\n },\n- # not used by pip since 19.0: https://github.com/pypa/pip/issues/4187#issuecomment-415067034\n- # here for completeness as pip install needs some of these via -f for versions with '+cpu'\n- # for examples, see Dockerfile.test.cpu and Dockerfile.test.gpu\n- dependency_links=[\n- 'https://download.pytorch.org/whl/torch_stable.html',\n- 'https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html'\n- ],\n python_requires='>=3.6',\n zip_safe=False,\n entry_points={\n", "issue": "Dependency links in setup.py causing issues with private registry \n**Environment:**\r\n1. Framework: (TensorFlow, Keras, PyTorch, MXNet) all\r\n2. Framework version: n/a\r\n3. Horovod version: 0.20.1+\r\n4. MPI version: n/a\r\n5. CUDA version: n/a\r\n6. NCCL version:\r\n7. Python version:\r\n8. Spark / PySpark version:\r\n9. OS and version:\r\n10. GCC version:\r\n11. CMake version:\r\n\r\n**Checklist:**\r\n1. Did you search issues to find if somebody asked this question before?\r\n2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?\r\n3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?\r\n4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?\r\n\r\n**Bug report:**\r\nPlease describe erroneous behavior you're observing and steps to reproduce it.\r\n\r\nIn horovod 0.20.x the setup.py was updated to add dependency links. Although it says it is removed from pip, it still causes issues in enterprise settings when a private registry is used as pip for some reason tries to connect to download.pytorch.org. This causes horovod versions 0.20.x to be uninstallable as pip just waits for a timeout on these for a long long time. Can we please remove these from the setup.py? \n", "before_files": [{"content": "# Copyright 2019 Uber Technologies, Inc. All Rights Reserved.\n# Modifications copyright Microsoft\n# Modifications copyright (C) 2020, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport os\nimport subprocess\nimport sys\nimport textwrap\n\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\n\nfrom horovod import __version__\n\n\nclass CMakeExtension(Extension):\n def __init__(self, name, cmake_lists_dir='.', sources=[], **kwa):\n Extension.__init__(self, name, sources=sources, **kwa)\n self.cmake_lists_dir = os.path.abspath(cmake_lists_dir)\n\n\ntensorflow_mpi_lib = CMakeExtension('horovod.tensorflow.mpi_lib',\n cmake_lists_dir='.', sources=[])\ntorch_mpi_lib_v2 = CMakeExtension('horovod.torch.mpi_lib_v2',\n cmake_lists_dir='.', sources=[])\nmxnet_mpi_lib = CMakeExtension('horovod.mxnet.mpi_lib',\n cmake_lists_dir='.', sources=[])\n\ndef is_build_action():\n if len(sys.argv) <= 1:\n return False\n\n if sys.argv[1].startswith('build'):\n return True\n\n if sys.argv[1].startswith('bdist'):\n return True\n\n if sys.argv[1].startswith('install'):\n return True\n\n\ndef get_cmake_bin():\n return os.environ.get('HOROVOD_CMAKE', 'cmake')\n\n\nclass custom_build_ext(build_ext):\n def build_extensions(self):\n cmake_bin = get_cmake_bin()\n\n config = 'Debug' if self.debug else 'RelWithDebInfo'\n\n ext_name = self.extensions[0].name\n build_dir = self.get_ext_fullpath(ext_name).replace(self.get_ext_filename(ext_name), '')\n build_dir = os.path.abspath(build_dir)\n\n cmake_args = ['-DCMAKE_BUILD_TYPE=' + config,\n '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(config.upper(), build_dir),\n '-DPYTHON_EXECUTABLE:FILEPATH=' + sys.executable]\n\n make_args = []\n if self.verbose:\n make_args.append('VERBOSE=1')\n\n cmake_build_args = ['--config', config]\n if make_args:\n # -- specifies that these args are going to the native build tool: make\n cmake_build_args += ['--'] + make_args\n\n if not os.path.exists(self.build_temp):\n os.makedirs(self.build_temp)\n\n # Config and build the extension\n try:\n subprocess.check_call([cmake_bin, self.extensions[0].cmake_lists_dir] + cmake_args,\n cwd=self.build_temp)\n subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,\n cwd=self.build_temp)\n except OSError as e:\n raise RuntimeError('CMake failed: {}'.format(str(e)))\n\n\n# python packages required to use horovod in general\nrequire_list = ['cloudpickle', 'psutil', 'pyyaml', 'dataclasses;python_version<\"3.7\"']\n\n# framework dependencies\ntensorflow_require_list = ['tensorflow']\ntensorflow_cpu_require_list = ['tensorflow-cpu']\ntensorflow_gpu_require_list = ['tensorflow-gpu']\nkeras_require_list = ['keras>=2.0.8,!=2.0.9,!=2.1.0,!=2.1.1']\npytorch_require_list = ['torch']\nmxnet_require_list = ['mxnet>=1.4.1']\npyspark_require_list = ['pyspark>=2.3.2;python_version<\"3.8\"',\n 'pyspark>=3.0.0;python_version>=\"3.8\"']\n# Pin h5py: https://github.com/h5py/h5py/issues/1732\nspark_require_list = ['h5py<3', 'numpy', 'petastorm>=0.9.8', 'pyarrow>=0.15.0'] + \\\n 
pyspark_require_list\nray_require_list = ['ray']\n\n# all frameworks' dependencies\nall_frameworks_require_list = tensorflow_require_list + \\\n tensorflow_gpu_require_list + \\\n keras_require_list + \\\n pytorch_require_list + \\\n mxnet_require_list + \\\n spark_require_list\n\n# python packages required / recommended to develop horovod\n# e.g., set of framework versions pinned for development, keep in sync with Dockerfile.test.cpu\n# NOTE: do not use versions with +cpu or +gpu here as users would need to add --find-links to pip\ndev_require_list = ['tensorflow-cpu==1.15.0',\n 'keras==2.2.4',\n 'torch==1.2.0',\n 'torchvision==0.4.0',\n 'mxnet==1.5.0',\n 'pyspark==2.4.7'] + spark_require_list\n\n# python packages required only to run tests\n# Pin h5py: https://github.com/h5py/h5py/issues/1732\ntest_require_list = ['mock', 'pytest', 'pytest-forked', 'parameterized', 'h5py<3']\n\n# Skip cffi if pytorch extension explicitly disabled\nif not os.environ.get('HOROVOD_WITHOUT_PYTORCH'):\n require_list.append('cffi>=1.4.0')\n\n\ndef get_package_version():\n return __version__ + \"+\" + os.environ['HOROVOD_LOCAL_VERSION'] if 'HOROVOD_LOCAL_VERSION' in os.environ else __version__\n\n\nsetup(name='horovod',\n version=get_package_version(),\n packages=find_packages(),\n description='Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.',\n author='The Horovod Authors',\n long_description=textwrap.dedent('''\\\n Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.\n The goal of Horovod is to make distributed Deep Learning fast and easy to use.'''),\n url='https://github.com/horovod/horovod',\n keywords=['deep learning', 'tensorflow', 'keras', 'pytorch', 'mxnet', 'spark', 'AI'],\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n ext_modules=[tensorflow_mpi_lib, torch_mpi_lib_v2, mxnet_mpi_lib],\n cmdclass={'build_ext': custom_build_ext},\n # cffi is required for PyTorch\n # If cffi is specified in setup_requires, it will need libffi to be installed on the machine,\n # which is undesirable. 
Luckily, `install` action will install cffi before executing build,\n # so it's only necessary for `build*` or `bdist*` actions.\n setup_requires=require_list if is_build_action() else [],\n install_requires=require_list,\n tests_require=test_require_list,\n extras_require={\n 'all-frameworks': all_frameworks_require_list,\n 'tensorflow': tensorflow_require_list,\n 'tensorflow-cpu': tensorflow_cpu_require_list,\n 'tensorflow-gpu': tensorflow_gpu_require_list,\n 'keras': keras_require_list,\n 'pytorch': pytorch_require_list,\n 'mxnet': mxnet_require_list,\n 'spark': spark_require_list,\n 'ray': ray_require_list,\n 'dev': dev_require_list,\n 'test': test_require_list,\n },\n # not used by pip since 19.0: https://github.com/pypa/pip/issues/4187#issuecomment-415067034\n # here for completeness as pip install needs some of these via -f for versions with '+cpu'\n # for examples, see Dockerfile.test.cpu and Dockerfile.test.gpu\n dependency_links=[\n 'https://download.pytorch.org/whl/torch_stable.html',\n 'https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html'\n ],\n python_requires='>=3.6',\n zip_safe=False,\n entry_points={\n 'console_scripts': [\n 'horovodrun = horovod.runner.launch:run_commandline'\n ]\n })\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 Uber Technologies, Inc. All Rights Reserved.\n# Modifications copyright Microsoft\n# Modifications copyright (C) 2020, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport os\nimport subprocess\nimport sys\nimport textwrap\n\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\n\nfrom horovod import __version__\n\n\nclass CMakeExtension(Extension):\n def __init__(self, name, cmake_lists_dir='.', sources=[], **kwa):\n Extension.__init__(self, name, sources=sources, **kwa)\n self.cmake_lists_dir = os.path.abspath(cmake_lists_dir)\n\n\ntensorflow_mpi_lib = CMakeExtension('horovod.tensorflow.mpi_lib',\n cmake_lists_dir='.', sources=[])\ntorch_mpi_lib_v2 = CMakeExtension('horovod.torch.mpi_lib_v2',\n cmake_lists_dir='.', sources=[])\nmxnet_mpi_lib = CMakeExtension('horovod.mxnet.mpi_lib',\n cmake_lists_dir='.', sources=[])\n\ndef is_build_action():\n if len(sys.argv) <= 1:\n return False\n\n if sys.argv[1].startswith('build'):\n return True\n\n if sys.argv[1].startswith('bdist'):\n return True\n\n if sys.argv[1].startswith('install'):\n return True\n\n\ndef get_cmake_bin():\n return os.environ.get('HOROVOD_CMAKE', 'cmake')\n\n\nclass custom_build_ext(build_ext):\n def build_extensions(self):\n cmake_bin = get_cmake_bin()\n\n config = 'Debug' if self.debug else 'RelWithDebInfo'\n\n ext_name = self.extensions[0].name\n build_dir = self.get_ext_fullpath(ext_name).replace(self.get_ext_filename(ext_name), '')\n build_dir = os.path.abspath(build_dir)\n\n cmake_args = ['-DCMAKE_BUILD_TYPE=' + config,\n 
'-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(config.upper(), build_dir),\n '-DPYTHON_EXECUTABLE:FILEPATH=' + sys.executable]\n\n make_args = []\n if self.verbose:\n make_args.append('VERBOSE=1')\n\n cmake_build_args = ['--config', config]\n if make_args:\n # -- specifies that these args are going to the native build tool: make\n cmake_build_args += ['--'] + make_args\n\n if not os.path.exists(self.build_temp):\n os.makedirs(self.build_temp)\n\n # Config and build the extension\n try:\n subprocess.check_call([cmake_bin, self.extensions[0].cmake_lists_dir] + cmake_args,\n cwd=self.build_temp)\n subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,\n cwd=self.build_temp)\n except OSError as e:\n raise RuntimeError('CMake failed: {}'.format(str(e)))\n\n\n# python packages required to use horovod in general\nrequire_list = ['cloudpickle', 'psutil', 'pyyaml', 'dataclasses;python_version<\"3.7\"']\n\n# framework dependencies\ntensorflow_require_list = ['tensorflow']\ntensorflow_cpu_require_list = ['tensorflow-cpu']\ntensorflow_gpu_require_list = ['tensorflow-gpu']\nkeras_require_list = ['keras>=2.0.8,!=2.0.9,!=2.1.0,!=2.1.1']\npytorch_require_list = ['torch']\nmxnet_require_list = ['mxnet>=1.4.1']\npyspark_require_list = ['pyspark>=2.3.2;python_version<\"3.8\"',\n 'pyspark>=3.0.0;python_version>=\"3.8\"']\n# Pin h5py: https://github.com/h5py/h5py/issues/1732\nspark_require_list = ['h5py<3', 'numpy', 'petastorm>=0.9.8', 'pyarrow>=0.15.0'] + \\\n pyspark_require_list\nray_require_list = ['ray']\n\n# all frameworks' dependencies\nall_frameworks_require_list = tensorflow_require_list + \\\n tensorflow_gpu_require_list + \\\n keras_require_list + \\\n pytorch_require_list + \\\n mxnet_require_list + \\\n spark_require_list\n\n# python packages required / recommended to develop horovod\n# e.g., set of framework versions pinned for development, keep in sync with Dockerfile.test.cpu\n# NOTE: do not use versions with +cpu or +gpu here as users would need to add --find-links to pip\ndev_require_list = ['tensorflow-cpu==1.15.0',\n 'keras==2.2.4',\n 'torch==1.2.0',\n 'torchvision==0.4.0',\n 'mxnet==1.5.0',\n 'pyspark==2.4.7'] + spark_require_list\n\n# python packages required only to run tests\n# Pin h5py: https://github.com/h5py/h5py/issues/1732\ntest_require_list = ['mock', 'pytest', 'pytest-forked', 'parameterized', 'h5py<3']\n\n# Skip cffi if pytorch extension explicitly disabled\nif not os.environ.get('HOROVOD_WITHOUT_PYTORCH'):\n require_list.append('cffi>=1.4.0')\n\n\ndef get_package_version():\n return __version__ + \"+\" + os.environ['HOROVOD_LOCAL_VERSION'] if 'HOROVOD_LOCAL_VERSION' in os.environ else __version__\n\n\nsetup(name='horovod',\n version=get_package_version(),\n packages=find_packages(),\n description='Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.',\n author='The Horovod Authors',\n long_description=textwrap.dedent('''\\\n Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.\n The goal of Horovod is to make distributed Deep Learning fast and easy to use.'''),\n url='https://github.com/horovod/horovod',\n keywords=['deep learning', 'tensorflow', 'keras', 'pytorch', 'mxnet', 'spark', 'AI'],\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n ext_modules=[tensorflow_mpi_lib, torch_mpi_lib_v2, mxnet_mpi_lib],\n cmdclass={'build_ext': 
custom_build_ext},\n # cffi is required for PyTorch\n # If cffi is specified in setup_requires, it will need libffi to be installed on the machine,\n # which is undesirable. Luckily, `install` action will install cffi before executing build,\n # so it's only necessary for `build*` or `bdist*` actions.\n setup_requires=require_list if is_build_action() else [],\n install_requires=require_list,\n tests_require=test_require_list,\n extras_require={\n 'all-frameworks': all_frameworks_require_list,\n 'tensorflow': tensorflow_require_list,\n 'tensorflow-cpu': tensorflow_cpu_require_list,\n 'tensorflow-gpu': tensorflow_gpu_require_list,\n 'keras': keras_require_list,\n 'pytorch': pytorch_require_list,\n 'mxnet': mxnet_require_list,\n 'spark': spark_require_list,\n 'ray': ray_require_list,\n 'dev': dev_require_list,\n 'test': test_require_list,\n },\n python_requires='>=3.6',\n zip_safe=False,\n entry_points={\n 'console_scripts': [\n 'horovodrun = horovod.runner.launch:run_commandline'\n ]\n })\n", "path": "setup.py"}]} | 3,025 | 201 |
gh_patches_debug_18892 | rasdani/github-patches | git_diff | python__peps-634 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pep2rss can raise UnicodeEncodeError on certain platforms
This is due to the default encoding of [`open()`](https://docs.python.org/3/library/functions.html#open) being platform dependent.
For example, on Windows (CP-1252), it can raise
`UnicodeEncodeError: 'charmap' codec can't encode character '\u0141' in position 1545: character maps to <undefined>`
when handling the character, 'Ł', in the name of the author of [PEP 569](https://www.python.org/dev/peps/pep-0569/).
https://github.com/python/peps/blob/433faca4cf1bb8a0550d01b86b2b4581b3eda460/pep-0569.rst#L5
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pep2rss.py`
Content:
```
1 #!/usr/bin/env python
2
3 # usage: pep-hook.py $REPOS $REV
4 # (standard post-commit args)
5
6 import os, glob, time, datetime, stat, re, sys
7 import codecs
8 import PyRSS2Gen as rssgen
9
10 RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
11
12 def firstline_startingwith(full_path, text):
13 for line in codecs.open(full_path, encoding="utf-8"):
14 if line.startswith(text):
15 return line[len(text):].strip()
16 return None
17
18 # get list of peps with creation time (from "Created:" string in pep .txt)
19 peps = glob.glob('pep-*.txt')
20 def pep_creation_dt(full_path):
21 created_str = firstline_startingwith(full_path, 'Created:')
22 # bleh, I was hoping to avoid re but some PEPs editorialize
23 # on the Created line
24 m = re.search(r'''(\d+-\w+-\d{4})''', created_str)
25 if not m:
26 # some older ones have an empty line, that's okay, if it's old
27 # we ipso facto don't care about it.
28 # "return None" would make the most sense but datetime objects
29 # refuse to compare with that. :-|
30 return datetime.datetime(*time.localtime(0)[:6])
31 created_str = m.group(1)
32 try:
33 t = time.strptime(created_str, '%d-%b-%Y')
34 except ValueError:
35 t = time.strptime(created_str, '%d-%B-%Y')
36 return datetime.datetime(*t[:6])
37 peps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]
38 # sort peps by date, newest first
39 peps_with_dt.sort(reverse=True)
40
41 # generate rss items for 10 most recent peps
42 items = []
43 for dt, full_path in peps_with_dt[:10]:
44 try:
45 n = int(full_path.split('-')[-1].split('.')[0])
46 except ValueError:
47 pass
48 title = firstline_startingwith(full_path, 'Title:')
49 author = firstline_startingwith(full_path, 'Author:')
50 url = 'http://www.python.org/dev/peps/pep-%0.4d' % n
51 item = rssgen.RSSItem(
52 title = 'PEP %d: %s' % (n, title),
53 link = url,
54 description = 'Author: %s' % author,
55 guid = rssgen.Guid(url),
56 pubDate = dt)
57 items.append(item)
58
59 # the rss envelope
60 desc = """
61 Newest Python Enhancement Proposals (PEPs) - Information on new
62 language features, and some meta-information like release
63 procedure and schedules
64 """.strip()
65 rss = rssgen.RSS2(
66 title = 'Newest Python PEPs',
67 link = 'http://www.python.org/dev/peps',
68 description = desc,
69 lastBuildDate = datetime.datetime.now(),
70 items = items)
71
72 with open(RSS_PATH, 'w') as fp:
73 fp.write(rss.to_xml())
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pep2rss.py b/pep2rss.py
--- a/pep2rss.py
+++ b/pep2rss.py
@@ -1,16 +1,15 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# usage: pep-hook.py $REPOS $REV
# (standard post-commit args)
import os, glob, time, datetime, stat, re, sys
-import codecs
import PyRSS2Gen as rssgen
RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
def firstline_startingwith(full_path, text):
- for line in codecs.open(full_path, encoding="utf-8"):
+ for line in open(full_path, encoding="utf-8"):
if line.startswith(text):
return line[len(text):].strip()
return None
@@ -69,5 +68,5 @@
lastBuildDate = datetime.datetime.now(),
items = items)
-with open(RSS_PATH, 'w') as fp:
- fp.write(rss.to_xml())
+with open(RSS_PATH, 'w', encoding="utf-8") as fp:
+ fp.write(rss.to_xml(encoding="utf-8"))
| {"golden_diff": "diff --git a/pep2rss.py b/pep2rss.py\n--- a/pep2rss.py\n+++ b/pep2rss.py\n@@ -1,16 +1,15 @@\n-#!/usr/bin/env python\n+#!/usr/bin/env python3\n \n # usage: pep-hook.py $REPOS $REV\n # (standard post-commit args)\n \n import os, glob, time, datetime, stat, re, sys\n-import codecs\n import PyRSS2Gen as rssgen\n \n RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n \n def firstline_startingwith(full_path, text):\n- for line in codecs.open(full_path, encoding=\"utf-8\"):\n+ for line in open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n@@ -69,5 +68,5 @@\n lastBuildDate = datetime.datetime.now(),\n items = items)\n \n-with open(RSS_PATH, 'w') as fp:\n- fp.write(rss.to_xml())\n+with open(RSS_PATH, 'w', encoding=\"utf-8\") as fp:\n+ fp.write(rss.to_xml(encoding=\"utf-8\"))\n", "issue": "pep2rss can raise UnicodeEncodeError on certain platforms\nThis is due to the default encoding of [`open()`](https://docs.python.org/3/library/functions.html#open) being platform dependent.\r\n\r\nFor example, on Windows (CP-1252), it can raise\r\n`UnicodeEncodeError: 'charmap' codec can't encode character '\\u0141' in position 1545: character maps to <undefined>`\r\nwhen handling the character, '\u0141', in the name of the author of [PEP 569](https://www.python.org/dev/peps/pep-0569/).\r\nhttps://github.com/python/peps/blob/433faca4cf1bb8a0550d01b86b2b4581b3eda460/pep-0569.rst#L5\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport codecs\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in codecs.open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w') as fp:\n fp.write(rss.to_xml())\n", "path": "pep2rss.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w', encoding=\"utf-8\") as fp:\n fp.write(rss.to_xml(encoding=\"utf-8\"))\n", "path": "pep2rss.py"}]} | 1,271 | 270 |
gh_patches_debug_30551 | rasdani/github-patches | git_diff | beeware__toga-717 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix octal literals in constants.py on the Windows platform, so it will work in Python 3.
The literals in the file won't work in Python 3, and by fixing them you can get as far as an empty window on Windows.
P.S. This is my first time doing a pull request on GitHub, hope this is ok.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/winforms/toga_winforms/libs.py`
Content:
```
1 import ctypes
2
3 import clr
4
5 clr.AddReference("System.Windows.Forms")
6
7 import System.Windows.Forms as WinForms # noqa: E402
8 from System import Decimal as ClrDecimal # noqa: E402, F401
9 from System import Single # noqa: E402, F401
10 from System import Convert # noqa: E402, F401
11 from System import DateTime as WinDateTime # noqa: E402, F401
12 from System import Threading # noqa: E402, F401
13 from System import Uri # noqa: E402, F401
14 from System import Environment # noqa: E402, F401
15
16 from System.Drawing import Icon as WinIcon # noqa: E402, F401
17 from System.Drawing import Image as WinImage # noqa: E402, F401
18 from System.Drawing import Font as WinFont # noqa: E402, F401
19 from System.Drawing import ContentAlignment, Size, Point # noqa: E402, F401
20 from System.Drawing import FontFamily, FontStyle, SystemFonts # noqa: E402, F401
21 from System.Drawing import Text, Color, Bitmap # noqa: E402, F401
22 from toga.constants import LEFT, RIGHT, CENTER, JUSTIFY # noqa: E402
23 from toga.fonts import (
24 MESSAGE,
25 SYSTEM,
26 SERIF,
27 SANS_SERIF,
28 CURSIVE,
29 FANTASY,
30 MONOSPACE,
31 ) # noqa: E402
32
33 user32 = ctypes.windll.user32
34 win_version = Environment.OSVersion.Version.Major
35
36
37 def TextAlignment(value):
38 return {
39 LEFT: ContentAlignment.MiddleLeft,
40 RIGHT: ContentAlignment.MiddleRight,
41 CENTER: ContentAlignment.MiddleCenter,
42 JUSTIFY: ContentAlignment.MiddleLeft,
43 }[value]
44
45
46 # Justify simply sets Left alignment. Is this the best option?
47 def HorizontalTextAlignment(value):
48 return {
49 LEFT: WinForms.HorizontalAlignment.Left,
50 RIGHT: WinForms.HorizontalAlignment.Right,
51 CENTER: WinForms.HorizontalAlignment.Center,
52 JUSTIFY: WinForms.HorizontalAlignment.Left,
53 }[value]
54
55
56 def add_handler(cmd):
57 action = cmd.action
58
59 def handler(sender, event):
60 return action(None)
61
62 return handler
63
64
65 def win_font_family(value):
66 win_families = {
67 SYSTEM: SystemFonts.DefaultFont.FontFamily,
68 MESSAGE: SystemFonts.MenuFont.FontFamily,
69 SERIF: FontFamily.GenericSerif,
70 SANS_SERIF: FontFamily.GenericSansSerif,
71 CURSIVE: FontFamily("Comic Sans MS"),
72 FANTASY: FontFamily("Impact"),
73 MONOSPACE: FontFamily.GenericMonospace,
74 }
75 for key in win_families:
76 if value in key:
77 return win_families[key]
78 if value in Text.InstalledFontCollection().Families:
79 return FontFamily(value)
80 else:
81 print(
82 "Unable to load font-family '{}', loading {} instead".format(
83 value, SystemFonts.DefaultFont.FontFamily)
84 )
85 return SystemFonts.DefaultFont.FontFamily
86
```
Path: `src/winforms/toga_winforms/app.py`
Content:
```
1 import sys
2
3 import toga
4
5 from .libs import Threading, WinForms, add_handler, user32, win_version
6 from .window import Window
7
8
9 class MainWindow(Window):
10 def on_close(self):
11 pass
12
13
14 class App:
15 _MAIN_WINDOW_CLASS = MainWindow
16
17 def __init__(self, interface):
18 self.interface = interface
19 self.interface._impl = self
20
21 def create(self):
22 self.native = WinForms.Application
23
24 if win_version >= 6:
25 user32.SetProcessDPIAware(True)
26 self.native.EnableVisualStyles()
27 self.native.SetCompatibleTextRenderingDefault(False)
28
29 self.interface.commands.add(
30 toga.Command(None, 'About ' + self.interface.name, group=toga.Group.HELP),
31 toga.Command(None, 'Preferences', group=toga.Group.FILE),
32 # Quit should always be the last item, in a section on it's own
33 toga.Command(lambda s: self.exit(), 'Exit ' + self.interface.name, shortcut='q', group=toga.Group.FILE,
34 section=sys.maxsize),
35 toga.Command(None, 'Visit homepage', group=toga.Group.HELP)
36 )
37 self._create_app_commands()
38
39 # Call user code to populate the main window
40 self.interface.startup()
41 self._menu_items = {}
42 self.create_menus()
43 self.interface.main_window._impl.native.Icon = \
44 self.interface.icon.bind(self.interface.factory).native
45
46 def create_menus(self):
47 toga.Group.FILE.order = 0
48 # Only create the menu if the menu item index has been created.
49 if hasattr(self, '_menu_items'):
50 menubar = WinForms.MenuStrip()
51 submenu = None
52 for cmd in self.interface.commands:
53 if cmd == toga.GROUP_BREAK:
54 menubar.Items.Add(submenu)
55 submenu = None
56 elif cmd == toga.SECTION_BREAK:
57 submenu.DropDownItems.Add('-')
58 else:
59 if submenu is None:
60 submenu = WinForms.ToolStripMenuItem(cmd.group.label)
61 item = WinForms.ToolStripMenuItem(cmd.label)
62 if cmd.action:
63 item.Click += add_handler(cmd)
64 else:
65 item.Enabled = False
66 cmd._widgets.append(item)
67 self._menu_items[item] = cmd
68 submenu.DropDownItems.Add(item)
69 if submenu:
70 menubar.Items.Add(submenu)
71 self.interface.main_window._impl.native.Controls.Add(menubar)
72 self.interface.main_window._impl.native.MainMenuStrip = menubar
73 self.interface.main_window.content.refresh()
74
75 def _create_app_commands(self):
76 # No extra menus
77 pass
78
79 def open_document(self, fileURL):
80 '''Add a new document to this app.'''
81 print("STUB: If you want to handle opening documents, implement App.open_document(fileURL)")
82
83 def run_app(self):
84 self.create()
85 self.native.Run(self.interface.main_window._impl.native)
86
87 def main_loop(self):
88 thread = Threading.Thread(Threading.ThreadStart(self.run_app))
89 thread.SetApartmentState(Threading.ApartmentState.STA)
90 thread.Start()
91 thread.Join()
92
93 def exit(self):
94 self.native.Exit()
95
96 def set_on_exit(self, value):
97 pass
98
99 def current_window(self):
100 self.interface.factory.not_implemented('App.current_window()')
101
102 def enter_full_screen(self, windows):
103 self.interface.factory.not_implemented('App.enter_full_screen()')
104
105 def exit_full_screen(self, windows):
106 self.interface.factory.not_implemented('App.exit_full_screen()')
107
108 def set_cursor(self, value):
109 self.interface.factory.not_implemented('App.set_cursor()')
110
111 def show_cursor(self):
112 self.interface.factory.not_implemented('App.show_cursor()')
113
114 def hide_cursor(self):
115 self.interface.factory.not_implemented('App.hide_cursor()')
116
117
118 class DocumentApp(App):
119 def _create_app_commands(self):
120 self.interface.commands.add(
121 toga.Command(
122 lambda w: self.open_file,
123 label='Open...',
124 shortcut='o',
125 group=toga.Group.FILE,
126 section=0
127 ),
128 )
129
130 def open_document(self, fileURL):
131 """Open a new document in this app.
132
133 Args:
134 fileURL (str): The URL/path to the file to add as a document.
135 """
136 self.interface.factory.not_implemented('DocumentApp.open_document()')
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/winforms/toga_winforms/app.py b/src/winforms/toga_winforms/app.py
--- a/src/winforms/toga_winforms/app.py
+++ b/src/winforms/toga_winforms/app.py
@@ -2,7 +2,7 @@
import toga
-from .libs import Threading, WinForms, add_handler, user32, win_version
+from .libs import Threading, WinForms, add_handler, user32, win_version, shcore
from .window import Window
@@ -21,8 +21,24 @@
def create(self):
self.native = WinForms.Application
- if win_version >= 6:
- user32.SetProcessDPIAware(True)
+ # Check the version of windows and make sure we are setting the DPI mode
+ # with the most up to date API
+ # Windows Versioning Check Sources : https://www.lifewire.com/windows-version-numbers-2625171
+ # and https://docs.microsoft.com/en-us/windows/release-information/
+ if win_version.Major >= 6: # Checks for Windows Vista or later
+ # Represents Windows 8.1 up to Windows 10 before Build 1703 which should use
+ # SetProcessDpiAwareness(True)
+ if ((win_version.Major == 6 and win_version.Minor == 3) or
+ (win_version.Major == 10 and win_version.Build < 15063)):
+ shcore.SetProcessDpiAwareness(True)
+ # Represents Windows 10 Build 1703 and beyond which should use
+ # SetProcessDpiAwarenessContext(-2)
+ elif win_version.Major == 10 and win_version.Build >= 15063:
+ user32.SetProcessDpiAwarenessContext(-2)
+ # Any other version of windows should use SetProcessDPIAware(True)
+ else:
+ user32.SetProcessDPIAware(True)
+
self.native.EnableVisualStyles()
self.native.SetCompatibleTextRenderingDefault(False)
diff --git a/src/winforms/toga_winforms/libs.py b/src/winforms/toga_winforms/libs.py
--- a/src/winforms/toga_winforms/libs.py
+++ b/src/winforms/toga_winforms/libs.py
@@ -31,7 +31,8 @@
) # noqa: E402
user32 = ctypes.windll.user32
-win_version = Environment.OSVersion.Version.Major
+shcore = ctypes.windll.shcore
+win_version = Environment.OSVersion.Version
def TextAlignment(value):
| {"golden_diff": "diff --git a/src/winforms/toga_winforms/app.py b/src/winforms/toga_winforms/app.py\n--- a/src/winforms/toga_winforms/app.py\n+++ b/src/winforms/toga_winforms/app.py\n@@ -2,7 +2,7 @@\n \n import toga\n \n-from .libs import Threading, WinForms, add_handler, user32, win_version\n+from .libs import Threading, WinForms, add_handler, user32, win_version, shcore\n from .window import Window\n \n \n@@ -21,8 +21,24 @@\n def create(self):\n self.native = WinForms.Application\n \n- if win_version >= 6:\n- user32.SetProcessDPIAware(True)\n+ # Check the version of windows and make sure we are setting the DPI mode\n+ # with the most up to date API\n+ # Windows Versioning Check Sources : https://www.lifewire.com/windows-version-numbers-2625171\n+ # and https://docs.microsoft.com/en-us/windows/release-information/\n+ if win_version.Major >= 6: # Checks for Windows Vista or later\n+ # Represents Windows 8.1 up to Windows 10 before Build 1703 which should use\n+ # SetProcessDpiAwareness(True)\n+ if ((win_version.Major == 6 and win_version.Minor == 3) or\n+ (win_version.Major == 10 and win_version.Build < 15063)):\n+ shcore.SetProcessDpiAwareness(True)\n+ # Represents Windows 10 Build 1703 and beyond which should use\n+ # SetProcessDpiAwarenessContext(-2)\n+ elif win_version.Major == 10 and win_version.Build >= 15063:\n+ user32.SetProcessDpiAwarenessContext(-2)\n+ # Any other version of windows should use SetProcessDPIAware(True)\n+ else:\n+ user32.SetProcessDPIAware(True)\n+\n self.native.EnableVisualStyles()\n self.native.SetCompatibleTextRenderingDefault(False)\n \ndiff --git a/src/winforms/toga_winforms/libs.py b/src/winforms/toga_winforms/libs.py\n--- a/src/winforms/toga_winforms/libs.py\n+++ b/src/winforms/toga_winforms/libs.py\n@@ -31,7 +31,8 @@\n ) # noqa: E402\n \n user32 = ctypes.windll.user32\n-win_version = Environment.OSVersion.Version.Major\n+shcore = ctypes.windll.shcore\n+win_version = Environment.OSVersion.Version\n \n \n def TextAlignment(value):\n", "issue": "Fix octal literals in constants.py on the Windows platform, so it will work in Python 3.\nThe literals in the file won't work in Python 3, and by fixing them you can get as far as an empty window on Windows.\n\nP.S. 
This is my first time doing a pull request on GitHub, hope this is ok.\n\n", "before_files": [{"content": "import ctypes\n\nimport clr\n\nclr.AddReference(\"System.Windows.Forms\")\n\nimport System.Windows.Forms as WinForms # noqa: E402\nfrom System import Decimal as ClrDecimal # noqa: E402, F401\nfrom System import Single # noqa: E402, F401\nfrom System import Convert # noqa: E402, F401\nfrom System import DateTime as WinDateTime # noqa: E402, F401\nfrom System import Threading # noqa: E402, F401\nfrom System import Uri # noqa: E402, F401\nfrom System import Environment # noqa: E402, F401\n\nfrom System.Drawing import Icon as WinIcon # noqa: E402, F401\nfrom System.Drawing import Image as WinImage # noqa: E402, F401\nfrom System.Drawing import Font as WinFont # noqa: E402, F401\nfrom System.Drawing import ContentAlignment, Size, Point # noqa: E402, F401\nfrom System.Drawing import FontFamily, FontStyle, SystemFonts # noqa: E402, F401\nfrom System.Drawing import Text, Color, Bitmap # noqa: E402, F401\nfrom toga.constants import LEFT, RIGHT, CENTER, JUSTIFY # noqa: E402\nfrom toga.fonts import (\n MESSAGE,\n SYSTEM,\n SERIF,\n SANS_SERIF,\n CURSIVE,\n FANTASY,\n MONOSPACE,\n) # noqa: E402\n\nuser32 = ctypes.windll.user32\nwin_version = Environment.OSVersion.Version.Major\n\n\ndef TextAlignment(value):\n return {\n LEFT: ContentAlignment.MiddleLeft,\n RIGHT: ContentAlignment.MiddleRight,\n CENTER: ContentAlignment.MiddleCenter,\n JUSTIFY: ContentAlignment.MiddleLeft,\n }[value]\n\n\n# Justify simply sets Left alignment. Is this the best option?\ndef HorizontalTextAlignment(value):\n return {\n LEFT: WinForms.HorizontalAlignment.Left,\n RIGHT: WinForms.HorizontalAlignment.Right,\n CENTER: WinForms.HorizontalAlignment.Center,\n JUSTIFY: WinForms.HorizontalAlignment.Left,\n }[value]\n\n\ndef add_handler(cmd):\n action = cmd.action\n\n def handler(sender, event):\n return action(None)\n\n return handler\n\n\ndef win_font_family(value):\n win_families = {\n SYSTEM: SystemFonts.DefaultFont.FontFamily,\n MESSAGE: SystemFonts.MenuFont.FontFamily,\n SERIF: FontFamily.GenericSerif,\n SANS_SERIF: FontFamily.GenericSansSerif,\n CURSIVE: FontFamily(\"Comic Sans MS\"),\n FANTASY: FontFamily(\"Impact\"),\n MONOSPACE: FontFamily.GenericMonospace,\n }\n for key in win_families:\n if value in key:\n return win_families[key]\n if value in Text.InstalledFontCollection().Families:\n return FontFamily(value)\n else:\n print(\n \"Unable to load font-family '{}', loading {} instead\".format(\n value, SystemFonts.DefaultFont.FontFamily)\n )\n return SystemFonts.DefaultFont.FontFamily\n", "path": "src/winforms/toga_winforms/libs.py"}, {"content": "import sys\n\nimport toga\n\nfrom .libs import Threading, WinForms, add_handler, user32, win_version\nfrom .window import Window\n\n\nclass MainWindow(Window):\n def on_close(self):\n pass\n\n\nclass App:\n _MAIN_WINDOW_CLASS = MainWindow\n\n def __init__(self, interface):\n self.interface = interface\n self.interface._impl = self\n\n def create(self):\n self.native = WinForms.Application\n\n if win_version >= 6:\n user32.SetProcessDPIAware(True)\n self.native.EnableVisualStyles()\n self.native.SetCompatibleTextRenderingDefault(False)\n\n self.interface.commands.add(\n toga.Command(None, 'About ' + self.interface.name, group=toga.Group.HELP),\n toga.Command(None, 'Preferences', group=toga.Group.FILE),\n # Quit should always be the last item, in a section on it's own\n toga.Command(lambda s: self.exit(), 'Exit ' + self.interface.name, shortcut='q', group=toga.Group.FILE,\n 
section=sys.maxsize),\n toga.Command(None, 'Visit homepage', group=toga.Group.HELP)\n )\n self._create_app_commands()\n\n # Call user code to populate the main window\n self.interface.startup()\n self._menu_items = {}\n self.create_menus()\n self.interface.main_window._impl.native.Icon = \\\n self.interface.icon.bind(self.interface.factory).native\n\n def create_menus(self):\n toga.Group.FILE.order = 0\n # Only create the menu if the menu item index has been created.\n if hasattr(self, '_menu_items'):\n menubar = WinForms.MenuStrip()\n submenu = None\n for cmd in self.interface.commands:\n if cmd == toga.GROUP_BREAK:\n menubar.Items.Add(submenu)\n submenu = None\n elif cmd == toga.SECTION_BREAK:\n submenu.DropDownItems.Add('-')\n else:\n if submenu is None:\n submenu = WinForms.ToolStripMenuItem(cmd.group.label)\n item = WinForms.ToolStripMenuItem(cmd.label)\n if cmd.action:\n item.Click += add_handler(cmd)\n else:\n item.Enabled = False\n cmd._widgets.append(item)\n self._menu_items[item] = cmd\n submenu.DropDownItems.Add(item)\n if submenu:\n menubar.Items.Add(submenu)\n self.interface.main_window._impl.native.Controls.Add(menubar)\n self.interface.main_window._impl.native.MainMenuStrip = menubar\n self.interface.main_window.content.refresh()\n\n def _create_app_commands(self):\n # No extra menus\n pass\n\n def open_document(self, fileURL):\n '''Add a new document to this app.'''\n print(\"STUB: If you want to handle opening documents, implement App.open_document(fileURL)\")\n\n def run_app(self):\n self.create()\n self.native.Run(self.interface.main_window._impl.native)\n\n def main_loop(self):\n thread = Threading.Thread(Threading.ThreadStart(self.run_app))\n thread.SetApartmentState(Threading.ApartmentState.STA)\n thread.Start()\n thread.Join()\n\n def exit(self):\n self.native.Exit()\n\n def set_on_exit(self, value):\n pass\n\n def current_window(self):\n self.interface.factory.not_implemented('App.current_window()')\n\n def enter_full_screen(self, windows):\n self.interface.factory.not_implemented('App.enter_full_screen()')\n\n def exit_full_screen(self, windows):\n self.interface.factory.not_implemented('App.exit_full_screen()')\n\n def set_cursor(self, value):\n self.interface.factory.not_implemented('App.set_cursor()')\n\n def show_cursor(self):\n self.interface.factory.not_implemented('App.show_cursor()')\n\n def hide_cursor(self):\n self.interface.factory.not_implemented('App.hide_cursor()')\n\n\nclass DocumentApp(App):\n def _create_app_commands(self):\n self.interface.commands.add(\n toga.Command(\n lambda w: self.open_file,\n label='Open...',\n shortcut='o',\n group=toga.Group.FILE,\n section=0\n ),\n )\n\n def open_document(self, fileURL):\n \"\"\"Open a new document in this app.\n\n Args:\n fileURL (str): The URL/path to the file to add as a document.\n \"\"\"\n self.interface.factory.not_implemented('DocumentApp.open_document()')\n", "path": "src/winforms/toga_winforms/app.py"}], "after_files": [{"content": "import ctypes\n\nimport clr\n\nclr.AddReference(\"System.Windows.Forms\")\n\nimport System.Windows.Forms as WinForms # noqa: E402\nfrom System import Decimal as ClrDecimal # noqa: E402, F401\nfrom System import Single # noqa: E402, F401\nfrom System import Convert # noqa: E402, F401\nfrom System import DateTime as WinDateTime # noqa: E402, F401\nfrom System import Threading # noqa: E402, F401\nfrom System import Uri # noqa: E402, F401\nfrom System import Environment # noqa: E402, F401\n\nfrom System.Drawing import Icon as WinIcon # noqa: E402, F401\nfrom 
System.Drawing import Image as WinImage # noqa: E402, F401\nfrom System.Drawing import Font as WinFont # noqa: E402, F401\nfrom System.Drawing import ContentAlignment, Size, Point # noqa: E402, F401\nfrom System.Drawing import FontFamily, FontStyle, SystemFonts # noqa: E402, F401\nfrom System.Drawing import Text, Color, Bitmap # noqa: E402, F401\nfrom toga.constants import LEFT, RIGHT, CENTER, JUSTIFY # noqa: E402\nfrom toga.fonts import (\n MESSAGE,\n SYSTEM,\n SERIF,\n SANS_SERIF,\n CURSIVE,\n FANTASY,\n MONOSPACE,\n) # noqa: E402\n\nuser32 = ctypes.windll.user32\nshcore = ctypes.windll.shcore\nwin_version = Environment.OSVersion.Version\n\n\ndef TextAlignment(value):\n return {\n LEFT: ContentAlignment.MiddleLeft,\n RIGHT: ContentAlignment.MiddleRight,\n CENTER: ContentAlignment.MiddleCenter,\n JUSTIFY: ContentAlignment.MiddleLeft,\n }[value]\n\n\n# Justify simply sets Left alignment. Is this the best option?\ndef HorizontalTextAlignment(value):\n return {\n LEFT: WinForms.HorizontalAlignment.Left,\n RIGHT: WinForms.HorizontalAlignment.Right,\n CENTER: WinForms.HorizontalAlignment.Center,\n JUSTIFY: WinForms.HorizontalAlignment.Left,\n }[value]\n\n\ndef add_handler(cmd):\n action = cmd.action\n\n def handler(sender, event):\n return action(None)\n\n return handler\n\n\ndef win_font_family(value):\n win_families = {\n SYSTEM: SystemFonts.DefaultFont.FontFamily,\n MESSAGE: SystemFonts.MenuFont.FontFamily,\n SERIF: FontFamily.GenericSerif,\n SANS_SERIF: FontFamily.GenericSansSerif,\n CURSIVE: FontFamily(\"Comic Sans MS\"),\n FANTASY: FontFamily(\"Impact\"),\n MONOSPACE: FontFamily.GenericMonospace,\n }\n for key in win_families:\n if value in key:\n return win_families[key]\n if value in Text.InstalledFontCollection().Families:\n return FontFamily(value)\n else:\n print(\n \"Unable to load font-family '{}', loading {} instead\".format(\n value, SystemFonts.DefaultFont.FontFamily)\n )\n return SystemFonts.DefaultFont.FontFamily\n", "path": "src/winforms/toga_winforms/libs.py"}, {"content": "import sys\n\nimport toga\n\nfrom .libs import Threading, WinForms, add_handler, user32, win_version, shcore\nfrom .window import Window\n\n\nclass MainWindow(Window):\n def on_close(self):\n pass\n\n\nclass App:\n _MAIN_WINDOW_CLASS = MainWindow\n\n def __init__(self, interface):\n self.interface = interface\n self.interface._impl = self\n\n def create(self):\n self.native = WinForms.Application\n\n # Check the version of windows and make sure we are setting the DPI mode\n # with the most up to date API\n # Windows Versioning Check Sources : https://www.lifewire.com/windows-version-numbers-2625171\n # and https://docs.microsoft.com/en-us/windows/release-information/\n if win_version.Major >= 6: # Checks for Windows Vista or later\n # Represents Windows 8.1 up to Windows 10 before Build 1703 which should use\n # SetProcessDpiAwareness(True)\n if ((win_version.Major == 6 and win_version.Minor == 3) or\n (win_version.Major == 10 and win_version.Build < 15063)):\n shcore.SetProcessDpiAwareness(True)\n # Represents Windows 10 Build 1703 and beyond which should use\n # SetProcessDpiAwarenessContext(-2)\n elif win_version.Major == 10 and win_version.Build >= 15063:\n user32.SetProcessDpiAwarenessContext(-2)\n # Any other version of windows should use SetProcessDPIAware(True)\n else:\n user32.SetProcessDPIAware(True)\n\n self.native.EnableVisualStyles()\n self.native.SetCompatibleTextRenderingDefault(False)\n\n self.interface.commands.add(\n toga.Command(None, 'About ' + self.interface.name, 
group=toga.Group.HELP),\n toga.Command(None, 'Preferences', group=toga.Group.FILE),\n # Quit should always be the last item, in a section on it's own\n toga.Command(lambda s: self.exit(), 'Exit ' + self.interface.name, shortcut='q', group=toga.Group.FILE,\n section=sys.maxsize),\n toga.Command(None, 'Visit homepage', group=toga.Group.HELP)\n )\n self._create_app_commands()\n\n # Call user code to populate the main window\n self.interface.startup()\n self._menu_items = {}\n self.create_menus()\n self.interface.main_window._impl.native.Icon = \\\n self.interface.icon.bind(self.interface.factory).native\n\n def create_menus(self):\n toga.Group.FILE.order = 0\n # Only create the menu if the menu item index has been created.\n if hasattr(self, '_menu_items'):\n menubar = WinForms.MenuStrip()\n submenu = None\n for cmd in self.interface.commands:\n if cmd == toga.GROUP_BREAK:\n menubar.Items.Add(submenu)\n submenu = None\n elif cmd == toga.SECTION_BREAK:\n submenu.DropDownItems.Add('-')\n else:\n if submenu is None:\n submenu = WinForms.ToolStripMenuItem(cmd.group.label)\n item = WinForms.ToolStripMenuItem(cmd.label)\n if cmd.action:\n item.Click += add_handler(cmd)\n else:\n item.Enabled = False\n cmd._widgets.append(item)\n self._menu_items[item] = cmd\n submenu.DropDownItems.Add(item)\n if submenu:\n menubar.Items.Add(submenu)\n self.interface.main_window._impl.native.Controls.Add(menubar)\n self.interface.main_window._impl.native.MainMenuStrip = menubar\n self.interface.main_window.content.refresh()\n\n def _create_app_commands(self):\n # No extra menus\n pass\n\n def open_document(self, fileURL):\n '''Add a new document to this app.'''\n print(\"STUB: If you want to handle opening documents, implement App.open_document(fileURL)\")\n\n def run_app(self):\n self.create()\n self.native.Run(self.interface.main_window._impl.native)\n\n def main_loop(self):\n thread = Threading.Thread(Threading.ThreadStart(self.run_app))\n thread.SetApartmentState(Threading.ApartmentState.STA)\n thread.Start()\n thread.Join()\n\n def exit(self):\n self.native.Exit()\n\n def set_on_exit(self, value):\n pass\n\n def current_window(self):\n self.interface.factory.not_implemented('App.current_window()')\n\n def enter_full_screen(self, windows):\n self.interface.factory.not_implemented('App.enter_full_screen()')\n\n def exit_full_screen(self, windows):\n self.interface.factory.not_implemented('App.exit_full_screen()')\n\n def set_cursor(self, value):\n self.interface.factory.not_implemented('App.set_cursor()')\n\n def show_cursor(self):\n self.interface.factory.not_implemented('App.show_cursor()')\n\n def hide_cursor(self):\n self.interface.factory.not_implemented('App.hide_cursor()')\n\n\nclass DocumentApp(App):\n def _create_app_commands(self):\n self.interface.commands.add(\n toga.Command(\n lambda w: self.open_file,\n label='Open...',\n shortcut='o',\n group=toga.Group.FILE,\n section=0\n ),\n )\n\n def open_document(self, fileURL):\n \"\"\"Open a new document in this app.\n\n Args:\n fileURL (str): The URL/path to the file to add as a document.\n \"\"\"\n self.interface.factory.not_implemented('DocumentApp.open_document()')\n", "path": "src/winforms/toga_winforms/app.py"}]} | 2,461 | 587 |
gh_patches_debug_4567 | rasdani/github-patches | git_diff | mozmeao__snippets-service-813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError when Slack request.post
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 258-264: Body ('γιώργος') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
Sentry Issue: https://sentry.prod.mozaws.net/operations/snippets/issues/4891896/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snippets/base/slack.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.template.loader import render_to_string
5
6 import requests
7 from raven.contrib.django.models import client as sentry_client
8
9 logger = logging.getLogger(__name__)
10
11
12 def send_slack(template_name, snippet):
13 data = render_to_string('slack/{}.jinja.json'.format(template_name),
14 context={'snippet': snippet})
15 _send_slack(data)
16
17
18 def _send_slack(data):
19 if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):
20 logger.info('Slack is not enabled.')
21 return
22
23 try:
24 response = requests.post(settings.SLACK_WEBHOOK, data=data,
25 headers={'Content-Type': 'application/json'},
26 timeout=4)
27 response.raise_for_status()
28 except requests.exceptions.RequestException:
29 sentry_client.captureException()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/snippets/base/slack.py b/snippets/base/slack.py
--- a/snippets/base/slack.py
+++ b/snippets/base/slack.py
@@ -21,7 +21,7 @@
return
try:
- response = requests.post(settings.SLACK_WEBHOOK, data=data,
+ response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),
headers={'Content-Type': 'application/json'},
timeout=4)
response.raise_for_status()
| {"golden_diff": "diff --git a/snippets/base/slack.py b/snippets/base/slack.py\n--- a/snippets/base/slack.py\n+++ b/snippets/base/slack.py\n@@ -21,7 +21,7 @@\n return\n \n try:\n- response = requests.post(settings.SLACK_WEBHOOK, data=data,\n+ response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n", "issue": "UnicodeEncodeError when Slack request.post\nUnicodeEncodeError: 'latin-1' codec can't encode characters in position 258-264: Body ('\u03b3\u03b9\u03ce\u03c1\u03b3\u03bf\u03c2') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.\r\n\r\nSentry Issue: https://sentry.prod.mozaws.net/operations/snippets/issues/4891896/\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.template.loader import render_to_string\n\nimport requests\nfrom raven.contrib.django.models import client as sentry_client\n\nlogger = logging.getLogger(__name__)\n\n\ndef send_slack(template_name, snippet):\n data = render_to_string('slack/{}.jinja.json'.format(template_name),\n context={'snippet': snippet})\n _send_slack(data)\n\n\ndef _send_slack(data):\n if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):\n logger.info('Slack is not enabled.')\n return\n\n try:\n response = requests.post(settings.SLACK_WEBHOOK, data=data,\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n except requests.exceptions.RequestException:\n sentry_client.captureException()\n", "path": "snippets/base/slack.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.template.loader import render_to_string\n\nimport requests\nfrom raven.contrib.django.models import client as sentry_client\n\nlogger = logging.getLogger(__name__)\n\n\ndef send_slack(template_name, snippet):\n data = render_to_string('slack/{}.jinja.json'.format(template_name),\n context={'snippet': snippet})\n _send_slack(data)\n\n\ndef _send_slack(data):\n if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):\n logger.info('Slack is not enabled.')\n return\n\n try:\n response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n except requests.exceptions.RequestException:\n sentry_client.captureException()\n", "path": "snippets/base/slack.py"}]} | 589 | 112 |
gh_patches_debug_37813 | rasdani/github-patches | git_diff | python-discord__bot-811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add pep8-naming to linting toolchain
We've decided to use this plugin in our linting toolchain, which allows us to test things like variable name casing (no camelCase please!) and whether classes and methods use `cls` and `self`.
## Implementation
Here are the errors it can catch:
code | sample message
-- | --
N801 | class names should use CapWords convention
N802 | function name should be lowercase
N803 | argument name should be lowercase
N804 | first argument of a classmethod should be named 'cls'
N805 | first argument of a method should be named 'self'
N806 | variable in function should be lowercase
N807 | function name should not start and end with '__'
|
N811 | constant imported as non constant
N812 | lowercase imported as non lowercase
N813 | camelcase imported as lowercase
N814 | camelcase imported as constant
N815 | mixedCase variable in class scope
N816 | mixedCase variable in global scope
We have not decided exactly which of these we should ignore, so it will be up to the person who handles this issue to discuss this with the core devs and come to a consensus on whether we should ignore some of these. It will probably be easier to see which ones here are not necessary after actually running a lint with this plugin on our current codebase.
The person implementing this will also need to fix all the linting errors that occur once this has been added to the toolchain.
## Steps
- [ ] Add the plugin to the toolchain
- [ ] Run it and look through the errors that are created. Are any of them out of touch with our style guidelines, and should we ignore some of these rules? Discuss with the core developers if you think so.
- [ ] Fix all the linting problems.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/utils/__init__.py`
Content:
```
1 from abc import ABCMeta
2 from typing import Any, Hashable
3
4 from discord.ext.commands import CogMeta
5
6
7 class CogABCMeta(CogMeta, ABCMeta):
8 """Metaclass for ABCs meant to be implemented as Cogs."""
9
10 pass
11
12
13 class CaseInsensitiveDict(dict):
14 """
15 We found this class on StackOverflow. Thanks to m000 for writing it!
16
17 https://stackoverflow.com/a/32888599/4022104
18 """
19
20 @classmethod
21 def _k(cls, key: Hashable) -> Hashable:
22 """Return lowered key if a string-like is passed, otherwise pass key straight through."""
23 return key.lower() if isinstance(key, str) else key
24
25 def __init__(self, *args, **kwargs):
26 super(CaseInsensitiveDict, self).__init__(*args, **kwargs)
27 self._convert_keys()
28
29 def __getitem__(self, key: Hashable) -> Any:
30 """Case insensitive __setitem__."""
31 return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))
32
33 def __setitem__(self, key: Hashable, value: Any):
34 """Case insensitive __setitem__."""
35 super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)
36
37 def __delitem__(self, key: Hashable) -> Any:
38 """Case insensitive __delitem__."""
39 return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))
40
41 def __contains__(self, key: Hashable) -> bool:
42 """Case insensitive __contains__."""
43 return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))
44
45 def pop(self, key: Hashable, *args, **kwargs) -> Any:
46 """Case insensitive pop."""
47 return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)
48
49 def get(self, key: Hashable, *args, **kwargs) -> Any:
50 """Case insensitive get."""
51 return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)
52
53 def setdefault(self, key: Hashable, *args, **kwargs) -> Any:
54 """Case insensitive setdefault."""
55 return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)
56
57 def update(self, E: Any = None, **F) -> None:
58 """Case insensitive update."""
59 super(CaseInsensitiveDict, self).update(self.__class__(E))
60 super(CaseInsensitiveDict, self).update(self.__class__(**F))
61
62 def _convert_keys(self) -> None:
63 """Helper method to lowercase all existing string-like keys."""
64 for k in list(self.keys()):
65 v = super(CaseInsensitiveDict, self).pop(k)
66 self.__setitem__(k, v)
67
```
Path: `bot/cogs/free.py`
Content:
```
1 import logging
2 from datetime import datetime
3 from operator import itemgetter
4
5 from discord import Colour, Embed, Member, utils
6 from discord.ext.commands import Cog, Context, command
7
8 from bot.bot import Bot
9 from bot.constants import Categories, Channels, Free, STAFF_ROLES
10 from bot.decorators import redirect_output
11
12 log = logging.getLogger(__name__)
13
14 TIMEOUT = Free.activity_timeout
15 RATE = Free.cooldown_rate
16 PER = Free.cooldown_per
17
18
19 class Free(Cog):
20 """Tries to figure out which help channels are free."""
21
22 PYTHON_HELP_ID = Categories.python_help
23
24 @command(name="free", aliases=('f',))
25 @redirect_output(destination_channel=Channels.bot_commands, bypass_roles=STAFF_ROLES)
26 async def free(self, ctx: Context, user: Member = None, seek: int = 2) -> None:
27 """
28 Lists free help channels by likeliness of availability.
29
30 seek is used only when this command is invoked in a help channel.
31 You cannot override seek without mentioning a user first.
32
33 When seek is 2, we are avoiding considering the last active message
34 in a channel to be the one that invoked this command.
35
36 When seek is 3 or more, a user has been mentioned on the assumption
37 that they asked if the channel is free or they asked their question
38 in an active channel, and we want the message before that happened.
39 """
40 free_channels = []
41 python_help = utils.get(ctx.guild.categories, id=self.PYTHON_HELP_ID)
42
43 if user is not None and seek == 2:
44 seek = 3
45 elif not 0 < seek < 10:
46 seek = 3
47
48 # Iterate through all the help channels
49 # to check latest activity
50 for channel in python_help.channels:
51 # Seek further back in the help channel
52 # the command was invoked in
53 if channel.id == ctx.channel.id:
54 messages = await channel.history(limit=seek).flatten()
55 msg = messages[seek - 1]
56 # Otherwise get last message
57 else:
58 msg = await channel.history(limit=1).next() # noqa (False positive)
59
60 inactive = (datetime.utcnow() - msg.created_at).seconds
61 if inactive > TIMEOUT:
62 free_channels.append((inactive, channel))
63
64 embed = Embed()
65 embed.colour = Colour.blurple()
66 embed.title = "**Looking for a free help channel?**"
67
68 if user is not None:
69 embed.description = f"**Hey {user.mention}!**\n\n"
70 else:
71 embed.description = ""
72
73 # Display all potentially inactive channels
74 # in descending order of inactivity
75 if free_channels:
76 # Sort channels in descending order by seconds
77 # Get position in list, inactivity, and channel object
78 # For each channel, add to embed.description
79 sorted_channels = sorted(free_channels, key=itemgetter(0), reverse=True)
80
81 for (inactive, channel) in sorted_channels[:3]:
82 minutes, seconds = divmod(inactive, 60)
83 if minutes > 59:
84 hours, minutes = divmod(minutes, 60)
85 embed.description += f"{channel.mention} **{hours}h {minutes}m {seconds}s** inactive\n"
86 else:
87 embed.description += f"{channel.mention} **{minutes}m {seconds}s** inactive\n"
88
89 embed.set_footer(text="Please confirm these channels are free before posting")
90 else:
91 embed.description = (
92 "Doesn't look like any channels are available right now. "
93 "You're welcome to check for yourself to be sure. "
94 "If all channels are truly busy, please be patient "
95 "as one will likely be available soon."
96 )
97
98 await ctx.send(embed=embed)
99
100
101 def setup(bot: Bot) -> None:
102 """Load the Free cog."""
103 bot.add_cog(Free())
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/free.py b/bot/cogs/free.py
--- a/bot/cogs/free.py
+++ b/bot/cogs/free.py
@@ -55,7 +55,7 @@
msg = messages[seek - 1]
# Otherwise get last message
else:
- msg = await channel.history(limit=1).next() # noqa (False positive)
+ msg = await channel.history(limit=1).next() # noqa: B305
inactive = (datetime.utcnow() - msg.created_at).seconds
if inactive > TIMEOUT:
diff --git a/bot/utils/__init__.py b/bot/utils/__init__.py
--- a/bot/utils/__init__.py
+++ b/bot/utils/__init__.py
@@ -1,5 +1,4 @@
from abc import ABCMeta
-from typing import Any, Hashable
from discord.ext.commands import CogMeta
@@ -8,59 +7,3 @@
"""Metaclass for ABCs meant to be implemented as Cogs."""
pass
-
-
-class CaseInsensitiveDict(dict):
- """
- We found this class on StackOverflow. Thanks to m000 for writing it!
-
- https://stackoverflow.com/a/32888599/4022104
- """
-
- @classmethod
- def _k(cls, key: Hashable) -> Hashable:
- """Return lowered key if a string-like is passed, otherwise pass key straight through."""
- return key.lower() if isinstance(key, str) else key
-
- def __init__(self, *args, **kwargs):
- super(CaseInsensitiveDict, self).__init__(*args, **kwargs)
- self._convert_keys()
-
- def __getitem__(self, key: Hashable) -> Any:
- """Case insensitive __setitem__."""
- return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))
-
- def __setitem__(self, key: Hashable, value: Any):
- """Case insensitive __setitem__."""
- super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)
-
- def __delitem__(self, key: Hashable) -> Any:
- """Case insensitive __delitem__."""
- return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))
-
- def __contains__(self, key: Hashable) -> bool:
- """Case insensitive __contains__."""
- return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))
-
- def pop(self, key: Hashable, *args, **kwargs) -> Any:
- """Case insensitive pop."""
- return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)
-
- def get(self, key: Hashable, *args, **kwargs) -> Any:
- """Case insensitive get."""
- return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)
-
- def setdefault(self, key: Hashable, *args, **kwargs) -> Any:
- """Case insensitive setdefault."""
- return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)
-
- def update(self, E: Any = None, **F) -> None:
- """Case insensitive update."""
- super(CaseInsensitiveDict, self).update(self.__class__(E))
- super(CaseInsensitiveDict, self).update(self.__class__(**F))
-
- def _convert_keys(self) -> None:
- """Helper method to lowercase all existing string-like keys."""
- for k in list(self.keys()):
- v = super(CaseInsensitiveDict, self).pop(k)
- self.__setitem__(k, v)
| {"golden_diff": "diff --git a/bot/cogs/free.py b/bot/cogs/free.py\n--- a/bot/cogs/free.py\n+++ b/bot/cogs/free.py\n@@ -55,7 +55,7 @@\n msg = messages[seek - 1]\n # Otherwise get last message\n else:\n- msg = await channel.history(limit=1).next() # noqa (False positive)\n+ msg = await channel.history(limit=1).next() # noqa: B305\n \n inactive = (datetime.utcnow() - msg.created_at).seconds\n if inactive > TIMEOUT:\ndiff --git a/bot/utils/__init__.py b/bot/utils/__init__.py\n--- a/bot/utils/__init__.py\n+++ b/bot/utils/__init__.py\n@@ -1,5 +1,4 @@\n from abc import ABCMeta\n-from typing import Any, Hashable\n \n from discord.ext.commands import CogMeta\n \n@@ -8,59 +7,3 @@\n \"\"\"Metaclass for ABCs meant to be implemented as Cogs.\"\"\"\n \n pass\n-\n-\n-class CaseInsensitiveDict(dict):\n- \"\"\"\n- We found this class on StackOverflow. Thanks to m000 for writing it!\n-\n- https://stackoverflow.com/a/32888599/4022104\n- \"\"\"\n-\n- @classmethod\n- def _k(cls, key: Hashable) -> Hashable:\n- \"\"\"Return lowered key if a string-like is passed, otherwise pass key straight through.\"\"\"\n- return key.lower() if isinstance(key, str) else key\n-\n- def __init__(self, *args, **kwargs):\n- super(CaseInsensitiveDict, self).__init__(*args, **kwargs)\n- self._convert_keys()\n-\n- def __getitem__(self, key: Hashable) -> Any:\n- \"\"\"Case insensitive __setitem__.\"\"\"\n- return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))\n-\n- def __setitem__(self, key: Hashable, value: Any):\n- \"\"\"Case insensitive __setitem__.\"\"\"\n- super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)\n-\n- def __delitem__(self, key: Hashable) -> Any:\n- \"\"\"Case insensitive __delitem__.\"\"\"\n- return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))\n-\n- def __contains__(self, key: Hashable) -> bool:\n- \"\"\"Case insensitive __contains__.\"\"\"\n- return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))\n-\n- def pop(self, key: Hashable, *args, **kwargs) -> Any:\n- \"\"\"Case insensitive pop.\"\"\"\n- return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)\n-\n- def get(self, key: Hashable, *args, **kwargs) -> Any:\n- \"\"\"Case insensitive get.\"\"\"\n- return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)\n-\n- def setdefault(self, key: Hashable, *args, **kwargs) -> Any:\n- \"\"\"Case insensitive setdefault.\"\"\"\n- return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)\n-\n- def update(self, E: Any = None, **F) -> None:\n- \"\"\"Case insensitive update.\"\"\"\n- super(CaseInsensitiveDict, self).update(self.__class__(E))\n- super(CaseInsensitiveDict, self).update(self.__class__(**F))\n-\n- def _convert_keys(self) -> None:\n- \"\"\"Helper method to lowercase all existing string-like keys.\"\"\"\n- for k in list(self.keys()):\n- v = super(CaseInsensitiveDict, self).pop(k)\n- self.__setitem__(k, v)\n", "issue": "Add pep8-naming to linting toolchain\nWe've decided to use this plugin in our linting toolchain, which allows us to test things like variable name casing (no camelCase please!) 
and whether classes and methods use `cls` and `self`.\r\n\r\n## Implementation\r\nHere are the errors it can catch:\r\n\r\ncode | sample message\r\n-- | --\r\nN801 | class names should use CapWords convention\r\nN802 | function name should be lowercase\r\nN803 | argument name should be lowercase\r\nN804 | first argument of a classmethod should be named 'cls'\r\nN805 | first argument of a method should be named 'self'\r\nN806 | variable in function should be lowercase\r\nN807 | function name should not start and end with '__'\r\n\u00a0 | \u00a0\r\nN811 | constant imported as non constant\r\nN812 | lowercase imported as non lowercase\r\nN813 | camelcase imported as lowercase\r\nN814 | camelcase imported as constant\r\nN815 | mixedCase variable in class scope\r\nN816 | mixedCase variable in global scope\r\n\r\nWe have not decided exactly which of these we should ignore, so it will be up to the person who handles this issue to discuss this with the core devs and come to a consensus on whether we should ignore some of these. It will probably be easier to see which ones here are not necessary after actually running a lint with this plugin on our current codebase.\r\n\r\nThe person implementing this will also need to fix all the linting errors that occur once this has been added to the toolchain.\r\n\r\n## Steps\r\n- [ ] Add the plugin to the toolchain\r\n- [ ] Run it and look through the errors that are created. Are any of them out of touch with our style guidelines, and should we ignore some of these rules? Discuss with the core developers if you think so.\r\n- [ ] Fix all the linting problems.\n", "before_files": [{"content": "from abc import ABCMeta\nfrom typing import Any, Hashable\n\nfrom discord.ext.commands import CogMeta\n\n\nclass CogABCMeta(CogMeta, ABCMeta):\n \"\"\"Metaclass for ABCs meant to be implemented as Cogs.\"\"\"\n\n pass\n\n\nclass CaseInsensitiveDict(dict):\n \"\"\"\n We found this class on StackOverflow. 
Thanks to m000 for writing it!\n\n https://stackoverflow.com/a/32888599/4022104\n \"\"\"\n\n @classmethod\n def _k(cls, key: Hashable) -> Hashable:\n \"\"\"Return lowered key if a string-like is passed, otherwise pass key straight through.\"\"\"\n return key.lower() if isinstance(key, str) else key\n\n def __init__(self, *args, **kwargs):\n super(CaseInsensitiveDict, self).__init__(*args, **kwargs)\n self._convert_keys()\n\n def __getitem__(self, key: Hashable) -> Any:\n \"\"\"Case insensitive __setitem__.\"\"\"\n return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))\n\n def __setitem__(self, key: Hashable, value: Any):\n \"\"\"Case insensitive __setitem__.\"\"\"\n super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)\n\n def __delitem__(self, key: Hashable) -> Any:\n \"\"\"Case insensitive __delitem__.\"\"\"\n return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))\n\n def __contains__(self, key: Hashable) -> bool:\n \"\"\"Case insensitive __contains__.\"\"\"\n return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))\n\n def pop(self, key: Hashable, *args, **kwargs) -> Any:\n \"\"\"Case insensitive pop.\"\"\"\n return super(CaseInsensitiveDict, self).pop(self.__class__._k(key), *args, **kwargs)\n\n def get(self, key: Hashable, *args, **kwargs) -> Any:\n \"\"\"Case insensitive get.\"\"\"\n return super(CaseInsensitiveDict, self).get(self.__class__._k(key), *args, **kwargs)\n\n def setdefault(self, key: Hashable, *args, **kwargs) -> Any:\n \"\"\"Case insensitive setdefault.\"\"\"\n return super(CaseInsensitiveDict, self).setdefault(self.__class__._k(key), *args, **kwargs)\n\n def update(self, E: Any = None, **F) -> None:\n \"\"\"Case insensitive update.\"\"\"\n super(CaseInsensitiveDict, self).update(self.__class__(E))\n super(CaseInsensitiveDict, self).update(self.__class__(**F))\n\n def _convert_keys(self) -> None:\n \"\"\"Helper method to lowercase all existing string-like keys.\"\"\"\n for k in list(self.keys()):\n v = super(CaseInsensitiveDict, self).pop(k)\n self.__setitem__(k, v)\n", "path": "bot/utils/__init__.py"}, {"content": "import logging\nfrom datetime import datetime\nfrom operator import itemgetter\n\nfrom discord import Colour, Embed, Member, utils\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.constants import Categories, Channels, Free, STAFF_ROLES\nfrom bot.decorators import redirect_output\n\nlog = logging.getLogger(__name__)\n\nTIMEOUT = Free.activity_timeout\nRATE = Free.cooldown_rate\nPER = Free.cooldown_per\n\n\nclass Free(Cog):\n \"\"\"Tries to figure out which help channels are free.\"\"\"\n\n PYTHON_HELP_ID = Categories.python_help\n\n @command(name=\"free\", aliases=('f',))\n @redirect_output(destination_channel=Channels.bot_commands, bypass_roles=STAFF_ROLES)\n async def free(self, ctx: Context, user: Member = None, seek: int = 2) -> None:\n \"\"\"\n Lists free help channels by likeliness of availability.\n\n seek is used only when this command is invoked in a help channel.\n You cannot override seek without mentioning a user first.\n\n When seek is 2, we are avoiding considering the last active message\n in a channel to be the one that invoked this command.\n\n When seek is 3 or more, a user has been mentioned on the assumption\n that they asked if the channel is free or they asked their question\n in an active channel, and we want the message before that happened.\n \"\"\"\n free_channels = []\n python_help = 
utils.get(ctx.guild.categories, id=self.PYTHON_HELP_ID)\n\n if user is not None and seek == 2:\n seek = 3\n elif not 0 < seek < 10:\n seek = 3\n\n # Iterate through all the help channels\n # to check latest activity\n for channel in python_help.channels:\n # Seek further back in the help channel\n # the command was invoked in\n if channel.id == ctx.channel.id:\n messages = await channel.history(limit=seek).flatten()\n msg = messages[seek - 1]\n # Otherwise get last message\n else:\n msg = await channel.history(limit=1).next() # noqa (False positive)\n\n inactive = (datetime.utcnow() - msg.created_at).seconds\n if inactive > TIMEOUT:\n free_channels.append((inactive, channel))\n\n embed = Embed()\n embed.colour = Colour.blurple()\n embed.title = \"**Looking for a free help channel?**\"\n\n if user is not None:\n embed.description = f\"**Hey {user.mention}!**\\n\\n\"\n else:\n embed.description = \"\"\n\n # Display all potentially inactive channels\n # in descending order of inactivity\n if free_channels:\n # Sort channels in descending order by seconds\n # Get position in list, inactivity, and channel object\n # For each channel, add to embed.description\n sorted_channels = sorted(free_channels, key=itemgetter(0), reverse=True)\n\n for (inactive, channel) in sorted_channels[:3]:\n minutes, seconds = divmod(inactive, 60)\n if minutes > 59:\n hours, minutes = divmod(minutes, 60)\n embed.description += f\"{channel.mention} **{hours}h {minutes}m {seconds}s** inactive\\n\"\n else:\n embed.description += f\"{channel.mention} **{minutes}m {seconds}s** inactive\\n\"\n\n embed.set_footer(text=\"Please confirm these channels are free before posting\")\n else:\n embed.description = (\n \"Doesn't look like any channels are available right now. \"\n \"You're welcome to check for yourself to be sure. 
\"\n \"If all channels are truly busy, please be patient \"\n \"as one will likely be available soon.\"\n )\n\n await ctx.send(embed=embed)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Free cog.\"\"\"\n bot.add_cog(Free())\n", "path": "bot/cogs/free.py"}], "after_files": [{"content": "from abc import ABCMeta\n\nfrom discord.ext.commands import CogMeta\n\n\nclass CogABCMeta(CogMeta, ABCMeta):\n \"\"\"Metaclass for ABCs meant to be implemented as Cogs.\"\"\"\n\n pass\n", "path": "bot/utils/__init__.py"}, {"content": "import logging\nfrom datetime import datetime\nfrom operator import itemgetter\n\nfrom discord import Colour, Embed, Member, utils\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.constants import Categories, Channels, Free, STAFF_ROLES\nfrom bot.decorators import redirect_output\n\nlog = logging.getLogger(__name__)\n\nTIMEOUT = Free.activity_timeout\nRATE = Free.cooldown_rate\nPER = Free.cooldown_per\n\n\nclass Free(Cog):\n \"\"\"Tries to figure out which help channels are free.\"\"\"\n\n PYTHON_HELP_ID = Categories.python_help\n\n @command(name=\"free\", aliases=('f',))\n @redirect_output(destination_channel=Channels.bot_commands, bypass_roles=STAFF_ROLES)\n async def free(self, ctx: Context, user: Member = None, seek: int = 2) -> None:\n \"\"\"\n Lists free help channels by likeliness of availability.\n\n seek is used only when this command is invoked in a help channel.\n You cannot override seek without mentioning a user first.\n\n When seek is 2, we are avoiding considering the last active message\n in a channel to be the one that invoked this command.\n\n When seek is 3 or more, a user has been mentioned on the assumption\n that they asked if the channel is free or they asked their question\n in an active channel, and we want the message before that happened.\n \"\"\"\n free_channels = []\n python_help = utils.get(ctx.guild.categories, id=self.PYTHON_HELP_ID)\n\n if user is not None and seek == 2:\n seek = 3\n elif not 0 < seek < 10:\n seek = 3\n\n # Iterate through all the help channels\n # to check latest activity\n for channel in python_help.channels:\n # Seek further back in the help channel\n # the command was invoked in\n if channel.id == ctx.channel.id:\n messages = await channel.history(limit=seek).flatten()\n msg = messages[seek - 1]\n # Otherwise get last message\n else:\n msg = await channel.history(limit=1).next() # noqa: B305\n\n inactive = (datetime.utcnow() - msg.created_at).seconds\n if inactive > TIMEOUT:\n free_channels.append((inactive, channel))\n\n embed = Embed()\n embed.colour = Colour.blurple()\n embed.title = \"**Looking for a free help channel?**\"\n\n if user is not None:\n embed.description = f\"**Hey {user.mention}!**\\n\\n\"\n else:\n embed.description = \"\"\n\n # Display all potentially inactive channels\n # in descending order of inactivity\n if free_channels:\n # Sort channels in descending order by seconds\n # Get position in list, inactivity, and channel object\n # For each channel, add to embed.description\n sorted_channels = sorted(free_channels, key=itemgetter(0), reverse=True)\n\n for (inactive, channel) in sorted_channels[:3]:\n minutes, seconds = divmod(inactive, 60)\n if minutes > 59:\n hours, minutes = divmod(minutes, 60)\n embed.description += f\"{channel.mention} **{hours}h {minutes}m {seconds}s** inactive\\n\"\n else:\n embed.description += f\"{channel.mention} **{minutes}m {seconds}s** inactive\\n\"\n\n embed.set_footer(text=\"Please confirm these channels are free before 
posting\")\n else:\n embed.description = (\n \"Doesn't look like any channels are available right now. \"\n \"You're welcome to check for yourself to be sure. \"\n \"If all channels are truly busy, please be patient \"\n \"as one will likely be available soon.\"\n )\n\n await ctx.send(embed=embed)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Free cog.\"\"\"\n bot.add_cog(Free())\n", "path": "bot/cogs/free.py"}]} | 2,520 | 871 |
gh_patches_debug_11352 | rasdani/github-patches | git_diff | netket__netket-1487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in fermi-hubbard example
Hello, there.
It seems that the creation and annihilation operators are defined the wrong way round (swapped) in the Fermi-Hubbard example, so the results are wrong. [link](https://github.com/netket/netket/blob/26372e3527ae12861cc7953c24654614e8679d87/Examples/Fermions/fermi_hubbard.py#L25C1-L26)
--- END ISSUE ---
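For reference, here is a minimal sketch of the convention the report is asking for, using the `netket.experimental` fermion operator API that appears in the example file below; the swap of `create`/`destroy` inside the two helpers is exactly the reported bug, and the 4-site Hilbert space here is only a placeholder:
```python
from netket import experimental as nkx

# Placeholder Hilbert space: 4 sites, spin-1/2, two up and two down fermions.
hi = nkx.hilbert.SpinOrbitalFermions(4, s=1 / 2, n_fermions=(2, 2))


def c(site, sz):
    # annihilation operator c_{site, sz} -> should wrap `destroy`
    return nkx.operator.fermion.destroy(hi, site, sz=sz)


def cdag(site, sz):
    # creation operator c^dagger_{site, sz} -> should wrap `create`
    return nkx.operator.fermion.create(hi, site, sz=sz)


# The hopping term -t * cdag(u, sz) * c(v, sz) then has the intended meaning.
```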
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Examples/Fermions/fermi_hubbard.py`
Content:
```
1 import netket as nk
2 import numpy as np
3 import matplotlib.pyplot as plt
4 import json
5
6 from netket import experimental as nkx
7
8 L = 2 # take a 2x2 lattice
9 D = 2
10 t = 1 # tunneling/hopping
11 U = 0.01 # coulomb
12
13 # create the graph our fermions can hop on
14 g = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)
15 n_sites = g.n_nodes
16
17 # create a hilbert space with 2 up and 2 down spins
18 hi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))
19
20
21 # create an operator representing fermi hubbard interactions
22 # -t (i^ j + h.c.) + U (i^ i j^ j)
23 # we will create a helper function to abbreviate the creation, destruction and number operators
24 # each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples
25 def c(site, sz):
26 return nkx.operator.fermion.create(hi, site, sz=sz)
27
28
29 def cdag(site, sz):
30 return nkx.operator.fermion.destroy(hi, site, sz=sz)
31
32
33 def nc(site, sz):
34 return nkx.operator.fermion.number(hi, site, sz=sz)
35
36
37 up = +1 / 2
38 down = -1 / 2
39 ham = 0.0
40 for sz in (up, down):
41 for u, v in g.edges():
42 ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)
43 for u in g.nodes():
44 ham += U * nc(u, up) * nc(u, down)
45
46 print("Hamiltonian =", ham.operator_string())
47
48 # metropolis exchange moves fermions around according to a graph
49 # the physical graph has LxL vertices, but the computational basis defined by the
50 # hilbert space contains (2s+1)*L*L occupation numbers
51 # by taking a disjoint copy of the lattice, we can
52 # move the fermions around independently for both spins
53 # and therefore conserve the number of fermions with up and down spin
54
55 # g.n_nodes == L*L --> disj_graph == 2*L*L
56 disj_graph = nk.graph.disjoint_union(g, g)
57 sa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)
58
59 # since the hilbert basis is a set of occupation numbers, we can take a general RBM
60 # we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex
61 ma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)
62 vs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)
63
64 # we will use sgd with Stochastic Reconfiguration
65 opt = nk.optimizer.Sgd(learning_rate=0.01)
66 sr = nk.optimizer.SR(diag_shift=0.1)
67
68 gs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)
69
70 # now run the optimization
71 # first step will take longer in order to compile
72 exp_name = "fermions_test"
73 gs.run(500, out=exp_name)
74
75 ############## plot #################
76
77 ed_energies = np.linalg.eigvalsh(ham.to_dense())
78
79 with open(f"{exp_name}.log", "r") as f:
80 data = json.load(f)
81
82 x = data["Energy"]["iters"]
83 y = data["Energy"]["Mean"]["real"]
84
85 # plot the energy levels
86 plt.axhline(ed_energies[0], color="red", label="E0")
87 for e in ed_energies[1:]:
88 plt.axhline(e, color="black")
89 plt.plot(x, y, color="red", label="VMC")
90 plt.xlabel("step")
91 plt.ylabel("E")
92 plt.show()
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Examples/Fermions/fermi_hubbard.py b/Examples/Fermions/fermi_hubbard.py
--- a/Examples/Fermions/fermi_hubbard.py
+++ b/Examples/Fermions/fermi_hubbard.py
@@ -23,11 +23,11 @@
# we will create a helper function to abbreviate the creation, destruction and number operators
# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples
def c(site, sz):
- return nkx.operator.fermion.create(hi, site, sz=sz)
+ return nkx.operator.fermion.destroy(hi, site, sz=sz)
def cdag(site, sz):
- return nkx.operator.fermion.destroy(hi, site, sz=sz)
+ return nkx.operator.fermion.create(hi, site, sz=sz)
def nc(site, sz):
| {"golden_diff": "diff --git a/Examples/Fermions/fermi_hubbard.py b/Examples/Fermions/fermi_hubbard.py\n--- a/Examples/Fermions/fermi_hubbard.py\n+++ b/Examples/Fermions/fermi_hubbard.py\n@@ -23,11 +23,11 @@\n # we will create a helper function to abbreviate the creation, destruction and number operators\n # each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\n def c(site, sz):\n- return nkx.operator.fermion.create(hi, site, sz=sz)\n+ return nkx.operator.fermion.destroy(hi, site, sz=sz)\n \n \n def cdag(site, sz):\n- return nkx.operator.fermion.destroy(hi, site, sz=sz)\n+ return nkx.operator.fermion.create(hi, site, sz=sz)\n \n \n def nc(site, sz):\n", "issue": "Error in fermi-hubbard example\nHello, there.\r\n\r\nIt seems that the definition of create and annihilation operators are defined reversely in fermi-hubbard model, and results are wrong. [link](https://github.com/netket/netket/blob/26372e3527ae12861cc7953c24654614e8679d87/Examples/Fermions/fermi_hubbard.py#L25C1-L26)\n", "before_files": [{"content": "import netket as nk\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\n\nfrom netket import experimental as nkx\n\nL = 2 # take a 2x2 lattice\nD = 2\nt = 1 # tunneling/hopping\nU = 0.01 # coulomb\n\n# create the graph our fermions can hop on\ng = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)\nn_sites = g.n_nodes\n\n# create a hilbert space with 2 up and 2 down spins\nhi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))\n\n\n# create an operator representing fermi hubbard interactions\n# -t (i^ j + h.c.) + U (i^ i j^ j)\n# we will create a helper function to abbreviate the creation, destruction and number operators\n# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\ndef c(site, sz):\n return nkx.operator.fermion.create(hi, site, sz=sz)\n\n\ndef cdag(site, sz):\n return nkx.operator.fermion.destroy(hi, site, sz=sz)\n\n\ndef nc(site, sz):\n return nkx.operator.fermion.number(hi, site, sz=sz)\n\n\nup = +1 / 2\ndown = -1 / 2\nham = 0.0\nfor sz in (up, down):\n for u, v in g.edges():\n ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)\nfor u in g.nodes():\n ham += U * nc(u, up) * nc(u, down)\n\nprint(\"Hamiltonian =\", ham.operator_string())\n\n# metropolis exchange moves fermions around according to a graph\n# the physical graph has LxL vertices, but the computational basis defined by the\n# hilbert space contains (2s+1)*L*L occupation numbers\n# by taking a disjoint copy of the lattice, we can\n# move the fermions around independently for both spins\n# and therefore conserve the number of fermions with up and down spin\n\n# g.n_nodes == L*L --> disj_graph == 2*L*L\ndisj_graph = nk.graph.disjoint_union(g, g)\nsa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)\n\n# since the hilbert basis is a set of occupation numbers, we can take a general RBM\n# we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex\nma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)\nvs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)\n\n# we will use sgd with Stochastic Reconfiguration\nopt = nk.optimizer.Sgd(learning_rate=0.01)\nsr = nk.optimizer.SR(diag_shift=0.1)\n\ngs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)\n\n# now run the optimization\n# first step will take longer in 
order to compile\nexp_name = \"fermions_test\"\ngs.run(500, out=exp_name)\n\n############## plot #################\n\ned_energies = np.linalg.eigvalsh(ham.to_dense())\n\nwith open(f\"{exp_name}.log\", \"r\") as f:\n data = json.load(f)\n\nx = data[\"Energy\"][\"iters\"]\ny = data[\"Energy\"][\"Mean\"][\"real\"]\n\n# plot the energy levels\nplt.axhline(ed_energies[0], color=\"red\", label=\"E0\")\nfor e in ed_energies[1:]:\n plt.axhline(e, color=\"black\")\nplt.plot(x, y, color=\"red\", label=\"VMC\")\nplt.xlabel(\"step\")\nplt.ylabel(\"E\")\nplt.show()\n", "path": "Examples/Fermions/fermi_hubbard.py"}], "after_files": [{"content": "import netket as nk\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\n\nfrom netket import experimental as nkx\n\nL = 2 # take a 2x2 lattice\nD = 2\nt = 1 # tunneling/hopping\nU = 0.01 # coulomb\n\n# create the graph our fermions can hop on\ng = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)\nn_sites = g.n_nodes\n\n# create a hilbert space with 2 up and 2 down spins\nhi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))\n\n\n# create an operator representing fermi hubbard interactions\n# -t (i^ j + h.c.) + U (i^ i j^ j)\n# we will create a helper function to abbreviate the creation, destruction and number operators\n# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\ndef c(site, sz):\n return nkx.operator.fermion.destroy(hi, site, sz=sz)\n\n\ndef cdag(site, sz):\n return nkx.operator.fermion.create(hi, site, sz=sz)\n\n\ndef nc(site, sz):\n return nkx.operator.fermion.number(hi, site, sz=sz)\n\n\nup = +1 / 2\ndown = -1 / 2\nham = 0.0\nfor sz in (up, down):\n for u, v in g.edges():\n ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)\nfor u in g.nodes():\n ham += U * nc(u, up) * nc(u, down)\n\nprint(\"Hamiltonian =\", ham.operator_string())\n\n# metropolis exchange moves fermions around according to a graph\n# the physical graph has LxL vertices, but the computational basis defined by the\n# hilbert space contains (2s+1)*L*L occupation numbers\n# by taking a disjoint copy of the lattice, we can\n# move the fermions around independently for both spins\n# and therefore conserve the number of fermions with up and down spin\n\n# g.n_nodes == L*L --> disj_graph == 2*L*L\ndisj_graph = nk.graph.disjoint_union(g, g)\nsa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)\n\n# since the hilbert basis is a set of occupation numbers, we can take a general RBM\n# we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex\nma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)\nvs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)\n\n# we will use sgd with Stochastic Reconfiguration\nopt = nk.optimizer.Sgd(learning_rate=0.01)\nsr = nk.optimizer.SR(diag_shift=0.1)\n\ngs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)\n\n# now run the optimization\n# first step will take longer in order to compile\nexp_name = \"fermions_test\"\ngs.run(500, out=exp_name)\n\n############## plot #################\n\ned_energies = np.linalg.eigvalsh(ham.to_dense())\n\nwith open(f\"{exp_name}.log\", \"r\") as f:\n data = json.load(f)\n\nx = data[\"Energy\"][\"iters\"]\ny = data[\"Energy\"][\"Mean\"][\"real\"]\n\n# plot the energy levels\nplt.axhline(ed_energies[0], color=\"red\", label=\"E0\")\nfor e in ed_energies[1:]:\n plt.axhline(e, 
color=\"black\")\nplt.plot(x, y, color=\"red\", label=\"VMC\")\nplt.xlabel(\"step\")\nplt.ylabel(\"E\")\nplt.show()\n", "path": "Examples/Fermions/fermi_hubbard.py"}]} | 1,472 | 219 |
gh_patches_debug_25100 | rasdani/github-patches | git_diff | ros__ros_comm-433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
message_filters: Incorporate python approximate time synchronizer
https://github.com/ros-perception/image_pipeline/issues/76
--- END ISSUE ---
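As context for the request, here is a hedged usage sketch of how such a synchronizer is typically driven from user code; the constructor signature `(fs, queue_size, slop)` follows the patch below, while the node name, topic names, and sensor_msgs types are placeholders rather than details from the issue:
```python
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image


def callback(image, camera_info):
    # One message per input filter; header stamps agree to within `slop` seconds.
    rospy.loginfo("pair: %s / %s", image.header.stamp, camera_info.header.stamp)


rospy.init_node("approx_sync_example")
image_sub = message_filters.Subscriber("image", Image)
info_sub = message_filters.Subscriber("camera_info", CameraInfo)
sync = message_filters.ApproximateTimeSynchronizer([image_sub, info_sub], queue_size=10, slop=0.1)
sync.registerCallback(callback)
rospy.spin()
```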
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utilities/message_filters/src/message_filters/__init__.py`
Content:
```
1 # Copyright (c) 2009, Willow Garage, Inc.
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions are met:
6 #
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of the Willow Garage, Inc. nor the names of its
13 # contributors may be used to endorse or promote products derived from
14 # this software without specific prior written permission.
15 #
16 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
17 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19 # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
20 # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
21 # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
22 # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
23 # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
24 # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
25 # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
26 # POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Message Filter Objects
30 ======================
31 """
32
33 import threading
34 import rospy
35
36 class SimpleFilter:
37
38 def __init__(self):
39 self.callbacks = {}
40
41 def registerCallback(self, cb, *args):
42 """
43 Register a callback function `cb` to be called when this filter
44 has output.
45 The filter calls the function ``cb`` with a filter-dependent list of arguments,
46 followed by the call-supplied arguments ``args``.
47 """
48
49 conn = len(self.callbacks)
50 self.callbacks[conn] = (cb, args)
51 return conn
52
53 def signalMessage(self, *msg):
54 for (cb, args) in self.callbacks.values():
55 cb(*(msg + args))
56
57 class Subscriber(SimpleFilter):
58
59 """
60 ROS subscription filter. Identical arguments as :class:`rospy.Subscriber`.
61
62 This class acts as a highest-level filter, simply passing messages
63 from a ROS subscription through to the filters which have connected
64 to it.
65 """
66 def __init__(self, *args, **kwargs):
67 SimpleFilter.__init__(self)
68 self.topic = args[0]
69 kwargs['callback'] = self.callback
70 self.sub = rospy.Subscriber(*args, **kwargs)
71
72 def callback(self, msg):
73 self.signalMessage(msg)
74
75 def getTopic(self):
76 return self.topic
77
78 class Cache(SimpleFilter):
79
80 """
81 Stores a time history of messages.
82
83 Given a stream of messages, the most recent ``cache_size`` messages
84 are cached in a ring buffer, from which time intervals of the cache
85 can then be retrieved by the client.
86 """
87
88 def __init__(self, f, cache_size = 1):
89 SimpleFilter.__init__(self)
90 self.connectInput(f)
91 self.cache_size = cache_size
92
93 def connectInput(self, f):
94 self.incoming_connection = f.registerCallback(self.add)
95
96 def add(self, msg):
97 # Add msg to cache... XXX TODO
98
99 self.signalMessage(msg)
100
101 class TimeSynchronizer(SimpleFilter):
102
103 """
104 Synchronizes messages by their timestamps.
105
106 :class:`TimeSynchronizer` synchronizes incoming message filters by the
107 timestamps contained in their messages' headers. TimeSynchronizer
108 listens on multiple input message filters ``fs``, and invokes the callback
109 when it has a collection of messages with matching timestamps.
110
111 The signature of the callback function is::
112
113 def callback(msg1, ... msgN):
114
115 where N is the number of input message filters, and each message is
116 the output of the corresponding filter in ``fs``.
117 The required ``queue size`` parameter specifies how many sets of
118 messages it should store from each input filter (by timestamp)
119 while waiting for messages to arrive and complete their "set".
120 """
121
122 def __init__(self, fs, queue_size):
123 SimpleFilter.__init__(self)
124 self.connectInput(fs)
125 self.queue_size = queue_size
126 self.lock = threading.Lock()
127
128 def connectInput(self, fs):
129 self.queues = [{} for f in fs]
130 self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]
131
132 def add(self, msg, my_queue):
133 self.lock.acquire()
134 my_queue[msg.header.stamp] = msg
135 while len(my_queue) > self.queue_size:
136 del my_queue[min(my_queue)]
137 # common is the set of timestamps that occur in all queues
138 common = reduce(set.intersection, [set(q) for q in self.queues])
139 for t in sorted(common):
140 # msgs is list of msgs (one from each queue) with stamp t
141 msgs = [q[t] for q in self.queues]
142 self.signalMessage(*msgs)
143 for q in self.queues:
144 del q[t]
145 self.lock.release()
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utilities/message_filters/src/message_filters/__init__.py b/utilities/message_filters/src/message_filters/__init__.py
--- a/utilities/message_filters/src/message_filters/__init__.py
+++ b/utilities/message_filters/src/message_filters/__init__.py
@@ -30,6 +30,7 @@
======================
"""
+import itertools
import threading
import rospy
@@ -143,3 +144,33 @@
for q in self.queues:
del q[t]
self.lock.release()
+
+class ApproximateTimeSynchronizer(TimeSynchronizer):
+
+ """
+ Approximately synchronizes messages by their timestamps.
+
+ :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the
+ timestamps contained in their messages' headers. The API is the same as TimeSynchronizer
+ except for an extra `slop` parameter in the constructor that defines the delay (in seconds)
+ with which messages can be synchronized
+ """
+
+ def __init__(self, fs, queue_size, slop):
+ TimeSynchronizer.__init__(self, fs, queue_size)
+ self.slop = rospy.Duration.from_sec(slop)
+
+ def add(self, msg, my_queue):
+ self.lock.acquire()
+ my_queue[msg.header.stamp] = msg
+ while len(my_queue) > self.queue_size:
+ del my_queue[min(my_queue)]
+ for vv in itertools.product(*[list(q.keys()) for q in self.queues]):
+ qt = list(zip(self.queues, vv))
+ if ( ((max(vv) - min(vv)) < self.slop) and
+ (len([1 for q,t in qt if t not in q]) == 0) ):
+ msgs = [q[t] for q,t in qt]
+ self.signalMessage(*msgs)
+ for q,t in qt:
+ del q[t]
+ self.lock.release()
| {"golden_diff": "diff --git a/utilities/message_filters/src/message_filters/__init__.py b/utilities/message_filters/src/message_filters/__init__.py\n--- a/utilities/message_filters/src/message_filters/__init__.py\n+++ b/utilities/message_filters/src/message_filters/__init__.py\n@@ -30,6 +30,7 @@\n ======================\n \"\"\"\n \n+import itertools\n import threading\n import rospy\n \n@@ -143,3 +144,33 @@\n for q in self.queues:\n del q[t]\n self.lock.release()\n+\n+class ApproximateTimeSynchronizer(TimeSynchronizer):\n+\n+ \"\"\"\n+ Approximately synchronizes messages by their timestamps.\n+\n+ :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the\n+ timestamps contained in their messages' headers. The API is the same as TimeSynchronizer\n+ except for an extra `slop` parameter in the constructor that defines the delay (in seconds)\n+ with which messages can be synchronized\n+ \"\"\"\n+\n+ def __init__(self, fs, queue_size, slop):\n+ TimeSynchronizer.__init__(self, fs, queue_size)\n+ self.slop = rospy.Duration.from_sec(slop)\n+\n+ def add(self, msg, my_queue):\n+ self.lock.acquire()\n+ my_queue[msg.header.stamp] = msg\n+ while len(my_queue) > self.queue_size:\n+ del my_queue[min(my_queue)]\n+ for vv in itertools.product(*[list(q.keys()) for q in self.queues]):\n+ qt = list(zip(self.queues, vv))\n+ if ( ((max(vv) - min(vv)) < self.slop) and\n+ (len([1 for q,t in qt if t not in q]) == 0) ):\n+ msgs = [q[t] for q,t in qt]\n+ self.signalMessage(*msgs)\n+ for q,t in qt:\n+ del q[t]\n+ self.lock.release()\n", "issue": "message_filters: Incorporate python approximate time synchronizer\nhttps://github.com/ros-perception/image_pipeline/issues/76\n\n", "before_files": [{"content": "# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of the Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nMessage Filter Objects\n======================\n\"\"\"\n\nimport threading\nimport rospy\n\nclass SimpleFilter:\n\n def __init__(self):\n self.callbacks = {}\n\n def registerCallback(self, cb, *args):\n \"\"\"\n Register a callback function `cb` to be called when this filter\n has output.\n The filter calls the function ``cb`` with a filter-dependent list of arguments,\n followed by the call-supplied arguments ``args``.\n \"\"\"\n\n conn = len(self.callbacks)\n self.callbacks[conn] = (cb, args)\n return conn\n\n def signalMessage(self, *msg):\n for (cb, args) in self.callbacks.values():\n cb(*(msg + args))\n\nclass Subscriber(SimpleFilter):\n \n \"\"\"\n ROS subscription filter. Identical arguments as :class:`rospy.Subscriber`.\n\n This class acts as a highest-level filter, simply passing messages\n from a ROS subscription through to the filters which have connected\n to it.\n \"\"\"\n def __init__(self, *args, **kwargs):\n SimpleFilter.__init__(self)\n self.topic = args[0]\n kwargs['callback'] = self.callback\n self.sub = rospy.Subscriber(*args, **kwargs)\n\n def callback(self, msg):\n self.signalMessage(msg)\n\n def getTopic(self):\n return self.topic\n\nclass Cache(SimpleFilter):\n\n \"\"\"\n Stores a time history of messages.\n\n Given a stream of messages, the most recent ``cache_size`` messages\n are cached in a ring buffer, from which time intervals of the cache\n can then be retrieved by the client.\n \"\"\"\n\n def __init__(self, f, cache_size = 1):\n SimpleFilter.__init__(self)\n self.connectInput(f)\n self.cache_size = cache_size\n\n def connectInput(self, f):\n self.incoming_connection = f.registerCallback(self.add)\n\n def add(self, msg):\n # Add msg to cache... XXX TODO\n\n self.signalMessage(msg)\n\nclass TimeSynchronizer(SimpleFilter):\n\n \"\"\"\n Synchronizes messages by their timestamps.\n\n :class:`TimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. TimeSynchronizer\n listens on multiple input message filters ``fs``, and invokes the callback\n when it has a collection of messages with matching timestamps.\n\n The signature of the callback function is::\n\n def callback(msg1, ... 
msgN):\n\n where N is the number of input message filters, and each message is\n the output of the corresponding filter in ``fs``.\n The required ``queue size`` parameter specifies how many sets of\n messages it should store from each input filter (by timestamp)\n while waiting for messages to arrive and complete their \"set\".\n \"\"\"\n\n def __init__(self, fs, queue_size):\n SimpleFilter.__init__(self)\n self.connectInput(fs)\n self.queue_size = queue_size\n self.lock = threading.Lock()\n\n def connectInput(self, fs):\n self.queues = [{} for f in fs]\n self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n # common is the set of timestamps that occur in all queues\n common = reduce(set.intersection, [set(q) for q in self.queues])\n for t in sorted(common):\n # msgs is list of msgs (one from each queue) with stamp t\n msgs = [q[t] for q in self.queues]\n self.signalMessage(*msgs)\n for q in self.queues:\n del q[t]\n self.lock.release()\n", "path": "utilities/message_filters/src/message_filters/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of the Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nMessage Filter Objects\n======================\n\"\"\"\n\nimport itertools\nimport threading\nimport rospy\n\nclass SimpleFilter:\n\n def __init__(self):\n self.callbacks = {}\n\n def registerCallback(self, cb, *args):\n \"\"\"\n Register a callback function `cb` to be called when this filter\n has output.\n The filter calls the function ``cb`` with a filter-dependent list of arguments,\n followed by the call-supplied arguments ``args``.\n \"\"\"\n\n conn = len(self.callbacks)\n self.callbacks[conn] = (cb, args)\n return conn\n\n def signalMessage(self, *msg):\n for (cb, args) in self.callbacks.values():\n cb(*(msg + args))\n\nclass Subscriber(SimpleFilter):\n \n \"\"\"\n ROS subscription filter. 
Identical arguments as :class:`rospy.Subscriber`.\n\n This class acts as a highest-level filter, simply passing messages\n from a ROS subscription through to the filters which have connected\n to it.\n \"\"\"\n def __init__(self, *args, **kwargs):\n SimpleFilter.__init__(self)\n self.topic = args[0]\n kwargs['callback'] = self.callback\n self.sub = rospy.Subscriber(*args, **kwargs)\n\n def callback(self, msg):\n self.signalMessage(msg)\n\n def getTopic(self):\n return self.topic\n\nclass Cache(SimpleFilter):\n\n \"\"\"\n Stores a time history of messages.\n\n Given a stream of messages, the most recent ``cache_size`` messages\n are cached in a ring buffer, from which time intervals of the cache\n can then be retrieved by the client.\n \"\"\"\n\n def __init__(self, f, cache_size = 1):\n SimpleFilter.__init__(self)\n self.connectInput(f)\n self.cache_size = cache_size\n\n def connectInput(self, f):\n self.incoming_connection = f.registerCallback(self.add)\n\n def add(self, msg):\n # Add msg to cache... XXX TODO\n\n self.signalMessage(msg)\n\nclass TimeSynchronizer(SimpleFilter):\n\n \"\"\"\n Synchronizes messages by their timestamps.\n\n :class:`TimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. TimeSynchronizer\n listens on multiple input message filters ``fs``, and invokes the callback\n when it has a collection of messages with matching timestamps.\n\n The signature of the callback function is::\n\n def callback(msg1, ... msgN):\n\n where N is the number of input message filters, and each message is\n the output of the corresponding filter in ``fs``.\n The required ``queue size`` parameter specifies how many sets of\n messages it should store from each input filter (by timestamp)\n while waiting for messages to arrive and complete their \"set\".\n \"\"\"\n\n def __init__(self, fs, queue_size):\n SimpleFilter.__init__(self)\n self.connectInput(fs)\n self.queue_size = queue_size\n self.lock = threading.Lock()\n\n def connectInput(self, fs):\n self.queues = [{} for f in fs]\n self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n # common is the set of timestamps that occur in all queues\n common = reduce(set.intersection, [set(q) for q in self.queues])\n for t in sorted(common):\n # msgs is list of msgs (one from each queue) with stamp t\n msgs = [q[t] for q in self.queues]\n self.signalMessage(*msgs)\n for q in self.queues:\n del q[t]\n self.lock.release()\n\nclass ApproximateTimeSynchronizer(TimeSynchronizer):\n\n \"\"\"\n Approximately synchronizes messages by their timestamps.\n\n :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. 
The API is the same as TimeSynchronizer\n except for an extra `slop` parameter in the constructor that defines the delay (in seconds)\n with which messages can be synchronized\n \"\"\"\n\n def __init__(self, fs, queue_size, slop):\n TimeSynchronizer.__init__(self, fs, queue_size)\n self.slop = rospy.Duration.from_sec(slop)\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n for vv in itertools.product(*[list(q.keys()) for q in self.queues]):\n qt = list(zip(self.queues, vv))\n if ( ((max(vv) - min(vv)) < self.slop) and\n (len([1 for q,t in qt if t not in q]) == 0) ):\n msgs = [q[t] for q,t in qt]\n self.signalMessage(*msgs)\n for q,t in qt:\n del q[t]\n self.lock.release()\n", "path": "utilities/message_filters/src/message_filters/__init__.py"}]} | 1,780 | 432 |
gh_patches_debug_5242 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-884 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong and correct examples of nested annotations mixed up in docs for Union type
# Bug report
## What's wrong
From ``NestedAnnotationsViolation`` class docstring:
```
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
Union[Union[int, str], float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
Union[int, str, float]
```
There is a mistake: the correct and wrong examples for the ``Union`` type are swapped. If I run flake8 on a file that contains the ``Union[int, str, float]`` annotation, no error is raised, while ``Union[Union[int, str], float]`` raises WPS702. Both behaviours are expected, I guess; it is just a mismatch in the documentation.
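A minimal file illustrating that behaviour (a hypothetical example, assuming function argument annotations are checked like any other annotation; the WPS702 message text is taken from the violation class below):
```python
from typing import Union


def flat(x: Union[int, str, float]) -> None:
    ...  # not reported: the annotation is already flattened


def nested(x: Union[Union[int, str], float]) -> None:
    ...  # reported as WPS702 (found redundant nested typing annotation)
```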
## How is that should be
```
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
Union[int, str, float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
Union[Union[int, str], float]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/violations/annotations.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 These checks help to prevent incorrect usage of Python 3 typing annotations.
5
6 While they maybe of a great help in writing clear
7 and concise code, they still can be abused.
8
9 Once again, these rules are highly subjective. But, we love them.
10
11 .. currentmodule:: wemake_python_styleguide.violations.annotations
12
13 Summary
14 -------
15
16 .. autosummary::
17 :nosignatures:
18
19 LiteralNoneViolation
20 NestedAnnotationsViolation
21
22 Annotation checks
23 ------------------
24
25 .. autoclass:: LiteralNoneViolation
26 .. autoclass:: NestedAnnotationsViolation
27
28 """
29
30 from typing_extensions import final
31
32 from wemake_python_styleguide.violations.base import ASTViolation
33
34
35 @final
36 class LiteralNoneViolation(ASTViolation):
37 """
38 Forbids to use ``Literal[None]`` typing annotation.
39
40 Reasoning:
41 Literal[None] is just the same as None.
42 There's no need to use the first version.
43 It is not type related, it is a consistency rule.
44
45 Solution:
46 Replace ``Literal[None]`` with ``None``.
47
48 Example::
49
50 # Correct:
51 def func(empty: None):
52 '''Empty function.'''
53
54 # Wrong:
55 def func(empty: Literal[None]):
56 '''Empty function.'''
57
58 .. versionadded:: 0.13.0
59
60 """
61
62 code = 701
63 error_template = 'Found useless `Literal[None]` typing annotation'
64
65
66 @final
67 class NestedAnnotationsViolation(ASTViolation):
68 """
69 Forbids use of nested Literal and Union Annotation.
70
71 Reasoning:
72 There is no need to nest certain annotations of the same type.
73 They are exactly equivalent to the flattened version.
74 Use the flattened version for consistency.
75
76 Solution:
77 Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.
78
79 Example::
80 # Correct:
81 Literal[1, 2, 3, "foo", 5, None]
82 Union[Union[int, str], float]
83
84 # Wrong:
85 Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
86 Union[int, str, float]
87
88 .. versionadded:: 0.13.0
89
90 """
91
92 error_template = 'Found redundant nested typing annotation'
93 code = 702
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py
--- a/wemake_python_styleguide/violations/annotations.py
+++ b/wemake_python_styleguide/violations/annotations.py
@@ -79,11 +79,11 @@
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
- Union[Union[int, str], float]
+ Union[int, str, float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
- Union[int, str, float]
+ Union[Union[int, str], float]
.. versionadded:: 0.13.0
| {"golden_diff": "diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py\n--- a/wemake_python_styleguide/violations/annotations.py\n+++ b/wemake_python_styleguide/violations/annotations.py\n@@ -79,11 +79,11 @@\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n- Union[Union[int, str], float]\n+ Union[int, str, float]\n \n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n- Union[int, str, float]\n+ Union[Union[int, str], float]\n \n .. versionadded:: 0.13.0\n", "issue": "Wrong and correct examples of nested annotations mixed up in docs for Union type\n# Bug report\r\n\r\n<!--\r\nHi, thanks for submitting a bug. We appreciate that.\r\n\r\nBut, we will need some information about what's wrong to help you.\r\n-->\r\n\r\n## What's wrong\r\n\r\nFrom ``NestedAnnotationsViolation`` class docstring:\r\n```\r\nExample::\r\n # Correct:\r\n Literal[1, 2, 3, \"foo\", 5, None]\r\n Union[Union[int, str], float]\r\n\r\n # Wrong:\r\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\r\n Union[int, str, float]\r\n```\r\nThere is a mistake: correct and wrong examples for ``Union`` type are mixed up. If I run flake8 passing it a file which contains ``Union[int, str, float]`` annotation, no error will be raised. And for ``Union[Union[int, str], float]`` WPS702 is raised. Both are expected behaviour I guess, just a mismatch in documentation.\r\n## How is that should be\r\n\r\n```\r\nExample::\r\n # Correct:\r\n Literal[1, 2, 3, \"foo\", 5, None]\r\n Union[int, str, float]\r\n\r\n # Wrong:\r\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\r\n Union[Union[int, str], float]\r\n```\r\n<!--\r\n\r\n Love wemake-python-styleguide? \u2764\ufe0f\r\n Do you want your bugs to be solved faster?\r\n\r\n Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/wemake-python-styleguide/donate\r\n\r\n-->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThese checks help to prevent incorrect usage of Python 3 typing annotations.\n\nWhile they maybe of a great help in writing clear\nand concise code, they still can be abused.\n\nOnce again, these rules are highly subjective. But, we love them.\n\n.. currentmodule:: wemake_python_styleguide.violations.annotations\n\nSummary\n-------\n\n.. autosummary::\n :nosignatures:\n\n LiteralNoneViolation\n NestedAnnotationsViolation\n\nAnnotation checks\n------------------\n\n.. autoclass:: LiteralNoneViolation\n.. autoclass:: NestedAnnotationsViolation\n\n\"\"\"\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.violations.base import ASTViolation\n\n\n@final\nclass LiteralNoneViolation(ASTViolation):\n \"\"\"\n Forbids to use ``Literal[None]`` typing annotation.\n\n Reasoning:\n Literal[None] is just the same as None.\n There's no need to use the first version.\n It is not type related, it is a consistency rule.\n\n Solution:\n Replace ``Literal[None]`` with ``None``.\n\n Example::\n\n # Correct:\n def func(empty: None):\n '''Empty function.'''\n\n # Wrong:\n def func(empty: Literal[None]):\n '''Empty function.'''\n\n .. 
versionadded:: 0.13.0\n\n \"\"\"\n\n code = 701\n error_template = 'Found useless `Literal[None]` typing annotation'\n\n\n@final\nclass NestedAnnotationsViolation(ASTViolation):\n \"\"\"\n Forbids use of nested Literal and Union Annotation.\n\n Reasoning:\n There is no need to nest certain annotations of the same type.\n They are exactly equivalent to the flattened version.\n Use the flattened version for consistency.\n\n Solution:\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[Union[int, str], float]\n\n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n Union[int, str, float]\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n error_template = 'Found redundant nested typing annotation'\n code = 702\n", "path": "wemake_python_styleguide/violations/annotations.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThese checks help to prevent incorrect usage of Python 3 typing annotations.\n\nWhile they maybe of a great help in writing clear\nand concise code, they still can be abused.\n\nOnce again, these rules are highly subjective. But, we love them.\n\n.. currentmodule:: wemake_python_styleguide.violations.annotations\n\nSummary\n-------\n\n.. autosummary::\n :nosignatures:\n\n LiteralNoneViolation\n NestedAnnotationsViolation\n\nAnnotation checks\n------------------\n\n.. autoclass:: LiteralNoneViolation\n.. autoclass:: NestedAnnotationsViolation\n\n\"\"\"\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.violations.base import ASTViolation\n\n\n@final\nclass LiteralNoneViolation(ASTViolation):\n \"\"\"\n Forbids to use ``Literal[None]`` typing annotation.\n\n Reasoning:\n Literal[None] is just the same as None.\n There's no need to use the first version.\n It is not type related, it is a consistency rule.\n\n Solution:\n Replace ``Literal[None]`` with ``None``.\n\n Example::\n\n # Correct:\n def func(empty: None):\n '''Empty function.'''\n\n # Wrong:\n def func(empty: Literal[None]):\n '''Empty function.'''\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n code = 701\n error_template = 'Found useless `Literal[None]` typing annotation'\n\n\n@final\nclass NestedAnnotationsViolation(ASTViolation):\n \"\"\"\n Forbids use of nested Literal and Union Annotation.\n\n Reasoning:\n There is no need to nest certain annotations of the same type.\n They are exactly equivalent to the flattened version.\n Use the flattened version for consistency.\n\n Solution:\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[int, str, float]\n\n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n Union[Union[int, str], float]\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n error_template = 'Found redundant nested typing annotation'\n code = 702\n", "path": "wemake_python_styleguide/violations/annotations.py"}]} | 1,322 | 182 |
gh_patches_debug_16457 | rasdani/github-patches | git_diff | explosion__spaCy-3583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Training a new model using cli throws error `KeyError`
I am trying to train a new spacy model based on the [Tweebank](https://github.com/Oneplus/Tweebank) annotated data.
For that I first tried using the training example given in the docs to familiarize myself with the procedure.
Both the example and training on the Tweebank data throw the same error.
## How to reproduce the behaviour
Follow the example [here](https://spacy.io/usage/training#spacy-train-cli)
For the sake of completeness:
```
git clone https://github.com/UniversalDependencies/UD_Spanish-AnCora
mkdir ancora-json
python -m spacy convert UD_Spanish-AnCora/es_ancora-ud-train.conllu ancora-json
python -m spacy convert UD_Spanish-AnCora/es_ancora-ud-dev.conllu ancora-json
mkdir models
python -m spacy train es models ancora-json/es_ancora-ud-train.jsonl ancora-json/es_ancora-ud-dev.jsonl
```
## Your Environment
## Info about spaCy
* **spaCy version:** 2.1.3
* **Platform:** Linux-4.15.0-46-generic-x86_64-with-debian-buster-sid
* **Python version:** 3.6.7
* **Models:** en_core_web_md, en_core_web_sm
## The Error
```
>>> python -m spacy train es models es_ancora-ud-train.jsonl es_ancora-ud-dev.jsonl
Training pipeline: ['tagger', 'parser', 'ner']
Starting with blank model 'es'
Counting training words (limit=0)
Traceback (most recent call last):
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/spacy/__main__.py", line 35, in <module>
plac.call(commands[command], sys.argv[1:])
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/plac_core.py", line 328, in call
cmd, result = parser.consume(arglist)
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/plac_core.py", line 207, in consume
return cmd, self.func(*(args + varargs + extraopts), **kwargs)
File "/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/spacy/cli/train.py", line 196, in train
corpus = GoldCorpus(train_path, dev_path, limit=n_examples)
File "gold.pyx", line 112, in spacy.gold.GoldCorpus.__init__
File "gold.pyx", line 125, in spacy.gold.GoldCorpus.write_msgpack
KeyError: 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spacy/cli/convert.py`
Content:
```
1 # coding: utf8
2 from __future__ import unicode_literals
3
4 import plac
5 from pathlib import Path
6 from wasabi import Printer
7 import srsly
8
9 from .converters import conllu2json, iob2json, conll_ner2json
10 from .converters import ner_jsonl2json
11
12
13 # Converters are matched by file extension. To add a converter, add a new
14 # entry to this dict with the file extension mapped to the converter function
15 # imported from /converters.
16 CONVERTERS = {
17 "conllubio": conllu2json,
18 "conllu": conllu2json,
19 "conll": conllu2json,
20 "ner": conll_ner2json,
21 "iob": iob2json,
22 "jsonl": ner_jsonl2json,
23 }
24
25 # File types
26 FILE_TYPES = ("json", "jsonl", "msg")
27 FILE_TYPES_STDOUT = ("json", "jsonl")
28
29
30 @plac.annotations(
31 input_file=("Input file", "positional", None, str),
32 output_dir=("Output directory. '-' for stdout.", "positional", None, str),
33 file_type=("Type of data to produce: {}".format(FILE_TYPES), "option", "t", str),
34 n_sents=("Number of sentences per doc", "option", "n", int),
35 converter=("Converter: {}".format(tuple(CONVERTERS.keys())), "option", "c", str),
36 lang=("Language (if tokenizer required)", "option", "l", str),
37 morphology=("Enable appending morphology to tags", "flag", "m", bool),
38 )
39 def convert(
40 input_file,
41 output_dir="-",
42 file_type="jsonl",
43 n_sents=1,
44 morphology=False,
45 converter="auto",
46 lang=None,
47 ):
48 """
49 Convert files into JSON format for use with train command and other
50 experiment management functions. If no output_dir is specified, the data
51 is written to stdout, so you can pipe them forward to a JSONL file:
52 $ spacy convert some_file.conllu > some_file.jsonl
53 """
54 msg = Printer()
55 input_path = Path(input_file)
56 if file_type not in FILE_TYPES:
57 msg.fail(
58 "Unknown file type: '{}'".format(file_type),
59 "Supported file types: '{}'".format(", ".join(FILE_TYPES)),
60 exits=1,
61 )
62 if file_type not in FILE_TYPES_STDOUT and output_dir == "-":
63 # TODO: support msgpack via stdout in srsly?
64 msg.fail(
65 "Can't write .{} data to stdout.".format(file_type),
66 "Please specify an output directory.",
67 exits=1,
68 )
69 if not input_path.exists():
70 msg.fail("Input file not found", input_path, exits=1)
71 if output_dir != "-" and not Path(output_dir).exists():
72 msg.fail("Output directory not found", output_dir, exits=1)
73 if converter == "auto":
74 converter = input_path.suffix[1:]
75 if converter not in CONVERTERS:
76 msg.fail("Can't find converter for {}".format(converter), exits=1)
77 # Use converter function to convert data
78 func = CONVERTERS[converter]
79 input_data = input_path.open("r", encoding="utf-8").read()
80 data = func(input_data, n_sents=n_sents, use_morphology=morphology, lang=lang)
81 if output_dir != "-":
82 # Export data to a file
83 suffix = ".{}".format(file_type)
84 output_file = Path(output_dir) / Path(input_path.parts[-1]).with_suffix(suffix)
85 if file_type == "json":
86 srsly.write_json(output_file, data)
87 elif file_type == "jsonl":
88 srsly.write_jsonl(output_file, data)
89 elif file_type == "msg":
90 srsly.write_msgpack(output_file, data)
91 msg.good("Generated output file ({} documents)".format(len(data)), output_file)
92 else:
93 # Print to stdout
94 if file_type == "json":
95 srsly.write_json("-", data)
96 elif file_type == "jsonl":
97 srsly.write_jsonl("-", data)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spacy/cli/convert.py b/spacy/cli/convert.py
--- a/spacy/cli/convert.py
+++ b/spacy/cli/convert.py
@@ -39,7 +39,7 @@
def convert(
input_file,
output_dir="-",
- file_type="jsonl",
+ file_type="json",
n_sents=1,
morphology=False,
converter="auto",
@@ -48,8 +48,8 @@
"""
Convert files into JSON format for use with train command and other
experiment management functions. If no output_dir is specified, the data
- is written to stdout, so you can pipe them forward to a JSONL file:
- $ spacy convert some_file.conllu > some_file.jsonl
+ is written to stdout, so you can pipe them forward to a JSON file:
+ $ spacy convert some_file.conllu > some_file.json
"""
msg = Printer()
input_path = Path(input_file)
| {"golden_diff": "diff --git a/spacy/cli/convert.py b/spacy/cli/convert.py\n--- a/spacy/cli/convert.py\n+++ b/spacy/cli/convert.py\n@@ -39,7 +39,7 @@\n def convert(\n input_file,\n output_dir=\"-\",\n- file_type=\"jsonl\",\n+ file_type=\"json\",\n n_sents=1,\n morphology=False,\n converter=\"auto\",\n@@ -48,8 +48,8 @@\n \"\"\"\n Convert files into JSON format for use with train command and other\n experiment management functions. If no output_dir is specified, the data\n- is written to stdout, so you can pipe them forward to a JSONL file:\n- $ spacy convert some_file.conllu > some_file.jsonl\n+ is written to stdout, so you can pipe them forward to a JSON file:\n+ $ spacy convert some_file.conllu > some_file.json\n \"\"\"\n msg = Printer()\n input_path = Path(input_file)\n", "issue": "Training a new model using cli throws error `KeyError`\nI am trying to train a new spacy model based on the [Tweebank](https://github.com/Oneplus/Tweebank) annotated data. \r\nFor that I first tried using the training example given in the docs to familiarize myself with the procedure. \r\nExample and training on the Tweebank throw the same error.\r\n\r\n## How to reproduce the behaviour\r\nFollow the example [here](https://spacy.io/usage/training#spacy-train-cli)\r\nFor the sake of completeness:\r\n```\r\ngit clone https://github.com/UniversalDependencies/UD_Spanish-AnCora\r\nmkdir ancora-json\r\npython -m spacy convert UD_Spanish-AnCora/es_ancora-ud-train.conllu ancora-json\r\npython -m spacy convert UD_Spanish-AnCora/es_ancora-ud-dev.conllu ancora-json\r\nmkdir models\r\npython -m spacy train es models ancora-json/es_ancora-ud-train.jsonl ancora-json/es_ancora-ud-dev.jsonl\r\n```\r\n\r\n## Your Environment\r\n## Info about spaCy\r\n\r\n* **spaCy version:** 2.1.3\r\n* **Platform:** Linux-4.15.0-46-generic-x86_64-with-debian-buster-sid\r\n* **Python version:** 3.6.7\r\n* **Models:** en_core_web_md, en_core_web_sm\r\n\r\n## The Error\r\n\r\n```\r\n>>> python -m spacy train es models es_ancora-ud-train.jsonl es_ancora-ud-dev.jsonl\r\nTraining pipeline: ['tagger', 'parser', 'ner']\r\nStarting with blank model 'es'\r\nCounting training words (limit=0)\r\nTraceback (most recent call last):\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/spacy/__main__.py\", line 35, in <module>\r\n plac.call(commands[command], sys.argv[1:])\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/plac_core.py\", line 328, in call\r\n cmd, result = parser.consume(arglist)\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/plac_core.py\", line 207, in consume\r\n return cmd, self.func(*(args + varargs + extraopts), **kwargs)\r\n File \"/home/shikhar/.conda/envs/factmata/lib/python3.6/site-packages/spacy/cli/train.py\", line 196, in train\r\n corpus = GoldCorpus(train_path, dev_path, limit=n_examples)\r\n File \"gold.pyx\", line 112, in spacy.gold.GoldCorpus.__init__\r\n File \"gold.pyx\", line 125, in spacy.gold.GoldCorpus.write_msgpack\r\nKeyError: 1\r\n\r\n```\r\n\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport plac\nfrom pathlib import Path\nfrom wasabi import Printer\nimport srsly\n\nfrom .converters import conllu2json, iob2json, conll_ner2json\nfrom .converters 
import ner_jsonl2json\n\n\n# Converters are matched by file extension. To add a converter, add a new\n# entry to this dict with the file extension mapped to the converter function\n# imported from /converters.\nCONVERTERS = {\n \"conllubio\": conllu2json,\n \"conllu\": conllu2json,\n \"conll\": conllu2json,\n \"ner\": conll_ner2json,\n \"iob\": iob2json,\n \"jsonl\": ner_jsonl2json,\n}\n\n# File types\nFILE_TYPES = (\"json\", \"jsonl\", \"msg\")\nFILE_TYPES_STDOUT = (\"json\", \"jsonl\")\n\n\[email protected](\n input_file=(\"Input file\", \"positional\", None, str),\n output_dir=(\"Output directory. '-' for stdout.\", \"positional\", None, str),\n file_type=(\"Type of data to produce: {}\".format(FILE_TYPES), \"option\", \"t\", str),\n n_sents=(\"Number of sentences per doc\", \"option\", \"n\", int),\n converter=(\"Converter: {}\".format(tuple(CONVERTERS.keys())), \"option\", \"c\", str),\n lang=(\"Language (if tokenizer required)\", \"option\", \"l\", str),\n morphology=(\"Enable appending morphology to tags\", \"flag\", \"m\", bool),\n)\ndef convert(\n input_file,\n output_dir=\"-\",\n file_type=\"jsonl\",\n n_sents=1,\n morphology=False,\n converter=\"auto\",\n lang=None,\n):\n \"\"\"\n Convert files into JSON format for use with train command and other\n experiment management functions. If no output_dir is specified, the data\n is written to stdout, so you can pipe them forward to a JSONL file:\n $ spacy convert some_file.conllu > some_file.jsonl\n \"\"\"\n msg = Printer()\n input_path = Path(input_file)\n if file_type not in FILE_TYPES:\n msg.fail(\n \"Unknown file type: '{}'\".format(file_type),\n \"Supported file types: '{}'\".format(\", \".join(FILE_TYPES)),\n exits=1,\n )\n if file_type not in FILE_TYPES_STDOUT and output_dir == \"-\":\n # TODO: support msgpack via stdout in srsly?\n msg.fail(\n \"Can't write .{} data to stdout.\".format(file_type),\n \"Please specify an output directory.\",\n exits=1,\n )\n if not input_path.exists():\n msg.fail(\"Input file not found\", input_path, exits=1)\n if output_dir != \"-\" and not Path(output_dir).exists():\n msg.fail(\"Output directory not found\", output_dir, exits=1)\n if converter == \"auto\":\n converter = input_path.suffix[1:]\n if converter not in CONVERTERS:\n msg.fail(\"Can't find converter for {}\".format(converter), exits=1)\n # Use converter function to convert data\n func = CONVERTERS[converter]\n input_data = input_path.open(\"r\", encoding=\"utf-8\").read()\n data = func(input_data, n_sents=n_sents, use_morphology=morphology, lang=lang)\n if output_dir != \"-\":\n # Export data to a file\n suffix = \".{}\".format(file_type)\n output_file = Path(output_dir) / Path(input_path.parts[-1]).with_suffix(suffix)\n if file_type == \"json\":\n srsly.write_json(output_file, data)\n elif file_type == \"jsonl\":\n srsly.write_jsonl(output_file, data)\n elif file_type == \"msg\":\n srsly.write_msgpack(output_file, data)\n msg.good(\"Generated output file ({} documents)\".format(len(data)), output_file)\n else:\n # Print to stdout\n if file_type == \"json\":\n srsly.write_json(\"-\", data)\n elif file_type == \"jsonl\":\n srsly.write_jsonl(\"-\", data)\n", "path": "spacy/cli/convert.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport plac\nfrom pathlib import Path\nfrom wasabi import Printer\nimport srsly\n\nfrom .converters import conllu2json, iob2json, conll_ner2json\nfrom .converters import ner_jsonl2json\n\n\n# Converters are matched by file extension. 
To add a converter, add a new\n# entry to this dict with the file extension mapped to the converter function\n# imported from /converters.\nCONVERTERS = {\n \"conllubio\": conllu2json,\n \"conllu\": conllu2json,\n \"conll\": conllu2json,\n \"ner\": conll_ner2json,\n \"iob\": iob2json,\n \"jsonl\": ner_jsonl2json,\n}\n\n# File types\nFILE_TYPES = (\"json\", \"jsonl\", \"msg\")\nFILE_TYPES_STDOUT = (\"json\", \"jsonl\")\n\n\[email protected](\n input_file=(\"Input file\", \"positional\", None, str),\n output_dir=(\"Output directory. '-' for stdout.\", \"positional\", None, str),\n file_type=(\"Type of data to produce: {}\".format(FILE_TYPES), \"option\", \"t\", str),\n n_sents=(\"Number of sentences per doc\", \"option\", \"n\", int),\n converter=(\"Converter: {}\".format(tuple(CONVERTERS.keys())), \"option\", \"c\", str),\n lang=(\"Language (if tokenizer required)\", \"option\", \"l\", str),\n morphology=(\"Enable appending morphology to tags\", \"flag\", \"m\", bool),\n)\ndef convert(\n input_file,\n output_dir=\"-\",\n file_type=\"json\",\n n_sents=1,\n morphology=False,\n converter=\"auto\",\n lang=None,\n):\n \"\"\"\n Convert files into JSON format for use with train command and other\n experiment management functions. If no output_dir is specified, the data\n is written to stdout, so you can pipe them forward to a JSON file:\n $ spacy convert some_file.conllu > some_file.json\n \"\"\"\n msg = Printer()\n input_path = Path(input_file)\n if file_type not in FILE_TYPES:\n msg.fail(\n \"Unknown file type: '{}'\".format(file_type),\n \"Supported file types: '{}'\".format(\", \".join(FILE_TYPES)),\n exits=1,\n )\n if file_type not in FILE_TYPES_STDOUT and output_dir == \"-\":\n # TODO: support msgpack via stdout in srsly?\n msg.fail(\n \"Can't write .{} data to stdout.\".format(file_type),\n \"Please specify an output directory.\",\n exits=1,\n )\n if not input_path.exists():\n msg.fail(\"Input file not found\", input_path, exits=1)\n if output_dir != \"-\" and not Path(output_dir).exists():\n msg.fail(\"Output directory not found\", output_dir, exits=1)\n if converter == \"auto\":\n converter = input_path.suffix[1:]\n if converter not in CONVERTERS:\n msg.fail(\"Can't find converter for {}\".format(converter), exits=1)\n # Use converter function to convert data\n func = CONVERTERS[converter]\n input_data = input_path.open(\"r\", encoding=\"utf-8\").read()\n data = func(input_data, n_sents=n_sents, use_morphology=morphology, lang=lang)\n if output_dir != \"-\":\n # Export data to a file\n suffix = \".{}\".format(file_type)\n output_file = Path(output_dir) / Path(input_path.parts[-1]).with_suffix(suffix)\n if file_type == \"json\":\n srsly.write_json(output_file, data)\n elif file_type == \"jsonl\":\n srsly.write_jsonl(output_file, data)\n elif file_type == \"msg\":\n srsly.write_msgpack(output_file, data)\n msg.good(\"Generated output file ({} documents)\".format(len(data)), output_file)\n else:\n # Print to stdout\n if file_type == \"json\":\n srsly.write_json(\"-\", data)\n elif file_type == \"jsonl\":\n srsly.write_jsonl(\"-\", data)\n", "path": "spacy/cli/convert.py"}]} | 2,077 | 217 |
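A minimal sketch of the record above after the patch (assumes spaCy 2.1 with the fix applied and the cloned UD_Spanish-AnCora corpus from the reproduction steps; `convert` is called directly instead of via `python -m spacy`):

```python
from pathlib import Path
from spacy.cli import convert

Path("ancora-json").mkdir(exist_ok=True)
# With the patched default file_type="json", the converter writes .json output,
# which GoldCorpus / `spacy train` can read without the KeyError above.
convert("UD_Spanish-AnCora/es_ancora-ud-train.conllu", "ancora-json")
assert Path("ancora-json/es_ancora-ud-train.json").exists()
```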
gh_patches_debug_1372 | rasdani/github-patches | git_diff | wright-group__WrightTools-361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
collection.keys returns data objects
should return names, but get objects
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/collection/_collection.py`
Content:
```
1 """Collection."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import os
8 import shutil
9
10 import numpy as np
11
12 import h5py
13
14 from .. import data as wt_data
15 from .._base import Group
16
17
18 # --- define --------------------------------------------------------------------------------------
19
20
21 __all__ = ['Collection']
22
23
24 # --- classes -------------------------------------------------------------------------------------
25
26
27 class Collection(Group):
28 """Nestable Collection of Data objects."""
29 class_name = 'Collection'
30
31 def __iter__(self):
32 self.__n = 0
33 return self
34
35 def __len__(self):
36 return len(self.item_names)
37
38 def __next__(self):
39 if self.__n < len(self):
40 out = self[self.__n]
41 self.__n += 1
42 else:
43 raise StopIteration
44 return out
45
46 def __repr__(self):
47 return '<WrightTools.Collection \'{0}\' {1} at {2}>'.format(self.natural_name,
48 self.item_names,
49 '::'.join([self.filepath,
50 self.name]))
51
52 def __getitem__(self, key):
53 if isinstance(key, int):
54 key = self.item_names[key]
55 out = h5py.Group.__getitem__(self, key)
56 if 'class' in out.attrs.keys():
57 if out.attrs['class'] == 'Data':
58 return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,
59 edit_local=True)
60 elif out.attrs['class'] == 'Collection':
61 return Collection(filepath=self.filepath, parent=self.name, name=key,
62 edit_local=True)
63 else:
64 return Group(filepath=self.filepath, parent=self.name, name=key,
65 edit_local=True)
66 else:
67 return out
68
69 def __setitem__(self, key, value):
70 raise NotImplementedError
71
72 @property
73 def item_names(self):
74 if 'item_names' not in self.attrs.keys():
75 self.attrs['item_names'] = np.array([], dtype='S')
76 return [s.decode() for s in self.attrs['item_names']]
77
78 def create_collection(self, name='collection', position=None, **kwargs):
79 collection = Collection(filepath=self.filepath, parent=self.name, name=name,
80 edit_local=True, **kwargs)
81 if position is None:
82 self._items.append(collection)
83 self.attrs['item_names'] = np.append(self.attrs['item_names'],
84 collection.natural_name.encode())
85 else:
86 self._items.insert(position, collection)
87 self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,
88 collection.natural_name.encode())
89 setattr(self, name, collection)
90 return collection
91
92 def create_data(self, name='data', position=None, **kwargs):
93 data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,
94 **kwargs)
95 if position is None:
96 self._items.append(data)
97 self.attrs['item_names'] = np.append(self.attrs['item_names'],
98 data.natural_name.encode())
99 else:
100 self._items.insert(position, data)
101 self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,
102 data.natural_name.encode())
103 setattr(self, name, data)
104 return data
105
106 def index(self):
107 raise NotImplementedError
108
109 def flush(self):
110 for item in self._items:
111 item.flush()
112 self.file.flush()
113
114 def save(self, filepath=None, verbose=True):
115 # TODO: documentation
116 self.flush() # ensure all changes are written to file
117 if filepath is None:
118 filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')
119 elif len(os.path.basename(filepath).split('.')) == 1:
120 filepath += '.wt5'
121 filepath = os.path.expanduser(filepath)
122 shutil.copyfile(src=self.filepath, dst=filepath)
123 if verbose:
124 print('file saved at', filepath)
125 return filepath
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/collection/_collection.py b/WrightTools/collection/_collection.py
--- a/WrightTools/collection/_collection.py
+++ b/WrightTools/collection/_collection.py
@@ -37,7 +37,7 @@
def __next__(self):
if self.__n < len(self):
- out = self[self.__n]
+ out = self.item_names[self.__n]
self.__n += 1
else:
raise StopIteration
| {"golden_diff": "diff --git a/WrightTools/collection/_collection.py b/WrightTools/collection/_collection.py\n--- a/WrightTools/collection/_collection.py\n+++ b/WrightTools/collection/_collection.py\n@@ -37,7 +37,7 @@\n \n def __next__(self):\n if self.__n < len(self):\n- out = self[self.__n]\n+ out = self.item_names[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n", "issue": "collection.keys returns data objects\nshould return names, but get objects\n", "before_files": [{"content": "\"\"\"Collection.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\nimport shutil\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import data as wt_data\nfrom .._base import Group\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Collection']\n\n\n# --- classes -------------------------------------------------------------------------------------\n\n\nclass Collection(Group):\n \"\"\"Nestable Collection of Data objects.\"\"\"\n class_name = 'Collection'\n\n def __iter__(self):\n self.__n = 0\n return self\n\n def __len__(self):\n return len(self.item_names)\n\n def __next__(self):\n if self.__n < len(self):\n out = self[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n return out\n\n def __repr__(self):\n return '<WrightTools.Collection \\'{0}\\' {1} at {2}>'.format(self.natural_name,\n self.item_names,\n '::'.join([self.filepath,\n self.name]))\n\n def __getitem__(self, key):\n if isinstance(key, int):\n key = self.item_names[key]\n out = h5py.Group.__getitem__(self, key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Data':\n return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __setitem__(self, key, value):\n raise NotImplementedError\n\n @property\n def item_names(self):\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return [s.decode() for s in self.attrs['item_names']]\n\n def create_collection(self, name='collection', position=None, **kwargs):\n collection = Collection(filepath=self.filepath, parent=self.name, name=name,\n edit_local=True, **kwargs)\n if position is None:\n self._items.append(collection)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n collection.natural_name.encode())\n else:\n self._items.insert(position, collection)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n collection.natural_name.encode())\n setattr(self, name, collection)\n return collection\n\n def create_data(self, name='data', position=None, **kwargs):\n data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,\n **kwargs)\n if position is None:\n self._items.append(data)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n data.natural_name.encode())\n else:\n self._items.insert(position, data)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n data.natural_name.encode())\n setattr(self, name, data)\n return data\n\n def index(self):\n raise NotImplementedError\n\n def flush(self):\n for item in self._items:\n item.flush()\n self.file.flush()\n\n def save(self, filepath=None, 
verbose=True):\n # TODO: documentation\n self.flush() # ensure all changes are written to file\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif len(os.path.basename(filepath).split('.')) == 1:\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n shutil.copyfile(src=self.filepath, dst=filepath)\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/collection/_collection.py"}], "after_files": [{"content": "\"\"\"Collection.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\nimport shutil\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import data as wt_data\nfrom .._base import Group\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Collection']\n\n\n# --- classes -------------------------------------------------------------------------------------\n\n\nclass Collection(Group):\n \"\"\"Nestable Collection of Data objects.\"\"\"\n class_name = 'Collection'\n\n def __iter__(self):\n self.__n = 0\n return self\n\n def __len__(self):\n return len(self.item_names)\n\n def __next__(self):\n if self.__n < len(self):\n out = self.item_names[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n return out\n\n def __repr__(self):\n return '<WrightTools.Collection \\'{0}\\' {1} at {2}>'.format(self.natural_name,\n self.item_names,\n '::'.join([self.filepath,\n self.name]))\n\n def __getitem__(self, key):\n if isinstance(key, int):\n key = self.item_names[key]\n out = h5py.Group.__getitem__(self, key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Data':\n return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __setitem__(self, key, value):\n raise NotImplementedError\n\n @property\n def item_names(self):\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return [s.decode() for s in self.attrs['item_names']]\n\n def create_collection(self, name='collection', position=None, **kwargs):\n collection = Collection(filepath=self.filepath, parent=self.name, name=name,\n edit_local=True, **kwargs)\n if position is None:\n self._items.append(collection)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n collection.natural_name.encode())\n else:\n self._items.insert(position, collection)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n collection.natural_name.encode())\n setattr(self, name, collection)\n return collection\n\n def create_data(self, name='data', position=None, **kwargs):\n data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,\n **kwargs)\n if position is None:\n self._items.append(data)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n data.natural_name.encode())\n else:\n self._items.insert(position, data)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n data.natural_name.encode())\n setattr(self, name, data)\n return data\n\n def index(self):\n raise NotImplementedError\n\n def flush(self):\n for item in self._items:\n item.flush()\n self.file.flush()\n\n def save(self, 
filepath=None, verbose=True):\n # TODO: documentation\n self.flush() # ensure all changes are written to file\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif len(os.path.basename(filepath).split('.')) == 1:\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n shutil.copyfile(src=self.filepath, dst=filepath)\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/collection/_collection.py"}]} | 1,387 | 109 |
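A minimal sketch of the patched iteration behaviour for the record above (assumes the package exposes `Collection` at the top level and can create a new root collection without an explicit filepath; both are assumptions of the sketch):

```python
import WrightTools as wt

col = wt.Collection(name="root")
col.create_data(name="sweep")
col.create_collection(name="processed")

# With the patch, __next__ returns col.item_names[i], so iterating yields the
# stored names (strings) rather than the child Data/Collection objects.
assert list(col) == col.item_names  # e.g. ["sweep", "processed"]
```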
gh_patches_debug_27078 | rasdani/github-patches | git_diff | dotkom__onlineweb4-524 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
(User creation) Some fields are required that should not be
Address, zip and phone should not be required. People will complain.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import datetime
4 import re
5
6 from django import forms
7 from django.contrib import auth
8 from django.utils.translation import ugettext as _
9
10 from apps.authentication.models import OnlineUser as User, Email
11
12 class LoginForm(forms.Form):
13 username = forms.CharField(widget=forms.TextInput(), label=_("Brukernavn"), max_length=50)
14 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Passord"))
15 user = None
16
17 def clean(self):
18 if self._errors:
19 return
20
21 user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])
22
23 if user:
24 if user.is_active:
25 self.user = user
26 else:
27 self._errors['username'] = self.error_class([_(u"Din konto er ikke aktiv. Forsøk gjenoppretning av passord.")])
28 else:
29 self._errors['username'] = self.error_class([_(u"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.")])
30 return self.cleaned_data
31
32 def login(self, request):
33 try:
34 User.objects.get(username=request.POST['username'])
35 except:
36 return False
37 if self.is_valid():
38 auth.login(request, self.user)
39 request.session.set_expiry(0)
40 return True
41 return False
42
43 class RegisterForm(forms.Form):
44 username = forms.CharField(label=_("Brukernavn"), max_length=20)
45 first_name = forms.CharField(label=_("Fornavn"), max_length=50)
46 last_name = forms.CharField(label=_("Etternavn"), max_length=50)
47 email = forms.EmailField(label=_("Epost"), max_length=50)
48 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Passord"))
49 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Gjenta passord"))
50 address = forms.CharField(label=_("Adresse"), max_length=50)
51 zip_code = forms.CharField(label=_("Postnummer"), max_length=4)
52 phone = forms.CharField(label=_("Telefon"), max_length=20)
53
54 def clean(self):
55 super(RegisterForm, self).clean()
56 if self.is_valid():
57 cleaned_data = self.cleaned_data
58
59 # Check passwords
60 if cleaned_data['password'] != cleaned_data['repeat_password']:
61 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
62
63 # Check username
64 username = cleaned_data['username']
65 if User.objects.filter(username=username).count() > 0:
66 self._errors['username'] = self.error_class([_(u"Brukernavnet er allerede registrert.")])
67 if not re.match("^[a-zA-Z0-9_-]+$", username):
68 self._errors['username'] = self.error_class([_(u"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _")])
69
70 # Check email
71 email = cleaned_data['email']
72 if Email.objects.filter(email=email).count() > 0:
73 self._errors['email'] = self.error_class([_(u"Det fins allerede en bruker med denne epostadressen.")])
74
75 # ZIP code digits only
76 zip_code = cleaned_data['zip_code']
77 if len(zip_code) != 4 or not zip_code.isdigit():
78 self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
79
80 return cleaned_data
81
82 class RecoveryForm(forms.Form):
83 email = forms.EmailField(label="Email", max_length=50)
84
85 class ChangePasswordForm(forms.Form):
86 new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Nytt passord"))
87 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Gjenta passord"))
88
89 def clean(self):
90 super(ChangePasswordForm, self).clean()
91 if self.is_valid():
92 cleaned_data = self.cleaned_data
93
94 # Check passwords
95 if cleaned_data['new_password'] != cleaned_data['repeat_password']:
96 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
97
98 return cleaned_data
99
100
101 class NewEmailForm(forms.Form):
102 new_email = forms.EmailField(_(u"ny epostadresse"))
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py
--- a/apps/authentication/forms.py
+++ b/apps/authentication/forms.py
@@ -47,9 +47,9 @@
email = forms.EmailField(label=_("Epost"), max_length=50)
password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Passord"))
repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Gjenta passord"))
- address = forms.CharField(label=_("Adresse"), max_length=50)
- zip_code = forms.CharField(label=_("Postnummer"), max_length=4)
- phone = forms.CharField(label=_("Telefon"), max_length=20)
+ address = forms.CharField(label=_("Adresse"), max_length=50, required=False)
+ zip_code = forms.CharField(label=_("Postnummer"), max_length=4, required=False)
+ phone = forms.CharField(label=_("Telefon"), max_length=20, required=False)
def clean(self):
super(RegisterForm, self).clean()
@@ -74,8 +74,9 @@
# ZIP code digits only
zip_code = cleaned_data['zip_code']
- if len(zip_code) != 4 or not zip_code.isdigit():
- self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
+ if len(zip_code) != 0:
+ if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():
+ self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
return cleaned_data
| {"golden_diff": "diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py\n--- a/apps/authentication/forms.py\n+++ b/apps/authentication/forms.py\n@@ -47,9 +47,9 @@\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n- address = forms.CharField(label=_(\"Adresse\"), max_length=50)\n- zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4)\n- phone = forms.CharField(label=_(\"Telefon\"), max_length=20)\n+ address = forms.CharField(label=_(\"Adresse\"), max_length=50, required=False)\n+ zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4, required=False)\n+ phone = forms.CharField(label=_(\"Telefon\"), max_length=20, required=False)\n \n def clean(self):\n super(RegisterForm, self).clean()\n@@ -74,8 +74,9 @@\n \n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n- if len(zip_code) != 4 or not zip_code.isdigit():\n- self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n+ if len(zip_code) != 0:\n+ if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():\n+ self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n \n return cleaned_data\n", "issue": "(User creation) Some fields are required that should not be\nAddress, zip and phone should not be required. People will complain.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User, Email\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"Brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"Fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"Etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n address = forms.CharField(label=_(\"Adresse\"), max_length=50)\n zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4)\n phone = forms.CharField(label=_(\"Telefon\"), max_length=20)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. 
Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 4 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User, Email\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"Brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"Fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"Etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n address = forms.CharField(label=_(\"Adresse\"), max_length=50, required=False)\n zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4, required=False)\n phone = forms.CharField(label=_(\"Telefon\"), max_length=20, required=False)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 0:\n if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}]} | 1,450 | 364 |
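A minimal sketch of the patched form for the record above (assumes the onlineweb4 project with the fix applied and an already configured Django environment so the module imports cleanly):

```python
from apps.authentication.forms import RegisterForm

form = RegisterForm()
# After the patch these three fields report required=False, so leaving them
# blank no longer blocks registration; the zip-code check only runs when a
# value is actually supplied.
for name in ("address", "zip_code", "phone"):
    print(name, form.fields[name].required)
```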
gh_patches_debug_2387 | rasdani/github-patches | git_diff | ivy-llc__ivy-23027 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
solve
### Bug Explanation
The `paddle.linalg.solve` tests are failing. The tests and the front-end function are not implemented properly.
The test should generate two matrices of shape [ *, M, M ] and [ *, M, K ] but the written test just generates two matrices of the same shape, and function arguments are mismatched returning
`TypeError: solve() got an unexpected keyword argument 'x'`
### Steps to Reproduce Bug
Run : `pytest ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_linalg.py::test_paddle_solve`
### Environment
MacOs : 13.5
### Ivy Version
0.0.0.0.0
### Backend
- [ ] NumPy
- [ ] TensorFlow
- [ ] PyTorch
- [ ] JAX
### Device
Mac M1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/linalg.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle import promote_types_of_paddle_inputs
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @with_supported_dtypes({"2.4.1 and above": ("int64",)}, "paddle")
11 @to_ivy_arrays_and_back
12 def bincount(x, weights=None, minlength=0, name=None):
13 return ivy.bincount(x, weights=weights, minlength=minlength)
14
15
16 # bmm
17 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
18 @to_ivy_arrays_and_back
19 def bmm(x, y, transpose_x=False, transpose_y=False, name=None):
20 if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:
21 raise RuntimeError("input must be 3D matrices")
22 x, y = promote_types_of_paddle_inputs(x, y)
23 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
24
25
26 # cholesky
27 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
28 @to_ivy_arrays_and_back
29 def cholesky(x, /, *, upper=False, name=None):
30 return ivy.cholesky(x, upper=upper)
31
32
33 # cholesky_solve
34 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
35 @to_ivy_arrays_and_back
36 def cholesky_solve(x, y, /, *, upper=False, name=None):
37 if upper:
38 y = ivy.matrix_transpose(y)
39 Y = ivy.solve(y, x)
40 return ivy.solve(ivy.matrix_transpose(y), Y)
41
42
43 # cond
44 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
45 @to_ivy_arrays_and_back
46 def cond(x, p=None, name=None):
47 ret = ivy.cond(x, p=p, out=name)
48 if ret.shape == ():
49 ret = ret.reshape((1,))
50 return ret
51
52
53 @with_supported_dtypes(
54 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
55 )
56 @to_ivy_arrays_and_back
57 def cross(x, y, /, *, axis=9, name=None):
58 x, y = promote_types_of_paddle_inputs(x, y)
59 return ivy.cross(x, y, axis=axis)
60
61
62 @with_supported_dtypes({"2.4.1 and above": ("float64", "float32")}, "paddle")
63 @to_ivy_arrays_and_back
64 def dist(x, y, p=2):
65 ret = ivy.vector_norm(ivy.subtract(x, y), ord=p)
66 return ivy.reshape(ret, (1,))
67
68
69 # dot
70 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
71 @to_ivy_arrays_and_back
72 def dot(x, y, name=None):
73 x, y = promote_types_of_paddle_inputs(x, y)
74 out = ivy.multiply(x, y)
75 return ivy.sum(out, axis=ivy.get_num_dims(x) - 1, keepdims=False)
76
77
78 # eig
79 @to_ivy_arrays_and_back
80 def eig(x, name=None):
81 return ivy.eig(x)
82
83
84 # eigh
85 @to_ivy_arrays_and_back
86 def eigh(x, UPLO="L", name=None):
87 return ivy.eigh(x, UPLO=UPLO)
88
89
90 # eigvals
91 @to_ivy_arrays_and_back
92 def eigvals(x, name=None):
93 return ivy.eigvals(x)
94
95
96 # eigvalsh
97 @to_ivy_arrays_and_back
98 def eigvalsh(x, UPLO="L", name=None):
99 return ivy.eigvalsh(x, UPLO=UPLO)
100
101
102 # matmul
103 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
104 @to_ivy_arrays_and_back
105 def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
106 x, y = promote_types_of_paddle_inputs(x, y)
107 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
108
109
110 # matrix_power
111 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
112 @to_ivy_arrays_and_back
113 def matrix_power(x, n, name=None):
114 return ivy.matrix_power(x, n)
115
116
117 # norm
118 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
119 @to_ivy_arrays_and_back
120 def norm(x, p="fro", axis=None, keepdim=False, name=None):
121 if axis is None and p is not None:
122 if p == "fro":
123 p = 2
124 ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)
125 if keepdim:
126 ret = ret.reshape([1] * len(x.shape))
127 return ret
128
129 if isinstance(axis, tuple):
130 axis = list(axis)
131 if isinstance(axis, list) and len(axis) == 1:
132 axis = axis[0]
133
134 if isinstance(axis, int):
135 if p == "fro":
136 p = 2
137 if p in [0, 1, 2, ivy.inf, -ivy.inf]:
138 ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)
139 elif isinstance(p, (int, float)):
140 ret = ivy.pow(
141 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
142 float(1.0 / p),
143 )
144
145 elif isinstance(axis, list) and len(axis) == 2:
146 if p == 0:
147 raise ValueError
148 elif p == 1:
149 ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)
150 elif p == 2 or p == "fro":
151 ret = ivy.matrix_norm(x, ord="fro", axis=axis, keepdims=keepdim)
152 elif p == ivy.inf:
153 ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)
154 elif p == -ivy.inf:
155 ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)
156 elif isinstance(p, (int, float)) and p > 0:
157 ret = ivy.pow(
158 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
159 float(1.0 / p),
160 )
161 else:
162 raise ValueError
163
164 else:
165 raise ValueError
166
167 return ret
168
169
170 # pinv
171 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
172 @to_ivy_arrays_and_back
173 def pinv(x, rcond=1e-15, hermitian=False, name=None):
174 # TODO: Add hermitian functionality
175 return ivy.pinv(x, rtol=rcond)
176
177
178 # qr
179 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
180 @to_ivy_arrays_and_back
181 def qr(x, mode="reduced", name=None):
182 return ivy.qr(x, mode=mode)
183
184
185 # solve
186 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
187 @to_ivy_arrays_and_back
188 def solve(x1, x2, name=None):
189 return ivy.solve(x1, x2)
190
191
192 # transpose
193 @with_unsupported_dtypes({"2.5.1 and below": ("uint8", "int8", "int16")}, "paddle")
194 @to_ivy_arrays_and_back
195 def transpose(x, perm, name=None):
196 return ivy.permute_dims(x, axes=perm)
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py
--- a/ivy/functional/frontends/paddle/tensor/linalg.py
+++ b/ivy/functional/frontends/paddle/tensor/linalg.py
@@ -183,10 +183,10 @@
# solve
-@with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
+@with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
-def solve(x1, x2, name=None):
- return ivy.solve(x1, x2)
+def solve(x, y, name=None):
+ return ivy.solve(x, y)
# transpose
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py\n--- a/ivy/functional/frontends/paddle/tensor/linalg.py\n+++ b/ivy/functional/frontends/paddle/tensor/linalg.py\n@@ -183,10 +183,10 @@\n \n \n # solve\n-@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n @to_ivy_arrays_and_back\n-def solve(x1, x2, name=None):\n- return ivy.solve(x1, x2)\n+def solve(x, y, name=None):\n+ return ivy.solve(x, y)\n \n \n # transpose\n", "issue": "solve\n### Bug Explanation\r\n\r\nThe `paddle.linalg.solve` tests are failing. The tests and the front-end function are not implemented properly.\r\nThe test should generate two matrices of shape [ *, M, M ] and [ *, M, K ] but the written test just generates two matrices of the same shape, and function arguments are mismatched returning \r\n`TypeError: solve() got an unexpected keyword argument 'x'`\r\n\r\n### Steps to Reproduce Bug\r\n\r\nRun : `pytest ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_linalg.py::test_paddle_solve`\r\n\r\n### Environment\r\n\r\nMacOs : 13.5\r\n\r\n### Ivy Version\r\n\r\n 0.0.0.0.0\r\n\r\n### Backend\r\n\r\n- [ ] NumPy\r\n- [ ] TensorFlow\r\n- [ ] PyTorch\r\n- [ ] JAX\r\n\r\n### Device\r\n\r\nMac M1\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"int64\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bincount(x, weights=None, minlength=0, name=None):\n return ivy.bincount(x, weights=weights, minlength=minlength)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# cholesky\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# cholesky_solve\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky_solve(x, y, /, *, upper=False, name=None):\n if upper:\n y = ivy.matrix_transpose(y)\n Y = ivy.solve(y, x)\n return ivy.solve(ivy.matrix_transpose(y), Y)\n\n\n# cond\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cond(x, p=None, name=None):\n ret = ivy.cond(x, p=p, out=name)\n if ret.shape == ():\n ret = ret.reshape((1,))\n return ret\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"float64\", \"float32\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dist(x, y, p=2):\n ret = ivy.vector_norm(ivy.subtract(x, y), 
ord=p)\n return ivy.reshape(ret, (1,))\n\n\n# dot\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dot(x, y, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n out = ivy.multiply(x, y)\n return ivy.sum(out, axis=ivy.get_num_dims(x) - 1, keepdims=False)\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# matrix_power\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matrix_power(x, n, name=None):\n return ivy.matrix_power(x, n)\n\n\n# norm\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n return ret\n\n\n# pinv\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# qr\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef qr(x, mode=\"reduced\", name=None):\n return ivy.qr(x, mode=mode)\n\n\n# solve\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef solve(x1, x2, name=None):\n return ivy.solve(x1, x2)\n\n\n# transpose\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"uint8\", \"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef transpose(x, perm, name=None):\n return ivy.permute_dims(x, axes=perm)\n", "path": 
"ivy/functional/frontends/paddle/tensor/linalg.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"int64\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bincount(x, weights=None, minlength=0, name=None):\n return ivy.bincount(x, weights=weights, minlength=minlength)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# cholesky\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# cholesky_solve\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky_solve(x, y, /, *, upper=False, name=None):\n if upper:\n y = ivy.matrix_transpose(y)\n Y = ivy.solve(y, x)\n return ivy.solve(ivy.matrix_transpose(y), Y)\n\n\n# cond\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cond(x, p=None, name=None):\n ret = ivy.cond(x, p=p, out=name)\n if ret.shape == ():\n ret = ret.reshape((1,))\n return ret\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"float64\", \"float32\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dist(x, y, p=2):\n ret = ivy.vector_norm(ivy.subtract(x, y), ord=p)\n return ivy.reshape(ret, (1,))\n\n\n# dot\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dot(x, y, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n out = ivy.multiply(x, y)\n return ivy.sum(out, axis=ivy.get_num_dims(x) - 1, keepdims=False)\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# matrix_power\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matrix_power(x, n, name=None):\n return ivy.matrix_power(x, n)\n\n\n# norm\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, 
\"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n return ret\n\n\n# pinv\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# qr\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef qr(x, mode=\"reduced\", name=None):\n return ivy.qr(x, mode=mode)\n\n\n# solve\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef solve(x, y, name=None):\n return ivy.solve(x, y)\n\n\n# transpose\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"uint8\", \"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef transpose(x, perm, name=None):\n return ivy.permute_dims(x, axes=perm)\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}]} | 2,826 | 201 |
gh_patches_debug_4249 | rasdani/github-patches | git_diff | pypa__pip-8791 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
restore a broken slug anchor in user guide
ping @uranusjr , @di
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/html/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # pip documentation build configuration file, created by
4 # sphinx-quickstart on Tue Apr 22 22:08:49 2008
5 #
6 # This file is execfile()d with the current directory set to its containing dir
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import glob
15 import os
16 import re
17 import sys
18
19 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
20
21 docs_dir = os.path.dirname(os.path.dirname(__file__))
22 # If extensions (or modules to document with autodoc) are in another directory,
23 # add these directories to sys.path here. If the directory is relative to the
24 # documentation root, use os.path.abspath to make it absolute, like shown here.
25 sys.path.insert(0, docs_dir)
26 # sys.path.append(os.path.join(os.path.dirname(__file__), '../'))
27
28 # -- General configuration ----------------------------------------------------
29
30 # Add any Sphinx extension module names here, as strings. They can be
31 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
32 # extensions = ['sphinx.ext.autodoc']
33 extensions = ['sphinx.ext.extlinks', 'pip_sphinxext', 'sphinx.ext.intersphinx']
34
35 # intersphinx
36 intersphinx_cache_limit = 0
37 intersphinx_mapping = {
38 'pypug': ('https://packaging.python.org/', None),
39 'pypa': ('https://www.pypa.io/en/latest/', None),
40 }
41
42
43 # Add any paths that contain templates here, relative to this directory.
44 templates_path = []
45
46 # The suffix of source filenames.
47 source_suffix = '.rst'
48
49 # The encoding of source files.
50 # source_encoding = 'utf-8'
51
52 # The master toctree document.
53 master_doc = 'index'
54
55 # General information about the project.
56 project = 'pip'
57 copyright = '2008-2020, PyPA'
58
59 # The version info for the project you're documenting, acts as replacement for
60 # |version| and |release|, also used in various other places throughout the
61 # built documents.
62 #
63 # The short X.Y version.
64
65 version = release = 'dev'
66
67 # Readthedocs seems to install pip as an egg (via setup.py install) which
68 # is somehow resulting in "import pip" picking up an older copy of pip.
69 # Rather than trying to force RTD to install pip properly, we'll simply
70 # read the version direct from the __init__.py file. (Yes, this is
71 # fragile, but it works...)
72
73 pip_init = os.path.join(docs_dir, '..', 'src', 'pip', '__init__.py')
74 with open(pip_init) as f:
75 for line in f:
76 m = re.match(r'__version__ = "(.*)"', line)
77 if m:
78 __version__ = m.group(1)
79 # The short X.Y version.
80 version = '.'.join(__version__.split('.')[:2])
81 # The full version, including alpha/beta/rc tags.
82 release = __version__
83 break
84
85 # We have this here because readthedocs plays tricks sometimes and there seems
86 # to be a heisenbug, related to the version of pip discovered. This is here to
87 # help debug that if someone decides to do that in the future.
88 print(version)
89
90 # The language for content autogenerated by Sphinx. Refer to documentation
91 # for a list of supported languages.
92 # language = None
93
94 # There are two options for replacing |today|: either, you set today to some
95 # non-false value, then it is used:
96 # today = ''
97 # Else, today_fmt is used as the format for a strftime call.
98 today_fmt = '%B %d, %Y'
99
100 # List of documents that shouldn't be included in the build.
101 # unused_docs = []
102
103 # List of directories, relative to source directory, that shouldn't be searched
104 # for source files.
105 exclude_patterns = ['build/']
106
107 # The reST default role (used for this markup: `text`) to use for all documents
108 # default_role = None
109
110 # If true, '()' will be appended to :func: etc. cross-reference text.
111 # add_function_parentheses = True
112
113 # If true, the current module name will be prepended to all description
114 # unit titles (such as .. function::).
115 # add_module_names = True
116
117 # If true, sectionauthor and moduleauthor directives will be shown in the
118 # output. They are ignored by default.
119 # show_authors = False
120
121 # The name of the Pygments (syntax highlighting) style to use.
122 pygments_style = 'sphinx'
123
124 # A list of ignored prefixes for module index sorting.
125 # modindex_common_prefix = []
126
127 extlinks = {
128 'issue': ('https://github.com/pypa/pip/issues/%s', '#'),
129 'pull': ('https://github.com/pypa/pip/pull/%s', 'PR #'),
130 'pypi': ('https://pypi.org/project/%s/', ''),
131 }
132
133 # -- Options for HTML output --------------------------------------------------
134
135 # The theme to use for HTML and HTML Help pages. Major themes that come with
136 # Sphinx are currently 'default' and 'sphinxdoc'.
137 html_theme = "pypa_theme"
138
139 # Theme options are theme-specific and customize the look and feel of a theme
140 # further. For a list of options available for each theme, see the
141 # documentation.
142 html_theme_options = {
143 'collapsiblesidebar': True,
144 'externalrefs': True,
145 'navigation_depth': 3,
146 'issues_url': 'https://github.com/pypa/pip/issues'
147 }
148
149 # Add any paths that contain custom themes here, relative to this directory.
150
151 # The name for this set of Sphinx documents. If None, it defaults to
152 # "<project> v<release> documentation".
153 # html_title = None
154
155 # A shorter title for the navigation bar. Default is the same as html_title.
156 # html_short_title = None
157
158 # The name of an image file (relative to this directory) to place at the top
159 # of the sidebar.
160 # html_logo = '_static/piplogo.png'
161
162 # The name of an image file (within the static path) to use as favicon of the
163 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
164 # pixels large.
165 # html_favicon = 'favicon.png'
166
167 # Add any paths that contain custom static files (such as style sheets) here,
168 # relative to this directory. They are copied after the builtin static files,
169 # so a file named "default.css" will overwrite the builtin "default.css".
170 html_static_path = []
171
172 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
173 # using the given strftime format.
174 html_last_updated_fmt = '%b %d, %Y'
175
176 # If true, the Docutils Smart Quotes transform (originally based on
177 # SmartyPants) will be used to convert characters like quotes and dashes
178 # to typographically correct entities. The default is True.
179 smartquotes = True
180
181 # This string, for use with Docutils 0.14 or later, customizes the
182 # SmartQuotes transform. The default of "qDe" converts normal quote
183 # characters ('"' and "'"), en and em dashes ("--" and "---"), and
184 # ellipses "...".
185 # For now, we disable the conversion of dashes so that long options
186 # like "--find-links" won't render as "-find-links" if included in the
187 # text in places where monospaced type can't be used. For example, backticks
188 # can't be used inside roles like :ref:`--no-index <--no-index>` because
189 # of nesting.
190 smartquotes_action = "qe"
191
192 # Custom sidebar templates, maps document names to template names.
193 html_sidebars = {
194 '**': ['localtoc.html', 'relations.html'],
195 'index': ['localtoc.html']
196 }
197
198 # Additional templates that should be rendered to pages, maps page names to
199 # template names.
200 # html_additional_pages = {}
201
202 # If false, no module index is generated.
203 html_use_modindex = False
204
205 # If false, no index is generated.
206 html_use_index = False
207
208 # If true, the index is split into individual pages for each letter.
209 # html_split_index = False
210
211 # If true, links to the reST sources are added to the pages.
212 html_show_sourcelink = False
213
214 # If true, an OpenSearch description file will be output, and all pages will
215 # contain a <link> tag referring to it. The value of this option must be the
216 # base URL from which the finished HTML is served.
217 # html_use_opensearch = ''
218
219 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
220 # html_file_suffix = ''
221
222 # Output file base name for HTML help builder.
223 htmlhelp_basename = 'pipdocs'
224
225
226 # -- Options for LaTeX output -------------------------------------------------
227
228 # The paper size ('letter' or 'a4').
229 # latex_paper_size = 'letter'
230
231 # The font size ('10pt', '11pt' or '12pt').
232 # latex_font_size = '10pt'
233
234 # Grouping the document tree into LaTeX files. List of tuples
235 # (source start file, target name, title, author, documentclass [howto/manual])
236 latex_documents = [
237 (
238 'index',
239 'pip.tex',
240 u'pip Documentation',
241 u'pip developers',
242 'manual',
243 ),
244 ]
245
246 # The name of an image file (relative to this directory) to place at the top of
247 # the title page.
248 # latex_logo = None
249
250 # For "manual" documents, if this is true, then toplevel headings are parts,
251 # not chapters.
252 # latex_use_parts = False
253
254 # Additional stuff for the LaTeX preamble.
255 # latex_preamble = ''
256
257 # Documents to append as an appendix to all manuals.
258 # latex_appendices = []
259
260 # If false, no module index is generated.
261 # latex_use_modindex = True
262
263 # -- Options for Manual Pages -------------------------------------------------
264
265 # List of manual pages generated
266 man_pages = [
267 (
268 'index',
269 'pip',
270 u'package manager for Python packages',
271 u'pip developers',
272 1
273 )
274 ]
275
276
277 def to_document_name(path, base_dir):
278 """Convert a provided path to a Sphinx "document name".
279 """
280 relative_path = os.path.relpath(path, base_dir)
281 root, _ = os.path.splitext(relative_path)
282 return root.replace(os.sep, '/')
283
284
285 # Here, we crawl the entire man/commands/ directory and list every file with
286 # appropriate name and details
287 man_dir = os.path.join(docs_dir, 'man')
288 raw_subcommands = glob.glob(os.path.join(man_dir, 'commands/*.rst'))
289 if not raw_subcommands:
290 raise FileNotFoundError(
291 'The individual subcommand manpages could not be found!'
292 )
293 for fname in raw_subcommands:
294 fname_base = to_document_name(fname, man_dir)
295 outname = 'pip-' + fname_base.split('/')[1]
296 description = u'description of {} command'.format(
297 outname.replace('-', ' ')
298 )
299
300 man_pages.append((fname_base, outname, description, u'pip developers', 1))
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/html/conf.py b/docs/html/conf.py
--- a/docs/html/conf.py
+++ b/docs/html/conf.py
@@ -143,7 +143,9 @@
'collapsiblesidebar': True,
'externalrefs': True,
'navigation_depth': 3,
- 'issues_url': 'https://github.com/pypa/pip/issues'
+ 'issues_url': 'https://github.com/pypa/pip/issues',
+ 'codebgcolor': '#eeffcc',
+ 'codetextcolor': '#333333',
}
# Add any paths that contain custom themes here, relative to this directory.
| {"golden_diff": "diff --git a/docs/html/conf.py b/docs/html/conf.py\n--- a/docs/html/conf.py\n+++ b/docs/html/conf.py\n@@ -143,7 +143,9 @@\n 'collapsiblesidebar': True,\n 'externalrefs': True,\n 'navigation_depth': 3,\n- 'issues_url': 'https://github.com/pypa/pip/issues'\n+ 'issues_url': 'https://github.com/pypa/pip/issues',\n+ 'codebgcolor': '#eeffcc',\n+ 'codetextcolor': '#333333',\n }\n \n # Add any paths that contain custom themes here, relative to this directory.\n", "issue": "restore a broken slug anchor in user guide\nping @uranusjr , @di \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# pip documentation build configuration file, created by\n# sphinx-quickstart on Tue Apr 22 22:08:49 2008\n#\n# This file is execfile()d with the current directory set to its containing dir\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport glob\nimport os\nimport re\nimport sys\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, docs_dir)\n# sys.path.append(os.path.join(os.path.dirname(__file__), '../'))\n\n# -- General configuration ----------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\n# extensions = ['sphinx.ext.autodoc']\nextensions = ['sphinx.ext.extlinks', 'pip_sphinxext', 'sphinx.ext.intersphinx']\n\n# intersphinx\nintersphinx_cache_limit = 0\nintersphinx_mapping = {\n 'pypug': ('https://packaging.python.org/', None),\n 'pypa': ('https://www.pypa.io/en/latest/', None),\n}\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'pip'\ncopyright = '2008-2020, PyPA'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n\nversion = release = 'dev'\n\n# Readthedocs seems to install pip as an egg (via setup.py install) which\n# is somehow resulting in \"import pip\" picking up an older copy of pip.\n# Rather than trying to force RTD to install pip properly, we'll simply\n# read the version direct from the __init__.py file. (Yes, this is\n# fragile, but it works...)\n\npip_init = os.path.join(docs_dir, '..', 'src', 'pip', '__init__.py')\nwith open(pip_init) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = '.'.join(__version__.split('.')[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n\n# We have this here because readthedocs plays tricks sometimes and there seems\n# to be a heisenbug, related to the version of pip discovered. 
This is here to\n# help debug that if someone decides to do that in the future.\nprint(version)\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\ntoday_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n# unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_patterns = ['build/']\n\n# The reST default role (used for this markup: `text`) to use for all documents\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\nextlinks = {\n 'issue': ('https://github.com/pypa/pip/issues/%s', '#'),\n 'pull': ('https://github.com/pypa/pip/pull/%s', 'PR #'),\n 'pypi': ('https://pypi.org/project/%s/', ''),\n}\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. Major themes that come with\n# Sphinx are currently 'default' and 'sphinxdoc'.\nhtml_theme = \"pypa_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'collapsiblesidebar': True,\n 'externalrefs': True,\n 'navigation_depth': 3,\n 'issues_url': 'https://github.com/pypa/pip/issues'\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = '_static/piplogo.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = 'favicon.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, the Docutils Smart Quotes transform (originally based on\n# SmartyPants) will be used to convert characters like quotes and dashes\n# to typographically correct entities. The default is True.\nsmartquotes = True\n\n# This string, for use with Docutils 0.14 or later, customizes the\n# SmartQuotes transform. 
The default of \"qDe\" converts normal quote\n# characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and\n# ellipses \"...\".\n# For now, we disable the conversion of dashes so that long options\n# like \"--find-links\" won't render as \"-find-links\" if included in the\n# text in places where monospaced type can't be used. For example, backticks\n# can't be used inside roles like :ref:`--no-index <--no-index>` because\n# of nesting.\nsmartquotes_action = \"qe\"\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': ['localtoc.html', 'relations.html'],\n 'index': ['localtoc.html']\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\nhtml_use_modindex = False\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'pipdocs'\n\n\n# -- Options for LaTeX output -------------------------------------------------\n\n# The paper size ('letter' or 'a4').\n# latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n# latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual])\nlatex_documents = [\n (\n 'index',\n 'pip.tex',\n u'pip Documentation',\n u'pip developers',\n 'manual',\n ),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n# latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_use_modindex = True\n\n# -- Options for Manual Pages -------------------------------------------------\n\n# List of manual pages generated\nman_pages = [\n (\n 'index',\n 'pip',\n u'package manager for Python packages',\n u'pip developers',\n 1\n )\n]\n\n\ndef to_document_name(path, base_dir):\n \"\"\"Convert a provided path to a Sphinx \"document name\".\n \"\"\"\n relative_path = os.path.relpath(path, base_dir)\n root, _ = os.path.splitext(relative_path)\n return root.replace(os.sep, '/')\n\n\n# Here, we crawl the entire man/commands/ directory and list every file with\n# appropriate name and details\nman_dir = os.path.join(docs_dir, 'man')\nraw_subcommands = glob.glob(os.path.join(man_dir, 'commands/*.rst'))\nif not raw_subcommands:\n raise FileNotFoundError(\n 'The individual subcommand manpages could not be found!'\n )\nfor fname in raw_subcommands:\n fname_base = to_document_name(fname, man_dir)\n outname = 'pip-' + fname_base.split('/')[1]\n description = u'description of {} command'.format(\n outname.replace('-', ' ')\n )\n\n man_pages.append((fname_base, outname, description, u'pip developers', 1))\n", "path": "docs/html/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# pip documentation build configuration file, created by\n# sphinx-quickstart on Tue Apr 22 22:08:49 2008\n#\n# This file is execfile()d with the current directory set to its containing dir\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport glob\nimport os\nimport re\nimport sys\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, docs_dir)\n# sys.path.append(os.path.join(os.path.dirname(__file__), '../'))\n\n# -- General configuration ----------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\n# extensions = ['sphinx.ext.autodoc']\nextensions = ['sphinx.ext.extlinks', 'pip_sphinxext', 'sphinx.ext.intersphinx']\n\n# intersphinx\nintersphinx_cache_limit = 0\nintersphinx_mapping = {\n 'pypug': ('https://packaging.python.org/', None),\n 'pypa': ('https://www.pypa.io/en/latest/', None),\n}\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'pip'\ncopyright = '2008-2020, PyPA'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n\nversion = release = 'dev'\n\n# Readthedocs seems to install pip as an egg (via setup.py install) which\n# is somehow resulting in \"import pip\" picking up an older copy of pip.\n# Rather than trying to force RTD to install pip properly, we'll simply\n# read the version direct from the __init__.py file. (Yes, this is\n# fragile, but it works...)\n\npip_init = os.path.join(docs_dir, '..', 'src', 'pip', '__init__.py')\nwith open(pip_init) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = '.'.join(__version__.split('.')[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n\n# We have this here because readthedocs plays tricks sometimes and there seems\n# to be a heisenbug, related to the version of pip discovered. This is here to\n# help debug that if someone decides to do that in the future.\nprint(version)\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\ntoday_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n# unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_patterns = ['build/']\n\n# The reST default role (used for this markup: `text`) to use for all documents\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\nextlinks = {\n 'issue': ('https://github.com/pypa/pip/issues/%s', '#'),\n 'pull': ('https://github.com/pypa/pip/pull/%s', 'PR #'),\n 'pypi': ('https://pypi.org/project/%s/', ''),\n}\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
Major themes that come with\n# Sphinx are currently 'default' and 'sphinxdoc'.\nhtml_theme = \"pypa_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'collapsiblesidebar': True,\n 'externalrefs': True,\n 'navigation_depth': 3,\n 'issues_url': 'https://github.com/pypa/pip/issues',\n 'codebgcolor': '#eeffcc',\n 'codetextcolor': '#333333',\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = '_static/piplogo.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = 'favicon.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, the Docutils Smart Quotes transform (originally based on\n# SmartyPants) will be used to convert characters like quotes and dashes\n# to typographically correct entities. The default is True.\nsmartquotes = True\n\n# This string, for use with Docutils 0.14 or later, customizes the\n# SmartQuotes transform. The default of \"qDe\" converts normal quote\n# characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and\n# ellipses \"...\".\n# For now, we disable the conversion of dashes so that long options\n# like \"--find-links\" won't render as \"-find-links\" if included in the\n# text in places where monospaced type can't be used. For example, backticks\n# can't be used inside roles like :ref:`--no-index <--no-index>` because\n# of nesting.\nsmartquotes_action = \"qe\"\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': ['localtoc.html', 'relations.html'],\n 'index': ['localtoc.html']\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\nhtml_use_modindex = False\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. 
\".xhtml\").\n# html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'pipdocs'\n\n\n# -- Options for LaTeX output -------------------------------------------------\n\n# The paper size ('letter' or 'a4').\n# latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n# latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual])\nlatex_documents = [\n (\n 'index',\n 'pip.tex',\n u'pip Documentation',\n u'pip developers',\n 'manual',\n ),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n# latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_use_modindex = True\n\n# -- Options for Manual Pages -------------------------------------------------\n\n# List of manual pages generated\nman_pages = [\n (\n 'index',\n 'pip',\n u'package manager for Python packages',\n u'pip developers',\n 1\n )\n]\n\n\ndef to_document_name(path, base_dir):\n \"\"\"Convert a provided path to a Sphinx \"document name\".\n \"\"\"\n relative_path = os.path.relpath(path, base_dir)\n root, _ = os.path.splitext(relative_path)\n return root.replace(os.sep, '/')\n\n\n# Here, we crawl the entire man/commands/ directory and list every file with\n# appropriate name and details\nman_dir = os.path.join(docs_dir, 'man')\nraw_subcommands = glob.glob(os.path.join(man_dir, 'commands/*.rst'))\nif not raw_subcommands:\n raise FileNotFoundError(\n 'The individual subcommand manpages could not be found!'\n )\nfor fname in raw_subcommands:\n fname_base = to_document_name(fname, man_dir)\n outname = 'pip-' + fname_base.split('/')[1]\n description = u'description of {} command'.format(\n outname.replace('-', ' ')\n )\n\n man_pages.append((fname_base, outname, description, u'pip developers', 1))\n", "path": "docs/html/conf.py"}]} | 3,569 | 146 |
gh_patches_debug_409 | rasdani/github-patches | git_diff | pypa__pipenv-3424 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NameError: name 'unicode' is not defined
### Issue description
```
[hadoop@ip-192-168-0-84 sales-forecast]$ pipenv install --python 3
Virtualenv already exists!
Removing existing virtualenv...
Creating a virtualenv for this project...
Pipfile: /home/hadoop/sales-forecast/Pipfile
Using /usr/bin/python3.4 (3.4.8) to create virtualenv...
⠼ Creating virtual environment...Using base prefix '/usr'
New python executable in /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP/bin/python3.4
Also creating executable in /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP/bin/python
Installing setuptools, pip, wheel...done.
Running virtualenv with interpreter /usr/bin/python3.4
✔ Successfully created virtual environment!
Virtualenv location: /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP
requirements.txt found, instead of Pipfile! Converting...
✔ Success!
Warning: Your Pipfile now contains pinned versions, if your requirements.txt did.
We recommend updating your Pipfile to specify the "*" version, instead.
Pipfile.lock not found, creating...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
✘ Locking Failed!
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 126, in <module>
main()
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 119, in main
parsed.requirements_dir, parsed.packages)
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 85, in _main
requirements_dir=requirements_dir,
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 69, in resolve
req_dir=requirements_dir
File "/usr/local/lib/python2.7/site-packages/pipenv/utils.py", line 700, in resolve_deps
from .vendor.requirementslib.models.requirements import Requirement
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/__init__.py", line 14, in <module>
from .models.pipfile import Pipfile
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py", line 23, in <module>
from ..environment import MYPY_RUNNING
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py", line 17, in <module>
MYPY_RUNNING = os.environ.get("MYPY_RUNNING", is_type_checking())
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py", line 10, in is_type_checking
from typing import TYPE_CHECKING
File "/usr/local/lib/python2.7/site-packages/typing.py", line 624, in <module>
AnyStr = TypeVar('AnyStr', bytes, unicode)
NameError: name 'unicode' is not defined
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 126, in <module>
main()
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 119, in main
parsed.requirements_dir, parsed.packages)
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 85, in _main
requirements_dir=requirements_dir,
File "/usr/local/lib/python2.7/site-packages/pipenv/resolver.py", line 69, in resolve
req_dir=requirements_dir
File "/usr/local/lib/python2.7/site-packages/pipenv/utils.py", line 700, in resolve_deps
from .vendor.requirementslib.models.requirements import Requirement
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/__init__.py", line 14, in <module>
from .models.pipfile import Pipfile
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py", line 23, in <module>
from ..environment import MYPY_RUNNING
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py", line 17, in <module>
MYPY_RUNNING = os.environ.get("MYPY_RUNNING", is_type_checking())
File "/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py", line 10, in is_type_checking
from typing import TYPE_CHECKING
File "/usr/local/lib/python2.7/site-packages/typing.py", line 624, in <module>
AnyStr = TypeVar('AnyStr', bytes, unicode)
NameError: name 'unicode' is not defined
```
-------------------------------------------------------------------------------
<details><summary>$ pipenv --support</summary>
Pipenv version: `'2018.11.26'`
Pipenv location: `'/usr/local/lib/python2.7/site-packages/pipenv'`
Python location: `'/usr/bin/python2.7'`
Python installations found:
- `3.4.8`: `/usr/bin/python3`
- `2.7.14`: `/usr/bin/python2.7`
- `3.4.8`: `/usr/bin/python3.4m`
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '0',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '4.14.72-68.55.amzn1.x86_64',
'platform_system': 'Linux',
'platform_version': '#1 SMP Fri Sep 28 21:14:54 UTC 2018',
'python_full_version': '2.7.14',
'python_version': '2.7',
'sys_platform': 'linux2'}
```
System environment variables:
- `AWS_PATH`
- `PYTHONDONTWRITEBYTECODE`
- `LESSOPEN`
- `SSH_CLIENT`
- `LOGNAME`
- `USER`
- `HOME`
- `PATH`
- `PS1`
- `LANG`
- `LESS_TERMCAP_se`
- `TERM`
- `SHELL`
- `EC2_AMITOOL_HOME`
- `LESS_TERMCAP_me`
- `LESS_TERMCAP_md`
- `LESS_TERMCAP_mb`
- `HISTSIZE`
- `PYTHONFINDER_IGNORE_UNSUPPORTED`
- `AWS_ELB_HOME`
- `JAVA_HOME`
- `EC2_HOME`
- `LESS_TERMCAP_ue`
- `AWS_AUTO_SCALING_HOME`
- `PIP_PYTHON_PATH`
- `_`
- `AWS_DEFAULT_REGION`
- `SSH_CONNECTION`
- `AWS_CLOUDWATCH_HOME`
- `SSH_TTY`
- `OLDPWD`
- `HOSTNAME`
- `HISTCONTROL`
- `SHLVL`
- `PWD`
- `LESS_TERMCAP_us`
- `PIP_SHIMS_BASE_MODULE`
- `PIP_DISABLE_PIP_VERSION_CHECK`
- `MAIL`
- `LS_COLORS`
Pipenv–specific environment variables:
Debug–specific environment variables:
- `PATH`: `/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin`
- `SHELL`: `/bin/bash`
- `LANG`: `en_US.UTF-8`
- `PWD`: `/home/hadoop/sales-forecast`
---------------------------
Contents of `Pipfile` ('/home/hadoop/sales-forecast/Pipfile'):
```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
numpy = "*"
scipy = "*"
scikit-learn = "==0.19.1"
statsmodels = "*"
xgboost = "*"
pystan = "*"
fbprophet = "*"
matplotlib = "==2.2.3"
seaborn = "*"
redis = "*"
sqlalchemy = "*"
mysqlclient = "*"
pyarrow = "==0.11.1"
hdfs = "*"
hdfs3 = "*"
s3fs = "*"
python-dateutil = "*"
chinesecalendar = "*"
pandas = "*"
h5py = "*"
lxml = "*"
openpyxl = "*"
xlrd = "*"
click = "*"
pathlib2 = "*"
python-dotenv = "*"
environs = "*"
joblib = "*"
tqdm = "*"
dask = {extras = ["complete"],version = "*"}
[requires]
python_version = "3.4"
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import codecs
4 import os
5 import sys
6 from shutil import rmtree
7
8 from setuptools import find_packages, setup, Command
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12 with codecs.open(os.path.join(here, "README.md"), encoding="utf-8") as f:
13 long_description = "\n" + f.read()
14
15 about = {}
16
17 with open(os.path.join(here, "pipenv", "__version__.py")) as f:
18 exec(f.read(), about)
19
20 if sys.argv[-1] == "publish":
21 os.system("python setup.py sdist bdist_wheel upload")
22 sys.exit()
23
24 required = [
25 "pip>=9.0.1",
26 "certifi",
27 "setuptools>=36.2.1",
28 "virtualenv-clone>=0.2.5",
29 "virtualenv",
30 'enum34; python_version<"3"',
31 'typing; python_version<"3.5"'
32 ]
33
34
35 # https://pypi.python.org/pypi/stdeb/0.8.5#quickstart-2-just-tell-me-the-fastest-way-to-make-a-deb
36 class DebCommand(Command):
37 """Support for setup.py deb"""
38
39 description = "Build and publish the .deb package."
40 user_options = []
41
42 @staticmethod
43 def status(s):
44 """Prints things in bold."""
45 print("\033[1m{0}\033[0m".format(s))
46
47 def initialize_options(self):
48 pass
49
50 def finalize_options(self):
51 pass
52
53 def run(self):
54 try:
55 self.status("Removing previous builds…")
56 rmtree(os.path.join(here, "deb_dist"))
57 except FileNotFoundError:
58 pass
59 self.status(u"Creating debian mainfest…")
60 os.system(
61 "python setup.py --command-packages=stdeb.command sdist_dsc -z artful --package3=pipenv --depends3=python3-virtualenv-clone"
62 )
63 self.status(u"Building .deb…")
64 os.chdir("deb_dist/pipenv-{0}".format(about["__version__"]))
65 os.system("dpkg-buildpackage -rfakeroot -uc -us")
66
67
68 class UploadCommand(Command):
69 """Support setup.py upload."""
70
71 description = "Build and publish the package."
72 user_options = []
73
74 @staticmethod
75 def status(s):
76 """Prints things in bold."""
77 print("\033[1m{0}\033[0m".format(s))
78
79 def initialize_options(self):
80 pass
81
82 def finalize_options(self):
83 pass
84
85 def run(self):
86 try:
87 self.status("Removing previous builds…")
88 rmtree(os.path.join(here, "dist"))
89 except FileNotFoundError:
90 pass
91 self.status("Building Source distribution…")
92 os.system("{0} setup.py sdist bdist_wheel".format(sys.executable))
93 self.status("Uploading the package to PyPI via Twine…")
94 os.system("twine upload dist/*")
95 self.status("Pushing git tags…")
96 os.system("git tag v{0}".format(about["__version__"]))
97 os.system("git push --tags")
98 sys.exit()
99
100
101 setup(
102 name="pipenv",
103 version=about["__version__"],
104 description="Python Development Workflow for Humans.",
105 long_description=long_description,
106 long_description_content_type='text/markdown',
107 author="Kenneth Reitz",
108 author_email="[email protected]",
109 url="https://github.com/pypa/pipenv",
110 packages=find_packages(exclude=["tests", "tests.*", "tasks", "tasks.*"]),
111 entry_points={
112 "console_scripts": [
113 "pipenv=pipenv:cli",
114 "pipenv-resolver=pipenv.resolver:main",
115 ]
116 },
117 package_data={
118 "": ["LICENSE", "NOTICES"],
119 "pipenv.vendor.requests": ["*.pem"],
120 "pipenv.vendor.certifi": ["*.pem"],
121 "pipenv.vendor.click_completion": ["*.j2"],
122 "pipenv.patched.notpip._vendor.certifi": ["*.pem"],
123 "pipenv.patched.notpip._vendor.requests": ["*.pem"],
124 "pipenv.patched.notpip._vendor.distlib._backport": ["sysconfig.cfg"],
125 "pipenv.patched.notpip._vendor.distlib": [
126 "t32.exe",
127 "t64.exe",
128 "w32.exe",
129 "w64.exe",
130 ],
131 },
132 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*",
133 setup_requires=["invoke", "parver"],
134 install_requires=required,
135 extras_require={},
136 include_package_data=True,
137 license="MIT",
138 classifiers=[
139 "License :: OSI Approved :: MIT License",
140 "Programming Language :: Python",
141 "Programming Language :: Python :: 2.7",
142 "Programming Language :: Python :: 3",
143 "Programming Language :: Python :: 3.4",
144 "Programming Language :: Python :: 3.5",
145 "Programming Language :: Python :: 3.6",
146 "Programming Language :: Python :: 3.7",
147 "Programming Language :: Python :: Implementation :: CPython",
148 "Programming Language :: Python :: Implementation :: PyPy",
149 ],
150 cmdclass={"upload": UploadCommand, "deb": DebCommand},
151 )
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,8 +27,7 @@
"setuptools>=36.2.1",
"virtualenv-clone>=0.2.5",
"virtualenv",
- 'enum34; python_version<"3"',
- 'typing; python_version<"3.5"'
+ 'enum34; python_version<"3"'
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,8 +27,7 @@\n \"setuptools>=36.2.1\",\n \"virtualenv-clone>=0.2.5\",\n \"virtualenv\",\n- 'enum34; python_version<\"3\"',\n- 'typing; python_version<\"3.5\"'\n+ 'enum34; python_version<\"3\"'\n ]\n", "issue": "NameError: name 'unicode' is not defined\n### Issue description\r\n```\r\n\r\n[hadoop@ip-192-168-0-84 sales-forecast]$ pipenv install --python 3\r\nVirtualenv already exists!\r\nRemoving existing virtualenv...\r\nCreating a virtualenv for this project...\r\nPipfile: /home/hadoop/sales-forecast/Pipfile\r\nUsing /usr/bin/python3.4 (3.4.8) to create virtualenv...\r\n\u283c Creating virtual environment...Using base prefix '/usr'\r\nNew python executable in /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP/bin/python3.4\r\nAlso creating executable in /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP/bin/python\r\nInstalling setuptools, pip, wheel...done.\r\nRunning virtualenv with interpreter /usr/bin/python3.4\r\n\r\n\u2714 Successfully created virtual environment!\r\nVirtualenv location: /home/hadoop/.local/share/virtualenvs/sales-forecast-ZYWIRznP\r\nrequirements.txt found, instead of Pipfile! Converting...\r\n\u2714 Success!\r\nWarning: Your Pipfile now contains pinned versions, if your requirements.txt did.\r\nWe recommend updating your Pipfile to specify the \"*\" version, instead.\r\nPipfile.lock not found, creating...\r\nLocking [dev-packages] dependencies...\r\nLocking [packages] dependencies...\r\n\u2718 Locking Failed!\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 126, in <module>\r\n main()\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 119, in main\r\n parsed.requirements_dir, parsed.packages)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 85, in _main\r\n requirements_dir=requirements_dir,\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 69, in resolve\r\n req_dir=requirements_dir\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/utils.py\", line 700, in resolve_deps\r\n from .vendor.requirementslib.models.requirements import Requirement\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/__init__.py\", line 14, in <module>\r\n from .models.pipfile import Pipfile\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py\", line 23, in <module>\r\n from ..environment import MYPY_RUNNING\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py\", line 17, in <module>\r\n MYPY_RUNNING = os.environ.get(\"MYPY_RUNNING\", is_type_checking())\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py\", line 10, in is_type_checking\r\n from typing import TYPE_CHECKING\r\n File \"/usr/local/lib/python2.7/site-packages/typing.py\", line 624, in <module>\r\n AnyStr = TypeVar('AnyStr', bytes, unicode)\r\nNameError: name 'unicode' is not defined\r\nFile \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 126, in <module>\r\n main()\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 119, in main\r\n parsed.requirements_dir, parsed.packages)\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 85, in _main\r\n requirements_dir=requirements_dir,\r\n File 
\"/usr/local/lib/python2.7/site-packages/pipenv/resolver.py\", line 69, in resolve\r\n req_dir=requirements_dir\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/utils.py\", line 700, in resolve_deps\r\n from .vendor.requirementslib.models.requirements import Requirement\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/__init__.py\", line 14, in <module>\r\n from .models.pipfile import Pipfile\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py\", line 23, in <module>\r\n from ..environment import MYPY_RUNNING\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py\", line 17, in <module>\r\n MYPY_RUNNING = os.environ.get(\"MYPY_RUNNING\", is_type_checking())\r\n File \"/usr/local/lib/python2.7/site-packages/pipenv/vendor/requirementslib/environment.py\", line 10, in is_type_checking\r\n from typing import TYPE_CHECKING\r\n File \"/usr/local/lib/python2.7/site-packages/typing.py\", line 624, in <module>\r\n AnyStr = TypeVar('AnyStr', bytes, unicode)\r\nNameError: name 'unicode' is not defined\r\n\r\n```\r\n\r\n-------------------------------------------------------------------------------\r\n<details><summary>$ pipenv --support</summary>\r\n\r\nPipenv version: `'2018.11.26'`\r\n\r\nPipenv location: `'/usr/local/lib/python2.7/site-packages/pipenv'`\r\n\r\nPython location: `'/usr/bin/python2.7'`\r\n\r\nPython installations found:\r\n\r\n - `3.4.8`: `/usr/bin/python3`\r\n - `2.7.14`: `/usr/bin/python2.7`\r\n - `3.4.8`: `/usr/bin/python3.4m`\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '0',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'x86_64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '4.14.72-68.55.amzn1.x86_64',\r\n 'platform_system': 'Linux',\r\n 'platform_version': '#1 SMP Fri Sep 28 21:14:54 UTC 2018',\r\n 'python_full_version': '2.7.14',\r\n 'python_version': '2.7',\r\n 'sys_platform': 'linux2'}\r\n```\r\n\r\nSystem environment variables:\r\n\r\n - `AWS_PATH`\r\n - `PYTHONDONTWRITEBYTECODE`\r\n - `LESSOPEN`\r\n - `SSH_CLIENT`\r\n - `LOGNAME`\r\n - `USER`\r\n - `HOME`\r\n - `PATH`\r\n - `PS1`\r\n - `LANG`\r\n - `LESS_TERMCAP_se`\r\n - `TERM`\r\n - `SHELL`\r\n - `EC2_AMITOOL_HOME`\r\n - `LESS_TERMCAP_me`\r\n - `LESS_TERMCAP_md`\r\n - `LESS_TERMCAP_mb`\r\n - `HISTSIZE`\r\n - `PYTHONFINDER_IGNORE_UNSUPPORTED`\r\n - `AWS_ELB_HOME`\r\n - `JAVA_HOME`\r\n - `EC2_HOME`\r\n - `LESS_TERMCAP_ue`\r\n - `AWS_AUTO_SCALING_HOME`\r\n - `PIP_PYTHON_PATH`\r\n - `_`\r\n - `AWS_DEFAULT_REGION`\r\n - `SSH_CONNECTION`\r\n - `AWS_CLOUDWATCH_HOME`\r\n - `SSH_TTY`\r\n - `OLDPWD`\r\n - `HOSTNAME`\r\n - `HISTCONTROL`\r\n - `SHLVL`\r\n - `PWD`\r\n - `LESS_TERMCAP_us`\r\n - `PIP_SHIMS_BASE_MODULE`\r\n - `PIP_DISABLE_PIP_VERSION_CHECK`\r\n - `MAIL`\r\n - `LS_COLORS`\r\n\r\nPipenv\u2013specific environment variables:\r\n\r\n\r\nDebug\u2013specific environment variables:\r\n\r\n - `PATH`: `/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin`\r\n - `SHELL`: `/bin/bash`\r\n - `LANG`: `en_US.UTF-8`\r\n - `PWD`: `/home/hadoop/sales-forecast`\r\n\r\n\r\n---------------------------\r\n\r\nContents of `Pipfile` ('/home/hadoop/sales-forecast/Pipfile'):\r\n\r\n```toml\r\n[[source]]\r\nname = \"pypi\"\r\nurl = \"https://pypi.org/simple\"\r\nverify_ssl = true\r\n\r\n[dev-packages]\r\n\r\n[packages]\r\nnumpy = \"*\"\r\nscipy = \"*\"\r\nscikit-learn = \"==0.19.1\"\r\nstatsmodels = 
\"*\"\r\nxgboost = \"*\"\r\npystan = \"*\"\r\nfbprophet = \"*\"\r\nmatplotlib = \"==2.2.3\"\r\nseaborn = \"*\"\r\nredis = \"*\"\r\nsqlalchemy = \"*\"\r\nmysqlclient = \"*\"\r\npyarrow = \"==0.11.1\"\r\nhdfs = \"*\"\r\nhdfs3 = \"*\"\r\ns3fs = \"*\"\r\npython-dateutil = \"*\"\r\nchinesecalendar = \"*\"\r\npandas = \"*\"\r\nh5py = \"*\"\r\nlxml = \"*\"\r\nopenpyxl = \"*\"\r\nxlrd = \"*\"\r\nclick = \"*\"\r\npathlib2 = \"*\"\r\npython-dotenv = \"*\"\r\nenvirons = \"*\"\r\njoblib = \"*\"\r\ntqdm = \"*\"\r\ndask = {extras = [\"complete\"],version = \"*\"}\r\n\r\n[requires]\r\npython_version = \"3.4\"\r\n\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport codecs\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith codecs.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\n\nabout = {}\n\nwith open(os.path.join(here, \"pipenv\", \"__version__.py\")) as f:\n exec(f.read(), about)\n\nif sys.argv[-1] == \"publish\":\n os.system(\"python setup.py sdist bdist_wheel upload\")\n sys.exit()\n\nrequired = [\n \"pip>=9.0.1\",\n \"certifi\",\n \"setuptools>=36.2.1\",\n \"virtualenv-clone>=0.2.5\",\n \"virtualenv\",\n 'enum34; python_version<\"3\"',\n 'typing; python_version<\"3.5\"'\n]\n\n\n# https://pypi.python.org/pypi/stdeb/0.8.5#quickstart-2-just-tell-me-the-fastest-way-to-make-a-deb\nclass DebCommand(Command):\n \"\"\"Support for setup.py deb\"\"\"\n\n description = \"Build and publish the .deb package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"deb_dist\"))\n except FileNotFoundError:\n pass\n self.status(u\"Creating debian mainfest\u2026\")\n os.system(\n \"python setup.py --command-packages=stdeb.command sdist_dsc -z artful --package3=pipenv --depends3=python3-virtualenv-clone\"\n )\n self.status(u\"Building .deb\u2026\")\n os.chdir(\"deb_dist/pipenv-{0}\".format(about[\"__version__\"]))\n os.system(\"dpkg-buildpackage -rfakeroot -uc -us\")\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except FileNotFoundError:\n pass\n self.status(\"Building Source distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel\".format(sys.executable))\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"pipenv\",\n version=about[\"__version__\"],\n description=\"Python Development Workflow for Humans.\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Kenneth Reitz\",\n author_email=\"[email protected]\",\n 
url=\"https://github.com/pypa/pipenv\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tasks\", \"tasks.*\"]),\n entry_points={\n \"console_scripts\": [\n \"pipenv=pipenv:cli\",\n \"pipenv-resolver=pipenv.resolver:main\",\n ]\n },\n package_data={\n \"\": [\"LICENSE\", \"NOTICES\"],\n \"pipenv.vendor.requests\": [\"*.pem\"],\n \"pipenv.vendor.certifi\": [\"*.pem\"],\n \"pipenv.vendor.click_completion\": [\"*.j2\"],\n \"pipenv.patched.notpip._vendor.certifi\": [\"*.pem\"],\n \"pipenv.patched.notpip._vendor.requests\": [\"*.pem\"],\n \"pipenv.patched.notpip._vendor.distlib._backport\": [\"sysconfig.cfg\"],\n \"pipenv.patched.notpip._vendor.distlib\": [\n \"t32.exe\",\n \"t64.exe\",\n \"w32.exe\",\n \"w64.exe\",\n ],\n },\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n setup_requires=[\"invoke\", \"parver\"],\n install_requires=required,\n extras_require={},\n include_package_data=True,\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n cmdclass={\"upload\": UploadCommand, \"deb\": DebCommand},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport codecs\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith codecs.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\n\nabout = {}\n\nwith open(os.path.join(here, \"pipenv\", \"__version__.py\")) as f:\n exec(f.read(), about)\n\nif sys.argv[-1] == \"publish\":\n os.system(\"python setup.py sdist bdist_wheel upload\")\n sys.exit()\n\nrequired = [\n \"pip>=9.0.1\",\n \"certifi\",\n \"setuptools>=36.2.1\",\n \"virtualenv-clone>=0.2.5\",\n \"virtualenv\",\n 'enum34; python_version<\"3\"'\n]\n\n\n# https://pypi.python.org/pypi/stdeb/0.8.5#quickstart-2-just-tell-me-the-fastest-way-to-make-a-deb\nclass DebCommand(Command):\n \"\"\"Support for setup.py deb\"\"\"\n\n description = \"Build and publish the .deb package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"deb_dist\"))\n except FileNotFoundError:\n pass\n self.status(u\"Creating debian mainfest\u2026\")\n os.system(\n \"python setup.py --command-packages=stdeb.command sdist_dsc -z artful --package3=pipenv --depends3=python3-virtualenv-clone\"\n )\n self.status(u\"Building .deb\u2026\")\n os.chdir(\"deb_dist/pipenv-{0}\".format(about[\"__version__\"]))\n os.system(\"dpkg-buildpackage -rfakeroot -uc -us\")\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def 
finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except FileNotFoundError:\n pass\n self.status(\"Building Source distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel\".format(sys.executable))\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"pipenv\",\n version=about[\"__version__\"],\n description=\"Python Development Workflow for Humans.\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Kenneth Reitz\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pypa/pipenv\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tasks\", \"tasks.*\"]),\n entry_points={\n \"console_scripts\": [\n \"pipenv=pipenv:cli\",\n \"pipenv-resolver=pipenv.resolver:main\",\n ]\n },\n package_data={\n \"\": [\"LICENSE\", \"NOTICES\"],\n \"pipenv.vendor.requests\": [\"*.pem\"],\n \"pipenv.vendor.certifi\": [\"*.pem\"],\n \"pipenv.vendor.click_completion\": [\"*.j2\"],\n \"pipenv.patched.notpip._vendor.certifi\": [\"*.pem\"],\n \"pipenv.patched.notpip._vendor.requests\": [\"*.pem\"],\n \"pipenv.patched.notpip._vendor.distlib._backport\": [\"sysconfig.cfg\"],\n \"pipenv.patched.notpip._vendor.distlib\": [\n \"t32.exe\",\n \"t64.exe\",\n \"w32.exe\",\n \"w64.exe\",\n ],\n },\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*\",\n setup_requires=[\"invoke\", \"parver\"],\n install_requires=required,\n extras_require={},\n include_package_data=True,\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n cmdclass={\"upload\": UploadCommand, \"deb\": DebCommand},\n)\n", "path": "setup.py"}]} | 3,839 | 99 |
gh_patches_debug_35517 | rasdani/github-patches | git_diff | sunpy__sunpy-6944 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Our Map guide page is a terrible introduction to Map
https://docs.sunpy.org/en/stable/guide/data_types/maps.html
If you haven't ever used Map before (and maybe have minimal background with solar data), this is a terrible introduction. Sure, the information about making a custom map is useful somewhere in our docs, but two paragraphs in at the top of the main guide page?! No.
IMO we should be explaining (see the sketch after this list):
* Why Map exists, and what it gives you that just opening the array with `astropy.io.fits` doesn't.
* How to load a map from a FITS file.
* How to plot a map.
* How to view the standardised metadata.
* How to work with coordinates in the frame of the map and plot them.
* submap / rotate / resample etc.
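
For concreteness, here is a minimal sketch of the kind of quick-start the guide could open with. It assumes the `sunpy.data.sample` module is importable (which downloads the sample AIA image on first use) and that Matplotlib is installed; the exact calls are illustrative, not proposed guide text.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
import matplotlib.pyplot as plt

import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE  # sample FITS file shipped with sunpy

# Load a map from a FITS file: the data array and standardised metadata together
aia = sunpy.map.Map(AIA_171_IMAGE)
print(aia.date, aia.wavelength, aia.observer_coordinate)

# Plot it with the right colormap and WCS-aware axes
fig = plt.figure()
ax = fig.add_subplot(projection=aia)
aia.plot(axes=ax)

# Work with coordinates in the frame of the map
corner = SkyCoord(-400 * u.arcsec, -600 * u.arcsec, frame=aia.coordinate_frame)
cutout = aia.submap(corner, width=500 * u.arcsec, height=500 * u.arcsec)

# Common manipulations
rotated = aia.rotate(angle=30 * u.deg)
small = aia.resample([256, 256] * u.pix)

plt.show()
```

Something along these lines would cover loading, plotting, metadata, coordinates, and the common manipulations in one screenful.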
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/showcase/where_is_stereo.py`
Content:
```
1 """
2 =============================================
3 Reproducing the "Where is STEREO Today?" plot
4 =============================================
5
6 How to reproduce the "Where is STEREO Today?" plot.
7
8 This example uses capabilities in sunpy to reproduce the plot that can be found
9 at `<https://stereo-ssc.nascom.nasa.gov/where.shtml>`__. This example is not
10 written as a tutorial, but there are related tutorials for learning about these
11 capabilities:
12
13 * :ref:`sphx_glr_generated_gallery_units_and_coordinates_planet_locations.py`
14 * :ref:`sphx_glr_generated_gallery_units_and_coordinates_ParkerSolarProbe_trajectory.py`
15
16 `astroquery <https://astroquery.readthedocs.io/>`__ needs to be installed.
17 """
18 import matplotlib as mpl
19 import matplotlib.pyplot as plt
20 import numpy as np
21 from matplotlib.ticker import MultipleLocator
22
23 import astropy.units as u
24 from astropy.coordinates import Longitude
25
26 from sunpy.coordinates import HeliocentricEarthEcliptic, get_body_heliographic_stonyhurst, get_horizons_coord
27 from sunpy.time import parse_time
28
29 ##############################################################################
30 # Define the time for the plot as the time when this script is run.
31
32 obstime = parse_time('now')
33
34 ##############################################################################
35 # Define a convenience function to extract the first full orbit from a
36 # trajectory, assuming that the trajectory moves in the direction of positive
37 # ecliptic longitude.
38
39 hee_frame = HeliocentricEarthEcliptic(obstime=obstime)
40
41
42 def get_first_orbit(coord):
43 lon = coord.transform_to(hee_frame).spherical.lon
44 shifted = Longitude(lon - lon[0])
45 ends = np.flatnonzero(np.diff(shifted) < 0)
46 if ends.size > 0:
47 return coord[:ends[0]]
48 return coord
49
50
51 ##############################################################################
52 # Obtain the locations and trajectories of the various planets and spacecraft.
53 # To ensure that each trajectory contains at least one full orbit, we request
54 # 700 days for each planet and 1 year for each spacecraft.
55
56 planets = ['Mercury', 'Venus', 'Earth', 'Mars']
57 times = obstime + np.arange(700) * u.day
58 planet_coords = {planet: get_first_orbit(get_body_heliographic_stonyhurst(planet, times))
59 for planet in planets}
60
61 stereo_a = get_horizons_coord('STEREO-A', obstime)
62 stereo_b = get_horizons_coord('STEREO-B', obstime)
63
64 missions = ['Parker Solar Probe', 'Solar Orbiter', 'BepiColombo']
65 mission_labels = {'Parker Solar Probe': 'PSP', 'Solar Orbiter': 'SO', 'BepiColombo': 'BEPICOLOMBO'}
66 mission_coords = {mission: get_first_orbit(get_horizons_coord(mission, {'start': obstime,
67 'stop': obstime + 1 * u.yr,
68 'step': '1d'}))
69 for mission in missions}
70
71
72 ##############################################################################
73 # Define a convenience function for converting coordinates to plot positions
74 # in the ecliptic plane.
75
76 def coord_to_heexy(coord):
77 coord = coord.transform_to(hee_frame)
78 coord.representation_type = 'cartesian'
79 return coord.y.to_value('AU'), coord.x.to_value('AU')
80
81 ##############################################################################
82 # Set Matplotlib settings to the desired appearance and initialize the axes.
83
84
85 mpl.rcParams.update({'figure.facecolor': 'black',
86 'axes.edgecolor': 'white',
87 'axes.facecolor': 'black',
88 'axes.labelcolor': 'white',
89 'axes.titlecolor': 'white',
90 'lines.linewidth': 1,
91 'xtick.color': 'white',
92 'xtick.direction': 'in',
93 'xtick.top': True,
94 'ytick.color': 'white',
95 'ytick.direction': 'in',
96 'ytick.right': True})
97
98 fig = plt.figure()
99 ax = fig.add_subplot()
100
101 ax.set_xlim(-2.15, 2.15)
102 ax.set_xlabel('Y (HEE)')
103 ax.xaxis.set_major_locator(MultipleLocator(1))
104 ax.xaxis.set_minor_locator(MultipleLocator(0.1))
105
106 ax.set_ylim(1.8, -1.8)
107 ax.set_ylabel('X (HEE)')
108 ax.yaxis.set_major_locator(MultipleLocator(1))
109 ax.yaxis.set_minor_locator(MultipleLocator(0.1))
110
111 ax.set_title(obstime.strftime('%d-%b-%Y %H:%M UT'))
112 ax.set_aspect('equal')
113
114 # sphinx_gallery_defer_figures
115
116 ##############################################################################
117 # Draw the Sun-Earth line.
118
119 ax.plot([0, 0], [0, 2], linestyle='dotted', color='gray')
120
121 # sphinx_gallery_defer_figures
122
123 ##############################################################################
124 # Draw Mercury, Venus, Earth, and Mars, with Earth formatted differently.
125
126 for planet, coord in planet_coords.items():
127 ax.plot(*coord_to_heexy(coord), linestyle='dashed', color='gray')
128
129 if planet == 'Earth':
130 color, markersize, offset = 'lime', 10, 0.1
131 else:
132 color, markersize, offset = 'gray', None, 0.05
133
134 x, y = coord_to_heexy(coord[0])
135 ax.plot(x, y, 'o', markersize=markersize, color=color)
136 ax.text(x + offset, y, planet, color=color)
137
138 # sphinx_gallery_defer_figures
139
140 ##############################################################################
141 # Draw the STEREO spacecraft (without orbits), as well as Sun-STEREO lines.
142
143 for stereo, label, color in [(stereo_a, 'A', 'red'), (stereo_b, 'B', 'blue')]:
144 x, y = coord_to_heexy(stereo)
145 ax.plot([0, 5*x], [0, 5*y], linestyle='dotted', color='gray')
146 ax.plot(x, y, 'o', color=color)
147 ax.text(x + 0.1, y, label, color=color, fontsize=18)
148
149 # sphinx_gallery_defer_figures
150
151 ##############################################################################
152 # Draw the Sun, which is at the origin by definition.
153
154 ax.plot(0, 0, 'o', markersize=15, color='yellow')
155 ax.text(0.12, 0, 'Sun', color='yellow')
156
157 # sphinx_gallery_defer_figures
158
159 ##############################################################################
160 # Finally, draw the various spacecraft, with Solar Orbiter colored differently.
161
162 for mission, coord in mission_coords.items():
163 color = 'magenta' if mission == 'Solar Orbiter' else 'orange'
164
165 ax.plot(*coord_to_heexy(coord), linestyle='dashed', color=color)
166
167 x, y = coord_to_heexy(coord[0])
168 ax.plot(x, y, 'o', color=color)
169 ax.text(x + 0.05, y, mission_labels[mission], color=color)
170
171 plt.show()
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/showcase/where_is_stereo.py b/examples/showcase/where_is_stereo.py
--- a/examples/showcase/where_is_stereo.py
+++ b/examples/showcase/where_is_stereo.py
@@ -38,7 +38,6 @@
hee_frame = HeliocentricEarthEcliptic(obstime=obstime)
-
def get_first_orbit(coord):
lon = coord.transform_to(hee_frame).spherical.lon
shifted = Longitude(lon - lon[0])
@@ -47,7 +46,6 @@
return coord[:ends[0]]
return coord
-
##############################################################################
# Obtain the locations and trajectories of the various planets and spacecraft.
# To ensure that each trajectory contains at least one full orbit, we request
@@ -68,7 +66,6 @@
'step': '1d'}))
for mission in missions}
-
##############################################################################
# Define a convenience function for converting coordinates to plot positions
# in the ecliptic plane.
@@ -81,7 +78,6 @@
##############################################################################
# Set Matplotlib settings to the desired appearance and initialize the axes.
-
mpl.rcParams.update({'figure.facecolor': 'black',
'axes.edgecolor': 'white',
'axes.facecolor': 'black',
@@ -169,3 +165,8 @@
ax.text(x + 0.05, y, mission_labels[mission], color=color)
plt.show()
+
+# This is necessary to reset the Matplotlib settings after plotting for our documentation.
+# You don't need this in your own code.
+mpl.rcParams.update(mpl.rcParamsDefault)
+mpl.rcParams.update({'axes.titlecolor': 'black'})
| {"golden_diff": "diff --git a/examples/showcase/where_is_stereo.py b/examples/showcase/where_is_stereo.py\n--- a/examples/showcase/where_is_stereo.py\n+++ b/examples/showcase/where_is_stereo.py\n@@ -38,7 +38,6 @@\n \n hee_frame = HeliocentricEarthEcliptic(obstime=obstime)\n \n-\n def get_first_orbit(coord):\n lon = coord.transform_to(hee_frame).spherical.lon\n shifted = Longitude(lon - lon[0])\n@@ -47,7 +46,6 @@\n return coord[:ends[0]]\n return coord\n \n-\n ##############################################################################\n # Obtain the locations and trajectories of the various planets and spacecraft.\n # To ensure that each trajectory contains at least one full orbit, we request\n@@ -68,7 +66,6 @@\n 'step': '1d'}))\n for mission in missions}\n \n-\n ##############################################################################\n # Define a convenience function for converting coordinates to plot positions\n # in the ecliptic plane.\n@@ -81,7 +78,6 @@\n ##############################################################################\n # Set Matplotlib settings to the desired appearance and initialize the axes.\n \n-\n mpl.rcParams.update({'figure.facecolor': 'black',\n 'axes.edgecolor': 'white',\n 'axes.facecolor': 'black',\n@@ -169,3 +165,8 @@\n ax.text(x + 0.05, y, mission_labels[mission], color=color)\n \n plt.show()\n+\n+# This is necessary to reset the Matplotlib settings after plotting for our documentation.\n+# You don't need this in your own code.\n+mpl.rcParams.update(mpl.rcParamsDefault)\n+mpl.rcParams.update({'axes.titlecolor': 'black'})\n", "issue": "Our Map guide page is a terrible introduction to Map\nhttps://docs.sunpy.org/en/stable/guide/data_types/maps.html\r\n\r\nIf you haven't ever used Map before (maybe have minimal background with solar data) this is a terrible introduction. Sure the information about making a custom map is useful somewhere in our docs, but two paragraphs in at the top of the main guide page?! No.\r\n\r\nIMO we should be explaining:\r\n\r\n* Why Map exists, what does it give you that just opening the array with `astropy.io.fits` dosen't.\r\n* How to load a map from a FITS file.\r\n* How to plot a map.\r\n* How to view the standardised metadata.\r\n* How to work with coordinates in the frame of the map and plot them.\r\n* submap / rotate / resample etc.\n", "before_files": [{"content": "\"\"\"\n=============================================\nReproducing the \"Where is STEREO Today?\" plot\n=============================================\n\nHow to reproduce the \"Where is STEREO Today?\" plot.\n\nThis example uses capabilities in sunpy to reproduce the plot that can be found\nat `<https://stereo-ssc.nascom.nasa.gov/where.shtml>`__. 
This example is not\nwritten as a tutorial, but there are related tutorials for learning about these\ncapabilities:\n\n* :ref:`sphx_glr_generated_gallery_units_and_coordinates_planet_locations.py`\n* :ref:`sphx_glr_generated_gallery_units_and_coordinates_ParkerSolarProbe_trajectory.py`\n\n`astroquery <https://astroquery.readthedocs.io/>`__ needs to be installed.\n\"\"\"\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.ticker import MultipleLocator\n\nimport astropy.units as u\nfrom astropy.coordinates import Longitude\n\nfrom sunpy.coordinates import HeliocentricEarthEcliptic, get_body_heliographic_stonyhurst, get_horizons_coord\nfrom sunpy.time import parse_time\n\n##############################################################################\n# Define the time for the plot as the time when this script is run.\n\nobstime = parse_time('now')\n\n##############################################################################\n# Define a convenience function to extract the first full orbit from a\n# trajectory, assuming that the trajectory moves in the direction of positive\n# ecliptic longitude.\n\nhee_frame = HeliocentricEarthEcliptic(obstime=obstime)\n\n\ndef get_first_orbit(coord):\n lon = coord.transform_to(hee_frame).spherical.lon\n shifted = Longitude(lon - lon[0])\n ends = np.flatnonzero(np.diff(shifted) < 0)\n if ends.size > 0:\n return coord[:ends[0]]\n return coord\n\n\n##############################################################################\n# Obtain the locations and trajectories of the various planets and spacecraft.\n# To ensure that each trajectory contains at least one full orbit, we request\n# 700 days for each planet and 1 year for each spacecraft.\n\nplanets = ['Mercury', 'Venus', 'Earth', 'Mars']\ntimes = obstime + np.arange(700) * u.day\nplanet_coords = {planet: get_first_orbit(get_body_heliographic_stonyhurst(planet, times))\n for planet in planets}\n\nstereo_a = get_horizons_coord('STEREO-A', obstime)\nstereo_b = get_horizons_coord('STEREO-B', obstime)\n\nmissions = ['Parker Solar Probe', 'Solar Orbiter', 'BepiColombo']\nmission_labels = {'Parker Solar Probe': 'PSP', 'Solar Orbiter': 'SO', 'BepiColombo': 'BEPICOLOMBO'}\nmission_coords = {mission: get_first_orbit(get_horizons_coord(mission, {'start': obstime,\n 'stop': obstime + 1 * u.yr,\n 'step': '1d'}))\n for mission in missions}\n\n\n##############################################################################\n# Define a convenience function for converting coordinates to plot positions\n# in the ecliptic plane.\n\ndef coord_to_heexy(coord):\n coord = coord.transform_to(hee_frame)\n coord.representation_type = 'cartesian'\n return coord.y.to_value('AU'), coord.x.to_value('AU')\n\n##############################################################################\n# Set Matplotlib settings to the desired appearance and initialize the axes.\n\n\nmpl.rcParams.update({'figure.facecolor': 'black',\n 'axes.edgecolor': 'white',\n 'axes.facecolor': 'black',\n 'axes.labelcolor': 'white',\n 'axes.titlecolor': 'white',\n 'lines.linewidth': 1,\n 'xtick.color': 'white',\n 'xtick.direction': 'in',\n 'xtick.top': True,\n 'ytick.color': 'white',\n 'ytick.direction': 'in',\n 'ytick.right': True})\n\nfig = plt.figure()\nax = fig.add_subplot()\n\nax.set_xlim(-2.15, 2.15)\nax.set_xlabel('Y (HEE)')\nax.xaxis.set_major_locator(MultipleLocator(1))\nax.xaxis.set_minor_locator(MultipleLocator(0.1))\n\nax.set_ylim(1.8, -1.8)\nax.set_ylabel('X 
(HEE)')\nax.yaxis.set_major_locator(MultipleLocator(1))\nax.yaxis.set_minor_locator(MultipleLocator(0.1))\n\nax.set_title(obstime.strftime('%d-%b-%Y %H:%M UT'))\nax.set_aspect('equal')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the Sun-Earth line.\n\nax.plot([0, 0], [0, 2], linestyle='dotted', color='gray')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw Mercury, Venus, Earth, and Mars, with Earth formatted differently.\n\nfor planet, coord in planet_coords.items():\n ax.plot(*coord_to_heexy(coord), linestyle='dashed', color='gray')\n\n if planet == 'Earth':\n color, markersize, offset = 'lime', 10, 0.1\n else:\n color, markersize, offset = 'gray', None, 0.05\n\n x, y = coord_to_heexy(coord[0])\n ax.plot(x, y, 'o', markersize=markersize, color=color)\n ax.text(x + offset, y, planet, color=color)\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the STEREO spacecraft (without orbits), as well as Sun-STEREO lines.\n\nfor stereo, label, color in [(stereo_a, 'A', 'red'), (stereo_b, 'B', 'blue')]:\n x, y = coord_to_heexy(stereo)\n ax.plot([0, 5*x], [0, 5*y], linestyle='dotted', color='gray')\n ax.plot(x, y, 'o', color=color)\n ax.text(x + 0.1, y, label, color=color, fontsize=18)\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the Sun, which is at the origin by definition.\n\nax.plot(0, 0, 'o', markersize=15, color='yellow')\nax.text(0.12, 0, 'Sun', color='yellow')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Finally, draw the various spacecraft, with Solar Orbiter colored differently.\n\nfor mission, coord in mission_coords.items():\n color = 'magenta' if mission == 'Solar Orbiter' else 'orange'\n\n ax.plot(*coord_to_heexy(coord), linestyle='dashed', color=color)\n\n x, y = coord_to_heexy(coord[0])\n ax.plot(x, y, 'o', color=color)\n ax.text(x + 0.05, y, mission_labels[mission], color=color)\n\nplt.show()\n", "path": "examples/showcase/where_is_stereo.py"}], "after_files": [{"content": "\"\"\"\n=============================================\nReproducing the \"Where is STEREO Today?\" plot\n=============================================\n\nHow to reproduce the \"Where is STEREO Today?\" plot.\n\nThis example uses capabilities in sunpy to reproduce the plot that can be found\nat `<https://stereo-ssc.nascom.nasa.gov/where.shtml>`__. 
This example is not\nwritten as a tutorial, but there are related tutorials for learning about these\ncapabilities:\n\n* :ref:`sphx_glr_generated_gallery_units_and_coordinates_planet_locations.py`\n* :ref:`sphx_glr_generated_gallery_units_and_coordinates_ParkerSolarProbe_trajectory.py`\n\n`astroquery <https://astroquery.readthedocs.io/>`__ needs to be installed.\n\"\"\"\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.ticker import MultipleLocator\n\nimport astropy.units as u\nfrom astropy.coordinates import Longitude\n\nfrom sunpy.coordinates import HeliocentricEarthEcliptic, get_body_heliographic_stonyhurst, get_horizons_coord\nfrom sunpy.time import parse_time\n\n##############################################################################\n# Define the time for the plot as the time when this script is run.\n\nobstime = parse_time('now')\n\n##############################################################################\n# Define a convenience function to extract the first full orbit from a\n# trajectory, assuming that the trajectory moves in the direction of positive\n# ecliptic longitude.\n\nhee_frame = HeliocentricEarthEcliptic(obstime=obstime)\n\ndef get_first_orbit(coord):\n lon = coord.transform_to(hee_frame).spherical.lon\n shifted = Longitude(lon - lon[0])\n ends = np.flatnonzero(np.diff(shifted) < 0)\n if ends.size > 0:\n return coord[:ends[0]]\n return coord\n\n##############################################################################\n# Obtain the locations and trajectories of the various planets and spacecraft.\n# To ensure that each trajectory contains at least one full orbit, we request\n# 700 days for each planet and 1 year for each spacecraft.\n\nplanets = ['Mercury', 'Venus', 'Earth', 'Mars']\ntimes = obstime + np.arange(700) * u.day\nplanet_coords = {planet: get_first_orbit(get_body_heliographic_stonyhurst(planet, times))\n for planet in planets}\n\nstereo_a = get_horizons_coord('STEREO-A', obstime)\nstereo_b = get_horizons_coord('STEREO-B', obstime)\n\nmissions = ['Parker Solar Probe', 'Solar Orbiter', 'BepiColombo']\nmission_labels = {'Parker Solar Probe': 'PSP', 'Solar Orbiter': 'SO', 'BepiColombo': 'BEPICOLOMBO'}\nmission_coords = {mission: get_first_orbit(get_horizons_coord(mission, {'start': obstime,\n 'stop': obstime + 1 * u.yr,\n 'step': '1d'}))\n for mission in missions}\n\n##############################################################################\n# Define a convenience function for converting coordinates to plot positions\n# in the ecliptic plane.\n\ndef coord_to_heexy(coord):\n coord = coord.transform_to(hee_frame)\n coord.representation_type = 'cartesian'\n return coord.y.to_value('AU'), coord.x.to_value('AU')\n\n##############################################################################\n# Set Matplotlib settings to the desired appearance and initialize the axes.\n\nmpl.rcParams.update({'figure.facecolor': 'black',\n 'axes.edgecolor': 'white',\n 'axes.facecolor': 'black',\n 'axes.labelcolor': 'white',\n 'axes.titlecolor': 'white',\n 'lines.linewidth': 1,\n 'xtick.color': 'white',\n 'xtick.direction': 'in',\n 'xtick.top': True,\n 'ytick.color': 'white',\n 'ytick.direction': 'in',\n 'ytick.right': True})\n\nfig = plt.figure()\nax = fig.add_subplot()\n\nax.set_xlim(-2.15, 2.15)\nax.set_xlabel('Y (HEE)')\nax.xaxis.set_major_locator(MultipleLocator(1))\nax.xaxis.set_minor_locator(MultipleLocator(0.1))\n\nax.set_ylim(1.8, -1.8)\nax.set_ylabel('X 
(HEE)')\nax.yaxis.set_major_locator(MultipleLocator(1))\nax.yaxis.set_minor_locator(MultipleLocator(0.1))\n\nax.set_title(obstime.strftime('%d-%b-%Y %H:%M UT'))\nax.set_aspect('equal')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the Sun-Earth line.\n\nax.plot([0, 0], [0, 2], linestyle='dotted', color='gray')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw Mercury, Venus, Earth, and Mars, with Earth formatted differently.\n\nfor planet, coord in planet_coords.items():\n ax.plot(*coord_to_heexy(coord), linestyle='dashed', color='gray')\n\n if planet == 'Earth':\n color, markersize, offset = 'lime', 10, 0.1\n else:\n color, markersize, offset = 'gray', None, 0.05\n\n x, y = coord_to_heexy(coord[0])\n ax.plot(x, y, 'o', markersize=markersize, color=color)\n ax.text(x + offset, y, planet, color=color)\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the STEREO spacecraft (without orbits), as well as Sun-STEREO lines.\n\nfor stereo, label, color in [(stereo_a, 'A', 'red'), (stereo_b, 'B', 'blue')]:\n x, y = coord_to_heexy(stereo)\n ax.plot([0, 5*x], [0, 5*y], linestyle='dotted', color='gray')\n ax.plot(x, y, 'o', color=color)\n ax.text(x + 0.1, y, label, color=color, fontsize=18)\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Draw the Sun, which is at the origin by definition.\n\nax.plot(0, 0, 'o', markersize=15, color='yellow')\nax.text(0.12, 0, 'Sun', color='yellow')\n\n# sphinx_gallery_defer_figures\n\n##############################################################################\n# Finally, draw the various spacecraft, with Solar Orbiter colored differently.\n\nfor mission, coord in mission_coords.items():\n color = 'magenta' if mission == 'Solar Orbiter' else 'orange'\n\n ax.plot(*coord_to_heexy(coord), linestyle='dashed', color=color)\n\n x, y = coord_to_heexy(coord[0])\n ax.plot(x, y, 'o', color=color)\n ax.text(x + 0.05, y, mission_labels[mission], color=color)\n\nplt.show()\n\n# This is necessary to reset the Matplotlib settings after plotting for our documentation.\n# You don't need this in your own code.\nmpl.rcParams.update(mpl.rcParamsDefault)\nmpl.rcParams.update({'axes.titlecolor': 'black'})\n", "path": "examples/showcase/where_is_stereo.py"}]} | 2,342 | 367 |
gh_patches_debug_27889 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5753 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Azure Function App Slots - Ensure Azure Function App Slots use at least TLS 1.2
**Describe the issue**
It seems that there are no checks ensuring that the following resources use at least TLS 1.2:
- azurerm_function_app_slot
- azurerm_linux_function_app_slot
- azurerm_windows_function_app_slot
**Examples**
````hcl
resource "azurerm_function_app_slot" "example" {
name = "test-azure-functions_slot"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
function_app_name = azurerm_function_app.example.name
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
site_config {
min_tls_version = 1.2
}
}
resource "azurerm_linux_function_app_slot" "example" {
name = "example-linux-function-app-slot"
function_app_id = azurerm_linux_function_app.example.id
storage_account_name = azurerm_storage_account.example.name
site_config {
min_tls_version = 1.2
}
}
resource "azurerm_windows_function_app" "example" {
name = "example-windows-function-app"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
storage_account_name = azurerm_storage_account.example.name
service_plan_id = azurerm_service_plan.example.id
site_config {
min_tls_version = 1.2
}
}
````
**Version (please complete the following information):**
- N/A
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class FunctionAppMinTLSVersion(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure Function app is using the latest version of TLS encryption"
8 id = "CKV_AZURE_145"
9 supported_resources = ['azurerm_function_app']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
12 missing_block_result=CheckResult.PASSED)
13
14 def get_inspected_key(self):
15 return "site_config/[0]/min_tls_version"
16
17 def get_expected_value(self):
18 return 1.2
19
20 def get_expected_values(self):
21 return ["1.2", 1.2]
22
23
24 check = FunctionAppMinTLSVersion()
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
--- a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
+++ b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
@@ -4,15 +4,27 @@
class FunctionAppMinTLSVersion(BaseResourceValueCheck):
def __init__(self):
+ """
+ The minimum supported TLS version for the function app.
+ Defaults to 1.2 for new function apps.
+ field name is:
+ - min_tls_version in azurerm_function_app, azurerm_function_app_slot.
+ - minimum_tls_version in newer resources (with linux/windows).
+ """
name = "Ensure Function app is using the latest version of TLS encryption"
id = "CKV_AZURE_145"
- supported_resources = ['azurerm_function_app']
+ supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',
+ 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',
+ 'azurerm_windows_function_app_slot']
categories = [CheckCategories.NETWORKING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
missing_block_result=CheckResult.PASSED)
def get_inspected_key(self):
- return "site_config/[0]/min_tls_version"
+ if self.entity_type in ("azurerm_function_app", "azurerm_function_app_slot"):
+ return "site_config/[0]/min_tls_version"
+ else:
+ return "site_config/[0]/minimum_tls_version"
def get_expected_value(self):
return 1.2
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n--- a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n+++ b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n@@ -4,15 +4,27 @@\n \n class FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n+ \"\"\"\n+ The minimum supported TLS version for the function app.\n+ Defaults to 1.2 for new function apps.\n+ field name is:\n+ - min_tls_version in azurerm_function_app, azurerm_function_app_slot.\n+ - minimum_tls_version in newer resources (with linux/windows).\n+ \"\"\"\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n- supported_resources = ['azurerm_function_app']\n+ supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',\n+ 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',\n+ 'azurerm_windows_function_app_slot']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n \n def get_inspected_key(self):\n- return \"site_config/[0]/min_tls_version\"\n+ if self.entity_type in (\"azurerm_function_app\", \"azurerm_function_app_slot\"):\n+ return \"site_config/[0]/min_tls_version\"\n+ else:\n+ return \"site_config/[0]/minimum_tls_version\"\n \n def get_expected_value(self):\n return 1.2\n", "issue": "Azure Function App Slots - Ensure Azure Function App Slots use at least TLS 1.2 \n**Describe the issue**\r\nIt seems that there are no checks that ensure that the following resources to use at least TLS 1.2 :\r\n\r\n- azurerm_function_app_slot\r\n- azurerm_linux_function_app_slot\r\n- azurerm_windows_function_app_slot\r\n\r\n**Examples**\r\n\r\n````hcl\r\n````hcl\r\nresource \"azurerm_function_app_slot\" \"example\" {\r\n name = \"test-azure-functions_slot\"\r\n location = azurerm_resource_group.example.location\r\n resource_group_name = azurerm_resource_group.example.name\r\n app_service_plan_id = azurerm_app_service_plan.example.id\r\n function_app_name = azurerm_function_app.example.name\r\n storage_account_name = azurerm_storage_account.example.name\r\n storage_account_access_key = azurerm_storage_account.example.primary_access_key\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n\r\nresource \"azurerm_linux_function_app_slot\" \"example\" {\r\n name = \"example-linux-function-app-slot\"\r\n function_app_id = azurerm_linux_function_app.example.id\r\n storage_account_name = azurerm_storage_account.example.name\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n\r\nresource \"azurerm_windows_function_app\" \"example\" {\r\n name = \"example-windows-function-app\"\r\n resource_group_name = azurerm_resource_group.example.name\r\n location = azurerm_resource_group.example.location\r\n storage_account_name = azurerm_storage_account.example.name\r\n service_plan_id = azurerm_service_plan.example.id\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n````\r\n\r\n**Version (please complete the following information):**\r\n - N/A\r\n \r\n**Additional context**\r\n\r\nN/A\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass 
FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n supported_resources = ['azurerm_function_app']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n\n def get_inspected_key(self):\n return \"site_config/[0]/min_tls_version\"\n\n def get_expected_value(self):\n return 1.2\n\n def get_expected_values(self):\n return [\"1.2\", 1.2]\n\n\ncheck = FunctionAppMinTLSVersion()\n", "path": "checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n \"\"\"\n The minimum supported TLS version for the function app.\n Defaults to 1.2 for new function apps.\n field name is:\n - min_tls_version in azurerm_function_app, azurerm_function_app_slot.\n - minimum_tls_version in newer resources (with linux/windows).\n \"\"\"\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',\n 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',\n 'azurerm_windows_function_app_slot']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n\n def get_inspected_key(self):\n if self.entity_type in (\"azurerm_function_app\", \"azurerm_function_app_slot\"):\n return \"site_config/[0]/min_tls_version\"\n else:\n return \"site_config/[0]/minimum_tls_version\"\n\n def get_expected_value(self):\n return 1.2\n\n def get_expected_values(self):\n return [\"1.2\", 1.2]\n\n\ncheck = FunctionAppMinTLSVersion()\n", "path": "checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py"}]} | 915 | 409 |
gh_patches_debug_9248 | rasdani/github-patches | git_diff | hylang__hy-1813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
REPL crashes when ~/.hy-history is inaccessible
I've been working on updating the Hy package in Debian, and in running the tests I've noticed that the `test_bin` tests appear to expect to be able to do something with `~/.hy-history` (not sure if it's trying to read or write); I had to set `HOME` to a real directory, after which they passed. Looking at the tests, I imagine it's happening because they interact with the REPL via stdin, so it tries to load and/or use the history file?
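
A minimal sketch of how one might reproduce this from a script, assuming `hy` is installed and on `PATH`; the `/nonexistent` home directory below is illustrative:

```python
# Start the Hy REPL with HOME pointing somewhere ~/.hy-history cannot be read or written.
import os
import subprocess

env = dict(os.environ, HOME="/nonexistent")  # hypothetical inaccessible home
proc = subprocess.run(
    ["hy"],                      # assumes the `hy` entry point is on PATH
    input=b"(+ 1 1)\n",
    env=env,
    capture_output=True,
)
print(proc.returncode)           # non-zero if the REPL crashed on the history file
print(proc.stderr.decode())      # expect an IOError/OSError mentioning .hy-history
```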
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/completer.py`
Content:
```
1 # Copyright 2019 the authors.
2 # This file is part of Hy, which is free software licensed under the Expat
3 # license. See the LICENSE.
4
5 import contextlib
6 import os
7 import re
8 import sys
9 import builtins
10
11 import hy.macros
12 import hy.compiler
13
14
15 docomplete = True
16
17 try:
18 import readline
19 except ImportError:
20 try:
21 import pyreadline.rlmain
22 import pyreadline.unicode_helper # NOQA
23 import readline
24 except ImportError:
25 docomplete = False
26
27 if docomplete:
28 if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
29 readline_bind = "bind ^I rl_complete"
30 else:
31 readline_bind = "tab: complete"
32
33
34 class Completer(object):
35
36 def __init__(self, namespace={}):
37 if not isinstance(namespace, dict):
38 raise TypeError('namespace must be a dictionary')
39 self.namespace = namespace
40 self.path = [hy.compiler._special_form_compilers,
41 builtins.__dict__,
42 namespace]
43
44 self.tag_path = []
45
46 namespace.setdefault('__macros__', {})
47 namespace.setdefault('__tags__', {})
48
49 self.path.append(namespace['__macros__'])
50 self.tag_path.append(namespace['__tags__'])
51
52 def attr_matches(self, text):
53 # Borrowed from IPython's completer
54 m = re.match(r"(\S+(\.[\w-]+)*)\.([\w-]*)$", text)
55
56 if m:
57 expr, attr = m.group(1, 3)
58 attr = attr.replace("-", "_")
59 expr = expr.replace("-", "_")
60 else:
61 return []
62
63 try:
64 obj = eval(expr, self.namespace)
65 words = dir(obj)
66 except Exception:
67 return []
68
69 n = len(attr)
70 matches = []
71 for w in words:
72 if w[:n] == attr:
73 matches.append("{}.{}".format(
74 expr.replace("_", "-"), w.replace("_", "-")))
75 return matches
76
77 def global_matches(self, text):
78 matches = []
79 for p in self.path:
80 for k in p.keys():
81 if isinstance(k, str):
82 k = k.replace("_", "-")
83 if k.startswith(text):
84 matches.append(k)
85 return matches
86
87 def tag_matches(self, text):
88 text = text[1:]
89 matches = []
90 for p in self.tag_path:
91 for k in p.keys():
92 if isinstance(k, str):
93 if k.startswith(text):
94 matches.append("#{}".format(k))
95 return matches
96
97 def complete(self, text, state):
98 if text.startswith("#"):
99 matches = self.tag_matches(text)
100 elif "." in text:
101 matches = self.attr_matches(text)
102 else:
103 matches = self.global_matches(text)
104 try:
105 return matches[state]
106 except IndexError:
107 return None
108
109
110 @contextlib.contextmanager
111 def completion(completer=None):
112 delims = "()[]{} "
113 if not completer:
114 completer = Completer()
115
116 if docomplete:
117 readline.set_completer(completer.complete)
118 readline.set_completer_delims(delims)
119
120 history = os.path.expanduser("~/.hy-history")
121 readline.parse_and_bind("set blink-matching-paren on")
122
123 try:
124 readline.read_history_file(history)
125 except IOError:
126 open(history, 'a').close()
127
128 readline.parse_and_bind(readline_bind)
129
130 try:
131 yield
132 finally:
133 if docomplete:
134 readline.write_history_file(history)
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/completer.py b/hy/completer.py
--- a/hy/completer.py
+++ b/hy/completer.py
@@ -123,7 +123,7 @@
try:
readline.read_history_file(history)
except IOError:
- open(history, 'a').close()
+ pass
readline.parse_and_bind(readline_bind)
@@ -131,4 +131,7 @@
yield
finally:
if docomplete:
- readline.write_history_file(history)
+ try:
+ readline.write_history_file(history)
+ except IOError:
+ pass
| {"golden_diff": "diff --git a/hy/completer.py b/hy/completer.py\n--- a/hy/completer.py\n+++ b/hy/completer.py\n@@ -123,7 +123,7 @@\n try:\n readline.read_history_file(history)\n except IOError:\n- open(history, 'a').close()\n+ pass\n \n readline.parse_and_bind(readline_bind)\n \n@@ -131,4 +131,7 @@\n yield\n finally:\n if docomplete:\n- readline.write_history_file(history)\n+ try:\n+ readline.write_history_file(history)\n+ except IOError:\n+ pass\n", "issue": "REPL crashes when ~/.hy-history is inaccessible\nI've been working on updating the Hy package in Debian and in running the tests, I've noticed that the `test_bin` tests appear to expect to be able to do something with `~/.hy-history` (not sure if it's trying to read or write) but I had to set `HOME` to a real directory after which they passed. Looking at the tests, I imagine it's happening because they're interacting with the REPL via stdin so it's trying to load and/or use the history file?\n", "before_files": [{"content": "# Copyright 2019 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept ImportError:\n try:\n import pyreadline.rlmain\n import pyreadline.unicode_helper # NOQA\n import readline\n except ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer(object):\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [hy.compiler._special_form_compilers,\n builtins.__dict__,\n namespace]\n\n self.tag_path = []\n\n namespace.setdefault('__macros__', {})\n namespace.setdefault('__tags__', {})\n\n self.path.append(namespace['__macros__'])\n self.tag_path.append(namespace['__tags__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def tag_matches(self, text):\n text = text[1:]\n matches = []\n for p in self.tag_path:\n for k in p.keys():\n if isinstance(k, str):\n if k.startswith(text):\n matches.append(\"#{}\".format(k))\n return matches\n\n def complete(self, text, state):\n if text.startswith(\"#\"):\n matches = self.tag_matches(text)\n elif \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n 
readline.set_completer_delims(delims)\n\n history = os.path.expanduser(\"~/.hy-history\")\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except IOError:\n open(history, 'a').close()\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n readline.write_history_file(history)\n", "path": "hy/completer.py"}], "after_files": [{"content": "# Copyright 2019 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept ImportError:\n try:\n import pyreadline.rlmain\n import pyreadline.unicode_helper # NOQA\n import readline\n except ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer(object):\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [hy.compiler._special_form_compilers,\n builtins.__dict__,\n namespace]\n\n self.tag_path = []\n\n namespace.setdefault('__macros__', {})\n namespace.setdefault('__tags__', {})\n\n self.path.append(namespace['__macros__'])\n self.tag_path.append(namespace['__tags__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def tag_matches(self, text):\n text = text[1:]\n matches = []\n for p in self.tag_path:\n for k in p.keys():\n if isinstance(k, str):\n if k.startswith(text):\n matches.append(\"#{}\".format(k))\n return matches\n\n def complete(self, text, state):\n if text.startswith(\"#\"):\n matches = self.tag_matches(text)\n elif \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n readline.set_completer_delims(delims)\n\n history = os.path.expanduser(\"~/.hy-history\")\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except IOError:\n pass\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n try:\n readline.write_history_file(history)\n except IOError:\n pass\n", "path": "hy/completer.py"}]} | 1,439 | 143 |
gh_patches_debug_24689 | rasdani/github-patches | git_diff | getsentry__sentry-23957 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I have an issue when I import the export json file
## Important Details
On-Premise w/ Docker, version 9.1.2
## Description
I have two servers with a sentry. There are several projects on the first server, and I would like to copy these projects to the clear second server.
I use the export/import commands. The export command works fine. However, when I run the import command, I get an error.
## Steps to Reproduce
1. Run command 'sentry export sentry_export.json' on the first server
2. Run command 'sentry import sentry_export.json' on the second server
3. Get an error
Good items to include here include:
`Traceback (most recent call last):
File "/usr/local/bin/sentry", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py", line 162, in main
cli(prog_name=get_prog(), obj={}, max_content_width=100)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py", line 36, in inner
return ctx.invoke(f, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/sentry/runner/commands/backup.py", line 21, in import_
for obj in serializers.deserialize("json", src, stream=True, use_natural_keys=True):
File "/usr/local/lib/python2.7/site-packages/django/core/serializers/json.py", line 76, in Deserializer
six.reraise(DeserializationError, DeserializationError(e), sys.exc_info()[2])
File "/usr/local/lib/python2.7/site-packages/django/core/serializers/json.py", line 70, in Deserializer
for obj in PythonDeserializer(objects, **options):
File "/usr/local/lib/python2.7/site-packages/django/core/serializers/python.py", line 140, in Deserializer
data[field.name] = field.to_python(field_value)
File "/usr/local/lib/python2.7/site-packages/sentry/db/models/fields/array.py", line 56, in to_python
value = json.loads(value)
File "/usr/local/lib/python2.7/site-packages/sentry/utils/json.py", line 111, in loads
return _default_decoder.decode(value)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/local/lib/python2.7/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
django.core.serializers.base.DeserializationError: Expecting value: line 1 column 2 (char 1)`
### What you expected to happen
Import works fine too
### Possible Solution
[If you have an idea on how this could be solved include that detail here.]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/runner/commands/backup.py`
Content:
```
1 import click
2 from sentry.runner.decorators import configuration
3
4
5 @click.command(name="import")
6 @click.argument("src", type=click.File("rb"))
7 @configuration
8 def import_(src):
9 "Imports data from a Sentry export."
10
11 from django.core import serializers
12
13 for obj in serializers.deserialize("json", src, stream=True, use_natural_keys=True):
14 obj.save()
15
16
17 def sort_dependencies():
18 """
19 Similar to Django's except that we discard the important of natural keys
20 when sorting dependencies (i.e. it works without them).
21 """
22 from django.apps import apps
23
24 # Process the list of models, and get the list of dependencies
25 model_dependencies = []
26 models = set()
27 for app_config in apps.get_app_configs():
28 model_list = app_config.get_models()
29
30 for model in model_list:
31 models.add(model)
32 # Add any explicitly defined dependencies
33 if hasattr(model, "natural_key"):
34 deps = getattr(model.natural_key, "dependencies", [])
35 if deps:
36 deps = [apps.get_model(*d.split(".")) for d in deps]
37 else:
38 deps = []
39
40 # Now add a dependency for any FK relation with a model that
41 # defines a natural key
42 for field in model._meta.fields:
43 if hasattr(field.rel, "to"):
44 rel_model = field.rel.to
45 if rel_model != model:
46 deps.append(rel_model)
47
48 # Also add a dependency for any simple M2M relation with a model
49 # that defines a natural key. M2M relations with explicit through
50 # models don't count as dependencies.
51 for field in model._meta.many_to_many:
52 rel_model = field.rel.to
53 if rel_model != model:
54 deps.append(rel_model)
55 model_dependencies.append((model, deps))
56
57 model_dependencies.reverse()
58 # Now sort the models to ensure that dependencies are met. This
59 # is done by repeatedly iterating over the input list of models.
60 # If all the dependencies of a given model are in the final list,
61 # that model is promoted to the end of the final list. This process
62 # continues until the input list is empty, or we do a full iteration
63 # over the input models without promoting a model to the final list.
64 # If we do a full iteration without a promotion, that means there are
65 # circular dependencies in the list.
66 model_list = []
67 while model_dependencies:
68 skipped = []
69 changed = False
70 while model_dependencies:
71 model, deps = model_dependencies.pop()
72
73 # If all of the models in the dependency list are either already
74 # on the final model list, or not on the original serialization list,
75 # then we've found another model with all it's dependencies satisfied.
76 found = True
77 for candidate in ((d not in models or d in model_list) for d in deps):
78 if not candidate:
79 found = False
80 if found:
81 model_list.append(model)
82 changed = True
83 else:
84 skipped.append((model, deps))
85 if not changed:
86 raise RuntimeError(
87 "Can't resolve dependencies for %s in serialized app list."
88 % ", ".join(
89 f"{model._meta.app_label}.{model._meta.object_name}"
90 for model, deps in sorted(skipped, key=lambda obj: obj[0].__name__)
91 )
92 )
93 model_dependencies = skipped
94
95 return model_list
96
97
98 @click.command()
99 @click.argument("dest", default="-", type=click.File("w"))
100 @click.option("--silent", "-q", default=False, is_flag=True, help="Silence all debug output.")
101 @click.option(
102 "--indent", default=2, help="Number of spaces to indent for the JSON output. (default: 2)"
103 )
104 @click.option("--exclude", default=None, help="Models to exclude from export.", metavar="MODELS")
105 @configuration
106 def export(dest, silent, indent, exclude):
107 "Exports core metadata for the Sentry installation."
108
109 if exclude is None:
110 exclude = ()
111 else:
112 exclude = exclude.lower().split(",")
113
114 from django.core import serializers
115
116 def yield_objects():
117 # Collate the objects to be serialized.
118 for model in sort_dependencies():
119 if (
120 not getattr(model, "__core__", True)
121 or model.__name__.lower() in exclude
122 or model._meta.proxy
123 ):
124 if not silent:
125 click.echo(f">> Skipping model <{model.__name__}>", err=True)
126 continue
127
128 queryset = model._base_manager.order_by(model._meta.pk.name)
129 yield from queryset.iterator()
130
131 if not silent:
132 click.echo(">> Beginning export", err=True)
133 serializers.serialize(
134 "json", yield_objects(), indent=indent, stream=dest, use_natural_foreign_keys=True
135 )
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/runner/commands/backup.py b/src/sentry/runner/commands/backup.py
--- a/src/sentry/runner/commands/backup.py
+++ b/src/sentry/runner/commands/backup.py
@@ -1,4 +1,11 @@
import click
+
+from django.apps import apps
+from django.core import management, serializers
+from django.db import connection
+
+from io import StringIO
+
from sentry.runner.decorators import configuration
@@ -8,11 +15,17 @@
def import_(src):
"Imports data from a Sentry export."
- from django.core import serializers
-
for obj in serializers.deserialize("json", src, stream=True, use_natural_keys=True):
obj.save()
+ sequence_reset_sql = StringIO()
+
+ for app in apps.get_app_configs():
+ management.call_command("sqlsequencereset", app.label, stdout=sequence_reset_sql)
+
+ with connection.cursor() as cursor:
+ cursor.execute(sequence_reset_sql.getvalue())
+
def sort_dependencies():
"""
@@ -111,8 +124,6 @@
else:
exclude = exclude.lower().split(",")
- from django.core import serializers
-
def yield_objects():
# Collate the objects to be serialized.
for model in sort_dependencies():
| {"golden_diff": "diff --git a/src/sentry/runner/commands/backup.py b/src/sentry/runner/commands/backup.py\n--- a/src/sentry/runner/commands/backup.py\n+++ b/src/sentry/runner/commands/backup.py\n@@ -1,4 +1,11 @@\n import click\n+\n+from django.apps import apps\n+from django.core import management, serializers\n+from django.db import connection\n+\n+from io import StringIO\n+\n from sentry.runner.decorators import configuration\n \n \n@@ -8,11 +15,17 @@\n def import_(src):\n \"Imports data from a Sentry export.\"\n \n- from django.core import serializers\n-\n for obj in serializers.deserialize(\"json\", src, stream=True, use_natural_keys=True):\n obj.save()\n \n+ sequence_reset_sql = StringIO()\n+\n+ for app in apps.get_app_configs():\n+ management.call_command(\"sqlsequencereset\", app.label, stdout=sequence_reset_sql)\n+\n+ with connection.cursor() as cursor:\n+ cursor.execute(sequence_reset_sql.getvalue())\n+\n \n def sort_dependencies():\n \"\"\"\n@@ -111,8 +124,6 @@\n else:\n exclude = exclude.lower().split(\",\")\n \n- from django.core import serializers\n-\n def yield_objects():\n # Collate the objects to be serialized.\n for model in sort_dependencies():\n", "issue": "I have an issue when I import the export json file\n## Important Details\r\n\r\nOn-Premise w/ Docker, version 9.1.2\r\n\r\n## Description\r\n\r\nI have two servers with a sentry. There are several projects on the first server, and I would like to copy these projects to the clear second server.\r\nI use the export/import commands. The export command works fine. However, when I run the import command, I get an error.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Run command 'sentry export sentry_export.json' on the first server\r\n2. Run command 'sentry import sentry_export.json' on the second server\r\n3. 
Get an error\r\n\r\nGood items to include here include:\r\n\r\n`Traceback (most recent call last):\r\n File \"/usr/local/bin/sentry\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/site-packages/sentry/runner/__init__.py\", line 162, in main\r\n cli(prog_name=get_prog(), obj={}, max_content_width=100)\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/click/decorators.py\", line 17, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/sentry/runner/decorators.py\", line 36, in inner\r\n return ctx.invoke(f, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/site-packages/sentry/runner/commands/backup.py\", line 21, in import_\r\n for obj in serializers.deserialize(\"json\", src, stream=True, use_natural_keys=True):\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/serializers/json.py\", line 76, in Deserializer\r\n six.reraise(DeserializationError, DeserializationError(e), sys.exc_info()[2])\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/serializers/json.py\", line 70, in Deserializer\r\n for obj in PythonDeserializer(objects, **options):\r\n File \"/usr/local/lib/python2.7/site-packages/django/core/serializers/python.py\", line 140, in Deserializer\r\n data[field.name] = field.to_python(field_value)\r\n File \"/usr/local/lib/python2.7/site-packages/sentry/db/models/fields/array.py\", line 56, in to_python\r\n value = json.loads(value)\r\n File \"/usr/local/lib/python2.7/site-packages/sentry/utils/json.py\", line 111, in loads\r\n return _default_decoder.decode(value)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 370, in decode\r\n obj, end = self.raw_decode(s)\r\n File \"/usr/local/lib/python2.7/site-packages/simplejson/decoder.py\", line 400, in raw_decode\r\n return self.scan_once(s, idx=_w(s, idx).end())\r\ndjango.core.serializers.base.DeserializationError: Expecting value: line 1 column 2 (char 1)`\r\n\r\n### What you expected to happen\r\n\r\nImport works fine too\r\n\r\n### Possible Solution\r\n\r\n[If you have an idea on how this could be solved include that detail here.]\r\n\n", "before_files": [{"content": "import click\nfrom sentry.runner.decorators import configuration\n\n\[email protected](name=\"import\")\[email protected](\"src\", type=click.File(\"rb\"))\n@configuration\ndef import_(src):\n \"Imports data from a Sentry export.\"\n\n from django.core import serializers\n\n for obj in serializers.deserialize(\"json\", src, stream=True, use_natural_keys=True):\n obj.save()\n\n\ndef sort_dependencies():\n \"\"\"\n Similar to Django's except that we discard the important of natural keys\n when sorting dependencies (i.e. 
it works without them).\n \"\"\"\n from django.apps import apps\n\n # Process the list of models, and get the list of dependencies\n model_dependencies = []\n models = set()\n for app_config in apps.get_app_configs():\n model_list = app_config.get_models()\n\n for model in model_list:\n models.add(model)\n # Add any explicitly defined dependencies\n if hasattr(model, \"natural_key\"):\n deps = getattr(model.natural_key, \"dependencies\", [])\n if deps:\n deps = [apps.get_model(*d.split(\".\")) for d in deps]\n else:\n deps = []\n\n # Now add a dependency for any FK relation with a model that\n # defines a natural key\n for field in model._meta.fields:\n if hasattr(field.rel, \"to\"):\n rel_model = field.rel.to\n if rel_model != model:\n deps.append(rel_model)\n\n # Also add a dependency for any simple M2M relation with a model\n # that defines a natural key. M2M relations with explicit through\n # models don't count as dependencies.\n for field in model._meta.many_to_many:\n rel_model = field.rel.to\n if rel_model != model:\n deps.append(rel_model)\n model_dependencies.append((model, deps))\n\n model_dependencies.reverse()\n # Now sort the models to ensure that dependencies are met. This\n # is done by repeatedly iterating over the input list of models.\n # If all the dependencies of a given model are in the final list,\n # that model is promoted to the end of the final list. This process\n # continues until the input list is empty, or we do a full iteration\n # over the input models without promoting a model to the final list.\n # If we do a full iteration without a promotion, that means there are\n # circular dependencies in the list.\n model_list = []\n while model_dependencies:\n skipped = []\n changed = False\n while model_dependencies:\n model, deps = model_dependencies.pop()\n\n # If all of the models in the dependency list are either already\n # on the final model list, or not on the original serialization list,\n # then we've found another model with all it's dependencies satisfied.\n found = True\n for candidate in ((d not in models or d in model_list) for d in deps):\n if not candidate:\n found = False\n if found:\n model_list.append(model)\n changed = True\n else:\n skipped.append((model, deps))\n if not changed:\n raise RuntimeError(\n \"Can't resolve dependencies for %s in serialized app list.\"\n % \", \".join(\n f\"{model._meta.app_label}.{model._meta.object_name}\"\n for model, deps in sorted(skipped, key=lambda obj: obj[0].__name__)\n )\n )\n model_dependencies = skipped\n\n return model_list\n\n\[email protected]()\[email protected](\"dest\", default=\"-\", type=click.File(\"w\"))\[email protected](\"--silent\", \"-q\", default=False, is_flag=True, help=\"Silence all debug output.\")\[email protected](\n \"--indent\", default=2, help=\"Number of spaces to indent for the JSON output. 
(default: 2)\"\n)\[email protected](\"--exclude\", default=None, help=\"Models to exclude from export.\", metavar=\"MODELS\")\n@configuration\ndef export(dest, silent, indent, exclude):\n \"Exports core metadata for the Sentry installation.\"\n\n if exclude is None:\n exclude = ()\n else:\n exclude = exclude.lower().split(\",\")\n\n from django.core import serializers\n\n def yield_objects():\n # Collate the objects to be serialized.\n for model in sort_dependencies():\n if (\n not getattr(model, \"__core__\", True)\n or model.__name__.lower() in exclude\n or model._meta.proxy\n ):\n if not silent:\n click.echo(f\">> Skipping model <{model.__name__}>\", err=True)\n continue\n\n queryset = model._base_manager.order_by(model._meta.pk.name)\n yield from queryset.iterator()\n\n if not silent:\n click.echo(\">> Beginning export\", err=True)\n serializers.serialize(\n \"json\", yield_objects(), indent=indent, stream=dest, use_natural_foreign_keys=True\n )\n", "path": "src/sentry/runner/commands/backup.py"}], "after_files": [{"content": "import click\n\nfrom django.apps import apps\nfrom django.core import management, serializers\nfrom django.db import connection\n\nfrom io import StringIO\n\nfrom sentry.runner.decorators import configuration\n\n\[email protected](name=\"import\")\[email protected](\"src\", type=click.File(\"rb\"))\n@configuration\ndef import_(src):\n \"Imports data from a Sentry export.\"\n\n for obj in serializers.deserialize(\"json\", src, stream=True, use_natural_keys=True):\n obj.save()\n\n sequence_reset_sql = StringIO()\n\n for app in apps.get_app_configs():\n management.call_command(\"sqlsequencereset\", app.label, stdout=sequence_reset_sql)\n\n with connection.cursor() as cursor:\n cursor.execute(sequence_reset_sql.getvalue())\n\n\ndef sort_dependencies():\n \"\"\"\n Similar to Django's except that we discard the important of natural keys\n when sorting dependencies (i.e. it works without them).\n \"\"\"\n from django.apps import apps\n\n # Process the list of models, and get the list of dependencies\n model_dependencies = []\n models = set()\n for app_config in apps.get_app_configs():\n model_list = app_config.get_models()\n\n for model in model_list:\n models.add(model)\n # Add any explicitly defined dependencies\n if hasattr(model, \"natural_key\"):\n deps = getattr(model.natural_key, \"dependencies\", [])\n if deps:\n deps = [apps.get_model(*d.split(\".\")) for d in deps]\n else:\n deps = []\n\n # Now add a dependency for any FK relation with a model that\n # defines a natural key\n for field in model._meta.fields:\n if hasattr(field.rel, \"to\"):\n rel_model = field.rel.to\n if rel_model != model:\n deps.append(rel_model)\n\n # Also add a dependency for any simple M2M relation with a model\n # that defines a natural key. M2M relations with explicit through\n # models don't count as dependencies.\n for field in model._meta.many_to_many:\n rel_model = field.rel.to\n if rel_model != model:\n deps.append(rel_model)\n model_dependencies.append((model, deps))\n\n model_dependencies.reverse()\n # Now sort the models to ensure that dependencies are met. This\n # is done by repeatedly iterating over the input list of models.\n # If all the dependencies of a given model are in the final list,\n # that model is promoted to the end of the final list. 
This process\n # continues until the input list is empty, or we do a full iteration\n # over the input models without promoting a model to the final list.\n # If we do a full iteration without a promotion, that means there are\n # circular dependencies in the list.\n model_list = []\n while model_dependencies:\n skipped = []\n changed = False\n while model_dependencies:\n model, deps = model_dependencies.pop()\n\n # If all of the models in the dependency list are either already\n # on the final model list, or not on the original serialization list,\n # then we've found another model with all it's dependencies satisfied.\n found = True\n for candidate in ((d not in models or d in model_list) for d in deps):\n if not candidate:\n found = False\n if found:\n model_list.append(model)\n changed = True\n else:\n skipped.append((model, deps))\n if not changed:\n raise RuntimeError(\n \"Can't resolve dependencies for %s in serialized app list.\"\n % \", \".join(\n f\"{model._meta.app_label}.{model._meta.object_name}\"\n for model, deps in sorted(skipped, key=lambda obj: obj[0].__name__)\n )\n )\n model_dependencies = skipped\n\n return model_list\n\n\[email protected]()\[email protected](\"dest\", default=\"-\", type=click.File(\"w\"))\[email protected](\"--silent\", \"-q\", default=False, is_flag=True, help=\"Silence all debug output.\")\[email protected](\n \"--indent\", default=2, help=\"Number of spaces to indent for the JSON output. (default: 2)\"\n)\[email protected](\"--exclude\", default=None, help=\"Models to exclude from export.\", metavar=\"MODELS\")\n@configuration\ndef export(dest, silent, indent, exclude):\n \"Exports core metadata for the Sentry installation.\"\n\n if exclude is None:\n exclude = ()\n else:\n exclude = exclude.lower().split(\",\")\n\n def yield_objects():\n # Collate the objects to be serialized.\n for model in sort_dependencies():\n if (\n not getattr(model, \"__core__\", True)\n or model.__name__.lower() in exclude\n or model._meta.proxy\n ):\n if not silent:\n click.echo(f\">> Skipping model <{model.__name__}>\", err=True)\n continue\n\n queryset = model._base_manager.order_by(model._meta.pk.name)\n yield from queryset.iterator()\n\n if not silent:\n click.echo(\">> Beginning export\", err=True)\n serializers.serialize(\n \"json\", yield_objects(), indent=indent, stream=dest, use_natural_foreign_keys=True\n )\n", "path": "src/sentry/runner/commands/backup.py"}]} | 2,529 | 293 |
gh_patches_debug_9957 | rasdani/github-patches | git_diff | pypa__pip-4069 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
deprecation message for list format suggests wrong configuration value
* Pip version: 9.0.0
* Python version: 2.7.12
* Operating System: OS X
### Description:
The deprecation message emitted from `pip list` about the `--format` option suggests the wrong configuration value.
### What I've run:
```
$ pip list --outdated
DEPRECATION: The default format will switch to columns in the future. You can use --format=legacy (or define a list_format in your pip.conf) to disable this warning.
```
The correct configuration option is:
```ini
[list]
format =
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/commands/list.py`
Content:
```
1 from __future__ import absolute_import
2
3 import json
4 import logging
5 import warnings
6 try:
7 from itertools import zip_longest
8 except ImportError:
9 from itertools import izip_longest as zip_longest
10
11 from pip._vendor import six
12
13 from pip.basecommand import Command
14 from pip.exceptions import CommandError
15 from pip.index import PackageFinder
16 from pip.utils import (
17 get_installed_distributions, dist_is_editable)
18 from pip.utils.deprecation import RemovedInPip10Warning
19 from pip.cmdoptions import make_option_group, index_group
20
21 logger = logging.getLogger(__name__)
22
23
24 class ListCommand(Command):
25 """
26 List installed packages, including editables.
27
28 Packages are listed in a case-insensitive sorted order.
29 """
30 name = 'list'
31 usage = """
32 %prog [options]"""
33 summary = 'List installed packages.'
34
35 def __init__(self, *args, **kw):
36 super(ListCommand, self).__init__(*args, **kw)
37
38 cmd_opts = self.cmd_opts
39
40 cmd_opts.add_option(
41 '-o', '--outdated',
42 action='store_true',
43 default=False,
44 help='List outdated packages')
45 cmd_opts.add_option(
46 '-u', '--uptodate',
47 action='store_true',
48 default=False,
49 help='List uptodate packages')
50 cmd_opts.add_option(
51 '-e', '--editable',
52 action='store_true',
53 default=False,
54 help='List editable projects.')
55 cmd_opts.add_option(
56 '-l', '--local',
57 action='store_true',
58 default=False,
59 help=('If in a virtualenv that has global access, do not list '
60 'globally-installed packages.'),
61 )
62 self.cmd_opts.add_option(
63 '--user',
64 dest='user',
65 action='store_true',
66 default=False,
67 help='Only output packages installed in user-site.')
68
69 cmd_opts.add_option(
70 '--pre',
71 action='store_true',
72 default=False,
73 help=("Include pre-release and development versions. By default, "
74 "pip only finds stable versions."),
75 )
76
77 cmd_opts.add_option(
78 '--format',
79 action='store',
80 dest='list_format',
81 choices=('legacy', 'columns', 'freeze', 'json'),
82 help="Select the output format among: legacy (default), columns, "
83 "freeze or json.",
84 )
85
86 cmd_opts.add_option(
87 '--not-required',
88 action='store_true',
89 dest='not_required',
90 help="List packages that are not dependencies of "
91 "installed packages.",
92 )
93
94 index_opts = make_option_group(index_group, self.parser)
95
96 self.parser.insert_option_group(0, index_opts)
97 self.parser.insert_option_group(0, cmd_opts)
98
99 def _build_package_finder(self, options, index_urls, session):
100 """
101 Create a package finder appropriate to this list command.
102 """
103 return PackageFinder(
104 find_links=options.find_links,
105 index_urls=index_urls,
106 allow_all_prereleases=options.pre,
107 trusted_hosts=options.trusted_hosts,
108 process_dependency_links=options.process_dependency_links,
109 session=session,
110 )
111
112 def run(self, options, args):
113 if options.allow_external:
114 warnings.warn(
115 "--allow-external has been deprecated and will be removed in "
116 "the future. Due to changes in the repository protocol, it no "
117 "longer has any effect.",
118 RemovedInPip10Warning,
119 )
120
121 if options.allow_all_external:
122 warnings.warn(
123 "--allow-all-external has been deprecated and will be removed "
124 "in the future. Due to changes in the repository protocol, it "
125 "no longer has any effect.",
126 RemovedInPip10Warning,
127 )
128
129 if options.allow_unverified:
130 warnings.warn(
131 "--allow-unverified has been deprecated and will be removed "
132 "in the future. Due to changes in the repository protocol, it "
133 "no longer has any effect.",
134 RemovedInPip10Warning,
135 )
136
137 if options.list_format is None:
138 warnings.warn(
139 "The default format will switch to columns in the future. "
140 "You can use --format=legacy (or define a list_format "
141 "in your pip.conf) to disable this warning.",
142 RemovedInPip10Warning,
143 )
144
145 if options.outdated and options.uptodate:
146 raise CommandError(
147 "Options --outdated and --uptodate cannot be combined.")
148
149 packages = get_installed_distributions(
150 local_only=options.local,
151 user_only=options.user,
152 editables_only=options.editable,
153 )
154
155 if options.outdated:
156 packages = self.get_outdated(packages, options)
157 elif options.uptodate:
158 packages = self.get_uptodate(packages, options)
159
160 if options.not_required:
161 packages = self.get_not_required(packages, options)
162
163 self.output_package_listing(packages, options)
164
165 def get_outdated(self, packages, options):
166 return [
167 dist for dist in self.iter_packages_latest_infos(packages, options)
168 if dist.latest_version > dist.parsed_version
169 ]
170
171 def get_uptodate(self, packages, options):
172 return [
173 dist for dist in self.iter_packages_latest_infos(packages, options)
174 if dist.latest_version == dist.parsed_version
175 ]
176
177 def get_not_required(self, packages, options):
178 dep_keys = set()
179 for dist in packages:
180 dep_keys.update(requirement.key for requirement in dist.requires())
181 return set(pkg for pkg in packages if pkg.key not in dep_keys)
182
183 def iter_packages_latest_infos(self, packages, options):
184 index_urls = [options.index_url] + options.extra_index_urls
185 if options.no_index:
186 logger.debug('Ignoring indexes: %s', ','.join(index_urls))
187 index_urls = []
188
189 dependency_links = []
190 for dist in packages:
191 if dist.has_metadata('dependency_links.txt'):
192 dependency_links.extend(
193 dist.get_metadata_lines('dependency_links.txt'),
194 )
195
196 with self._build_session(options) as session:
197 finder = self._build_package_finder(options, index_urls, session)
198 finder.add_dependency_links(dependency_links)
199
200 for dist in packages:
201 typ = 'unknown'
202 all_candidates = finder.find_all_candidates(dist.key)
203 if not options.pre:
204 # Remove prereleases
205 all_candidates = [candidate for candidate in all_candidates
206 if not candidate.version.is_prerelease]
207
208 if not all_candidates:
209 continue
210 best_candidate = max(all_candidates,
211 key=finder._candidate_sort_key)
212 remote_version = best_candidate.version
213 if best_candidate.location.is_wheel:
214 typ = 'wheel'
215 else:
216 typ = 'sdist'
217 # This is dirty but makes the rest of the code much cleaner
218 dist.latest_version = remote_version
219 dist.latest_filetype = typ
220 yield dist
221
222 def output_legacy(self, dist):
223 if dist_is_editable(dist):
224 return '%s (%s, %s)' % (
225 dist.project_name,
226 dist.version,
227 dist.location,
228 )
229 else:
230 return '%s (%s)' % (dist.project_name, dist.version)
231
232 def output_legacy_latest(self, dist):
233 return '%s - Latest: %s [%s]' % (
234 self.output_legacy(dist),
235 dist.latest_version,
236 dist.latest_filetype,
237 )
238
239 def output_package_listing(self, packages, options):
240 packages = sorted(
241 packages,
242 key=lambda dist: dist.project_name.lower(),
243 )
244 if options.list_format == 'columns' and packages:
245 data, header = format_for_columns(packages, options)
246 self.output_package_listing_columns(data, header)
247 elif options.list_format == 'freeze':
248 for dist in packages:
249 logger.info("%s==%s", dist.project_name, dist.version)
250 elif options.list_format == 'json':
251 logger.info(format_for_json(packages, options))
252 else: # legacy
253 for dist in packages:
254 if options.outdated:
255 logger.info(self.output_legacy_latest(dist))
256 else:
257 logger.info(self.output_legacy(dist))
258
259 def output_package_listing_columns(self, data, header):
260 # insert the header first: we need to know the size of column names
261 if len(data) > 0:
262 data.insert(0, header)
263
264 pkg_strings, sizes = tabulate(data)
265
266 # Create and add a separator.
267 if len(data) > 0:
268 pkg_strings.insert(1, " ".join(map(lambda x: '-' * x, sizes)))
269
270 for val in pkg_strings:
271 logger.info(val)
272
273
274 def tabulate(vals):
275 # From pfmoore on GitHub:
276 # https://github.com/pypa/pip/issues/3651#issuecomment-216932564
277 assert len(vals) > 0
278
279 sizes = [0] * max(len(x) for x in vals)
280 for row in vals:
281 sizes = [max(s, len(str(c))) for s, c in zip_longest(sizes, row)]
282
283 result = []
284 for row in vals:
285 display = " ".join([str(c).ljust(s) if c is not None else ''
286 for s, c in zip_longest(sizes, row)])
287 result.append(display)
288
289 return result, sizes
290
291
292 def format_for_columns(pkgs, options):
293 """
294 Convert the package data into something usable
295 by output_package_listing_columns.
296 """
297 running_outdated = options.outdated
298 # Adjust the header for the `pip list --outdated` case.
299 if running_outdated:
300 header = ["Package", "Version", "Latest", "Type"]
301 else:
302 header = ["Package", "Version"]
303
304 data = []
305 if any(dist_is_editable(x) for x in pkgs):
306 header.append("Location")
307
308 for proj in pkgs:
309 # if we're working on the 'outdated' list, separate out the
310 # latest_version and type
311 row = [proj.project_name, proj.version]
312
313 if running_outdated:
314 row.append(proj.latest_version)
315 row.append(proj.latest_filetype)
316
317 if dist_is_editable(proj):
318 row.append(proj.location)
319
320 data.append(row)
321
322 return data, header
323
324
325 def format_for_json(packages, options):
326 data = []
327 for dist in packages:
328 info = {
329 'name': dist.project_name,
330 'version': six.text_type(dist.version),
331 }
332 if options.outdated:
333 info['latest_version'] = six.text_type(dist.latest_version)
334 info['latest_filetype'] = dist.latest_filetype
335 data.append(info)
336 return json.dumps(data)
337
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pip/commands/list.py b/pip/commands/list.py
--- a/pip/commands/list.py
+++ b/pip/commands/list.py
@@ -137,8 +137,9 @@
if options.list_format is None:
warnings.warn(
"The default format will switch to columns in the future. "
- "You can use --format=legacy (or define a list_format "
- "in your pip.conf) to disable this warning.",
+ "You can use --format=(legacy|columns) (or define a "
+ "format=(legacy|columns) in your pip.conf under the [list] "
+ "section) to disable this warning.",
RemovedInPip10Warning,
)
| {"golden_diff": "diff --git a/pip/commands/list.py b/pip/commands/list.py\n--- a/pip/commands/list.py\n+++ b/pip/commands/list.py\n@@ -137,8 +137,9 @@\n if options.list_format is None:\n warnings.warn(\n \"The default format will switch to columns in the future. \"\n- \"You can use --format=legacy (or define a list_format \"\n- \"in your pip.conf) to disable this warning.\",\n+ \"You can use --format=(legacy|columns) (or define a \"\n+ \"format=(legacy|columns) in your pip.conf under the [list] \"\n+ \"section) to disable this warning.\",\n RemovedInPip10Warning,\n )\n", "issue": "deprecation message for list format suggests wrong configuration value\n* Pip version: 9.0.0\r\n* Python version: 2.7.12\r\n* Operating System: OS X\r\n\r\n### Description:\r\n\r\nThe deprecation message emitted from `pip list` about the `--format` option suggests the wrong configuration value.\r\n\r\n### What I've run:\r\n\r\n```\r\n$ pip list --outdated\r\nDEPRECATION: The default format will switch to columns in the future. You can use --format=legacy (or define a list_format in your pip.conf) to disable this warning.\r\n```\r\n\r\nThe correct configuration option is:\r\n\r\n```ini\r\n[list]\r\nformat =\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport json\nimport logging\nimport warnings\ntry:\n from itertools import zip_longest\nexcept ImportError:\n from itertools import izip_longest as zip_longest\n\nfrom pip._vendor import six\n\nfrom pip.basecommand import Command\nfrom pip.exceptions import CommandError\nfrom pip.index import PackageFinder\nfrom pip.utils import (\n get_installed_distributions, dist_is_editable)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip.cmdoptions import make_option_group, index_group\n\nlogger = logging.getLogger(__name__)\n\n\nclass ListCommand(Command):\n \"\"\"\n List installed packages, including editables.\n\n Packages are listed in a case-insensitive sorted order.\n \"\"\"\n name = 'list'\n usage = \"\"\"\n %prog [options]\"\"\"\n summary = 'List installed packages.'\n\n def __init__(self, *args, **kw):\n super(ListCommand, self).__init__(*args, **kw)\n\n cmd_opts = self.cmd_opts\n\n cmd_opts.add_option(\n '-o', '--outdated',\n action='store_true',\n default=False,\n help='List outdated packages')\n cmd_opts.add_option(\n '-u', '--uptodate',\n action='store_true',\n default=False,\n help='List uptodate packages')\n cmd_opts.add_option(\n '-e', '--editable',\n action='store_true',\n default=False,\n help='List editable projects.')\n cmd_opts.add_option(\n '-l', '--local',\n action='store_true',\n default=False,\n help=('If in a virtualenv that has global access, do not list '\n 'globally-installed packages.'),\n )\n self.cmd_opts.add_option(\n '--user',\n dest='user',\n action='store_true',\n default=False,\n help='Only output packages installed in user-site.')\n\n cmd_opts.add_option(\n '--pre',\n action='store_true',\n default=False,\n help=(\"Include pre-release and development versions. 
By default, \"\n \"pip only finds stable versions.\"),\n )\n\n cmd_opts.add_option(\n '--format',\n action='store',\n dest='list_format',\n choices=('legacy', 'columns', 'freeze', 'json'),\n help=\"Select the output format among: legacy (default), columns, \"\n \"freeze or json.\",\n )\n\n cmd_opts.add_option(\n '--not-required',\n action='store_true',\n dest='not_required',\n help=\"List packages that are not dependencies of \"\n \"installed packages.\",\n )\n\n index_opts = make_option_group(index_group, self.parser)\n\n self.parser.insert_option_group(0, index_opts)\n self.parser.insert_option_group(0, cmd_opts)\n\n def _build_package_finder(self, options, index_urls, session):\n \"\"\"\n Create a package finder appropriate to this list command.\n \"\"\"\n return PackageFinder(\n find_links=options.find_links,\n index_urls=index_urls,\n allow_all_prereleases=options.pre,\n trusted_hosts=options.trusted_hosts,\n process_dependency_links=options.process_dependency_links,\n session=session,\n )\n\n def run(self, options, args):\n if options.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.list_format is None:\n warnings.warn(\n \"The default format will switch to columns in the future. 
\"\n \"You can use --format=legacy (or define a list_format \"\n \"in your pip.conf) to disable this warning.\",\n RemovedInPip10Warning,\n )\n\n if options.outdated and options.uptodate:\n raise CommandError(\n \"Options --outdated and --uptodate cannot be combined.\")\n\n packages = get_installed_distributions(\n local_only=options.local,\n user_only=options.user,\n editables_only=options.editable,\n )\n\n if options.outdated:\n packages = self.get_outdated(packages, options)\n elif options.uptodate:\n packages = self.get_uptodate(packages, options)\n\n if options.not_required:\n packages = self.get_not_required(packages, options)\n\n self.output_package_listing(packages, options)\n\n def get_outdated(self, packages, options):\n return [\n dist for dist in self.iter_packages_latest_infos(packages, options)\n if dist.latest_version > dist.parsed_version\n ]\n\n def get_uptodate(self, packages, options):\n return [\n dist for dist in self.iter_packages_latest_infos(packages, options)\n if dist.latest_version == dist.parsed_version\n ]\n\n def get_not_required(self, packages, options):\n dep_keys = set()\n for dist in packages:\n dep_keys.update(requirement.key for requirement in dist.requires())\n return set(pkg for pkg in packages if pkg.key not in dep_keys)\n\n def iter_packages_latest_infos(self, packages, options):\n index_urls = [options.index_url] + options.extra_index_urls\n if options.no_index:\n logger.debug('Ignoring indexes: %s', ','.join(index_urls))\n index_urls = []\n\n dependency_links = []\n for dist in packages:\n if dist.has_metadata('dependency_links.txt'):\n dependency_links.extend(\n dist.get_metadata_lines('dependency_links.txt'),\n )\n\n with self._build_session(options) as session:\n finder = self._build_package_finder(options, index_urls, session)\n finder.add_dependency_links(dependency_links)\n\n for dist in packages:\n typ = 'unknown'\n all_candidates = finder.find_all_candidates(dist.key)\n if not options.pre:\n # Remove prereleases\n all_candidates = [candidate for candidate in all_candidates\n if not candidate.version.is_prerelease]\n\n if not all_candidates:\n continue\n best_candidate = max(all_candidates,\n key=finder._candidate_sort_key)\n remote_version = best_candidate.version\n if best_candidate.location.is_wheel:\n typ = 'wheel'\n else:\n typ = 'sdist'\n # This is dirty but makes the rest of the code much cleaner\n dist.latest_version = remote_version\n dist.latest_filetype = typ\n yield dist\n\n def output_legacy(self, dist):\n if dist_is_editable(dist):\n return '%s (%s, %s)' % (\n dist.project_name,\n dist.version,\n dist.location,\n )\n else:\n return '%s (%s)' % (dist.project_name, dist.version)\n\n def output_legacy_latest(self, dist):\n return '%s - Latest: %s [%s]' % (\n self.output_legacy(dist),\n dist.latest_version,\n dist.latest_filetype,\n )\n\n def output_package_listing(self, packages, options):\n packages = sorted(\n packages,\n key=lambda dist: dist.project_name.lower(),\n )\n if options.list_format == 'columns' and packages:\n data, header = format_for_columns(packages, options)\n self.output_package_listing_columns(data, header)\n elif options.list_format == 'freeze':\n for dist in packages:\n logger.info(\"%s==%s\", dist.project_name, dist.version)\n elif options.list_format == 'json':\n logger.info(format_for_json(packages, options))\n else: # legacy\n for dist in packages:\n if options.outdated:\n logger.info(self.output_legacy_latest(dist))\n else:\n logger.info(self.output_legacy(dist))\n\n def 
output_package_listing_columns(self, data, header):\n # insert the header first: we need to know the size of column names\n if len(data) > 0:\n data.insert(0, header)\n\n pkg_strings, sizes = tabulate(data)\n\n # Create and add a separator.\n if len(data) > 0:\n pkg_strings.insert(1, \" \".join(map(lambda x: '-' * x, sizes)))\n\n for val in pkg_strings:\n logger.info(val)\n\n\ndef tabulate(vals):\n # From pfmoore on GitHub:\n # https://github.com/pypa/pip/issues/3651#issuecomment-216932564\n assert len(vals) > 0\n\n sizes = [0] * max(len(x) for x in vals)\n for row in vals:\n sizes = [max(s, len(str(c))) for s, c in zip_longest(sizes, row)]\n\n result = []\n for row in vals:\n display = \" \".join([str(c).ljust(s) if c is not None else ''\n for s, c in zip_longest(sizes, row)])\n result.append(display)\n\n return result, sizes\n\n\ndef format_for_columns(pkgs, options):\n \"\"\"\n Convert the package data into something usable\n by output_package_listing_columns.\n \"\"\"\n running_outdated = options.outdated\n # Adjust the header for the `pip list --outdated` case.\n if running_outdated:\n header = [\"Package\", \"Version\", \"Latest\", \"Type\"]\n else:\n header = [\"Package\", \"Version\"]\n\n data = []\n if any(dist_is_editable(x) for x in pkgs):\n header.append(\"Location\")\n\n for proj in pkgs:\n # if we're working on the 'outdated' list, separate out the\n # latest_version and type\n row = [proj.project_name, proj.version]\n\n if running_outdated:\n row.append(proj.latest_version)\n row.append(proj.latest_filetype)\n\n if dist_is_editable(proj):\n row.append(proj.location)\n\n data.append(row)\n\n return data, header\n\n\ndef format_for_json(packages, options):\n data = []\n for dist in packages:\n info = {\n 'name': dist.project_name,\n 'version': six.text_type(dist.version),\n }\n if options.outdated:\n info['latest_version'] = six.text_type(dist.latest_version)\n info['latest_filetype'] = dist.latest_filetype\n data.append(info)\n return json.dumps(data)\n", "path": "pip/commands/list.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport json\nimport logging\nimport warnings\ntry:\n from itertools import zip_longest\nexcept ImportError:\n from itertools import izip_longest as zip_longest\n\nfrom pip._vendor import six\n\nfrom pip.basecommand import Command\nfrom pip.exceptions import CommandError\nfrom pip.index import PackageFinder\nfrom pip.utils import (\n get_installed_distributions, dist_is_editable)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip.cmdoptions import make_option_group, index_group\n\nlogger = logging.getLogger(__name__)\n\n\nclass ListCommand(Command):\n \"\"\"\n List installed packages, including editables.\n\n Packages are listed in a case-insensitive sorted order.\n \"\"\"\n name = 'list'\n usage = \"\"\"\n %prog [options]\"\"\"\n summary = 'List installed packages.'\n\n def __init__(self, *args, **kw):\n super(ListCommand, self).__init__(*args, **kw)\n\n cmd_opts = self.cmd_opts\n\n cmd_opts.add_option(\n '-o', '--outdated',\n action='store_true',\n default=False,\n help='List outdated packages')\n cmd_opts.add_option(\n '-u', '--uptodate',\n action='store_true',\n default=False,\n help='List uptodate packages')\n cmd_opts.add_option(\n '-e', '--editable',\n action='store_true',\n default=False,\n help='List editable projects.')\n cmd_opts.add_option(\n '-l', '--local',\n action='store_true',\n default=False,\n help=('If in a virtualenv that has global access, do not list '\n 'globally-installed 
packages.'),\n )\n self.cmd_opts.add_option(\n '--user',\n dest='user',\n action='store_true',\n default=False,\n help='Only output packages installed in user-site.')\n\n cmd_opts.add_option(\n '--pre',\n action='store_true',\n default=False,\n help=(\"Include pre-release and development versions. By default, \"\n \"pip only finds stable versions.\"),\n )\n\n cmd_opts.add_option(\n '--format',\n action='store',\n dest='list_format',\n choices=('legacy', 'columns', 'freeze', 'json'),\n help=\"Select the output format among: legacy (default), columns, \"\n \"freeze or json.\",\n )\n\n cmd_opts.add_option(\n '--not-required',\n action='store_true',\n dest='not_required',\n help=\"List packages that are not dependencies of \"\n \"installed packages.\",\n )\n\n index_opts = make_option_group(index_group, self.parser)\n\n self.parser.insert_option_group(0, index_opts)\n self.parser.insert_option_group(0, cmd_opts)\n\n def _build_package_finder(self, options, index_urls, session):\n \"\"\"\n Create a package finder appropriate to this list command.\n \"\"\"\n return PackageFinder(\n find_links=options.find_links,\n index_urls=index_urls,\n allow_all_prereleases=options.pre,\n trusted_hosts=options.trusted_hosts,\n process_dependency_links=options.process_dependency_links,\n session=session,\n )\n\n def run(self, options, args):\n if options.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.list_format is None:\n warnings.warn(\n \"The default format will switch to columns in the future. 
\"\n \"You can use --format=(legacy|columns) (or define a \"\n \"format=(legacy|columns) in your pip.conf under the [list] \"\n \"section) to disable this warning.\",\n RemovedInPip10Warning,\n )\n\n if options.outdated and options.uptodate:\n raise CommandError(\n \"Options --outdated and --uptodate cannot be combined.\")\n\n packages = get_installed_distributions(\n local_only=options.local,\n user_only=options.user,\n editables_only=options.editable,\n )\n\n if options.outdated:\n packages = self.get_outdated(packages, options)\n elif options.uptodate:\n packages = self.get_uptodate(packages, options)\n\n if options.not_required:\n packages = self.get_not_required(packages, options)\n\n self.output_package_listing(packages, options)\n\n def get_outdated(self, packages, options):\n return [\n dist for dist in self.iter_packages_latest_infos(packages, options)\n if dist.latest_version > dist.parsed_version\n ]\n\n def get_uptodate(self, packages, options):\n return [\n dist for dist in self.iter_packages_latest_infos(packages, options)\n if dist.latest_version == dist.parsed_version\n ]\n\n def get_not_required(self, packages, options):\n dep_keys = set()\n for dist in packages:\n dep_keys.update(requirement.key for requirement in dist.requires())\n return set(pkg for pkg in packages if pkg.key not in dep_keys)\n\n def iter_packages_latest_infos(self, packages, options):\n index_urls = [options.index_url] + options.extra_index_urls\n if options.no_index:\n logger.debug('Ignoring indexes: %s', ','.join(index_urls))\n index_urls = []\n\n dependency_links = []\n for dist in packages:\n if dist.has_metadata('dependency_links.txt'):\n dependency_links.extend(\n dist.get_metadata_lines('dependency_links.txt'),\n )\n\n with self._build_session(options) as session:\n finder = self._build_package_finder(options, index_urls, session)\n finder.add_dependency_links(dependency_links)\n\n for dist in packages:\n typ = 'unknown'\n all_candidates = finder.find_all_candidates(dist.key)\n if not options.pre:\n # Remove prereleases\n all_candidates = [candidate for candidate in all_candidates\n if not candidate.version.is_prerelease]\n\n if not all_candidates:\n continue\n best_candidate = max(all_candidates,\n key=finder._candidate_sort_key)\n remote_version = best_candidate.version\n if best_candidate.location.is_wheel:\n typ = 'wheel'\n else:\n typ = 'sdist'\n # This is dirty but makes the rest of the code much cleaner\n dist.latest_version = remote_version\n dist.latest_filetype = typ\n yield dist\n\n def output_legacy(self, dist):\n if dist_is_editable(dist):\n return '%s (%s, %s)' % (\n dist.project_name,\n dist.version,\n dist.location,\n )\n else:\n return '%s (%s)' % (dist.project_name, dist.version)\n\n def output_legacy_latest(self, dist):\n return '%s - Latest: %s [%s]' % (\n self.output_legacy(dist),\n dist.latest_version,\n dist.latest_filetype,\n )\n\n def output_package_listing(self, packages, options):\n packages = sorted(\n packages,\n key=lambda dist: dist.project_name.lower(),\n )\n if options.list_format == 'columns' and packages:\n data, header = format_for_columns(packages, options)\n self.output_package_listing_columns(data, header)\n elif options.list_format == 'freeze':\n for dist in packages:\n logger.info(\"%s==%s\", dist.project_name, dist.version)\n elif options.list_format == 'json':\n logger.info(format_for_json(packages, options))\n else: # legacy\n for dist in packages:\n if options.outdated:\n logger.info(self.output_legacy_latest(dist))\n else:\n 
logger.info(self.output_legacy(dist))\n\n def output_package_listing_columns(self, data, header):\n # insert the header first: we need to know the size of column names\n if len(data) > 0:\n data.insert(0, header)\n\n pkg_strings, sizes = tabulate(data)\n\n # Create and add a separator.\n if len(data) > 0:\n pkg_strings.insert(1, \" \".join(map(lambda x: '-' * x, sizes)))\n\n for val in pkg_strings:\n logger.info(val)\n\n\ndef tabulate(vals):\n # From pfmoore on GitHub:\n # https://github.com/pypa/pip/issues/3651#issuecomment-216932564\n assert len(vals) > 0\n\n sizes = [0] * max(len(x) for x in vals)\n for row in vals:\n sizes = [max(s, len(str(c))) for s, c in zip_longest(sizes, row)]\n\n result = []\n for row in vals:\n display = \" \".join([str(c).ljust(s) if c is not None else ''\n for s, c in zip_longest(sizes, row)])\n result.append(display)\n\n return result, sizes\n\n\ndef format_for_columns(pkgs, options):\n \"\"\"\n Convert the package data into something usable\n by output_package_listing_columns.\n \"\"\"\n running_outdated = options.outdated\n # Adjust the header for the `pip list --outdated` case.\n if running_outdated:\n header = [\"Package\", \"Version\", \"Latest\", \"Type\"]\n else:\n header = [\"Package\", \"Version\"]\n\n data = []\n if any(dist_is_editable(x) for x in pkgs):\n header.append(\"Location\")\n\n for proj in pkgs:\n # if we're working on the 'outdated' list, separate out the\n # latest_version and type\n row = [proj.project_name, proj.version]\n\n if running_outdated:\n row.append(proj.latest_version)\n row.append(proj.latest_filetype)\n\n if dist_is_editable(proj):\n row.append(proj.location)\n\n data.append(row)\n\n return data, header\n\n\ndef format_for_json(packages, options):\n data = []\n for dist in packages:\n info = {\n 'name': dist.project_name,\n 'version': six.text_type(dist.version),\n }\n if options.outdated:\n info['latest_version'] = six.text_type(dist.latest_version)\n info['latest_filetype'] = dist.latest_filetype\n data.append(info)\n return json.dumps(data)\n", "path": "pip/commands/list.py"}]} | 3,605 | 164 |
gh_patches_debug_17254 | rasdani/github-patches | git_diff | uclapi__uclapi-3378 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] cachet URL error
Currently when sending requests to the status page we are getting the error
```Unexpected: Failed to create fixed cachet incident. Reason: ValueError("Cannot determine api version based on endpoint 'https://cachet.apps.uclapi.com/api/v1/'. If the api version is not present in the url, please supply it on client creation.")```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/common/cachet.py`
Content:
```
1 import cachetclient
2 from cachetclient.v1 import enums
3 from django.conf import settings
4
5 """
6 This file handles all things cachet. From creating incidents to deleting
7 them when fixed. The results can be seen on our cachet status dashboard at
8 https://cachet.apps.uclapi.com/ .
9
10 Incidents can be added to components of which we currently have 8 indicating
11 problems with that specific system.
12 List of components and their ID's:
13 Gencache-Staging 1,
14 Occupeye-Staging 3,
15 Occupeye-Mini-Staging 6,
16 OAuth-Staging 7,
17 Gencache-Prod 2,
18 Occupeye-Prod 4,
19 Occupeye-Mini-Prod 5,
20 OAuth-Prod 8
21
22 You can add a component in the admin dashboard and then reference it using
23 the functions in this file without finding out its ID or attached information.
24
25 More documentation on cachet endpoints can be found at
26 https://docs.cachethq.io/reference
27 and general documentation at
28 https://docs.cachethq.io/docs
29 """
30
31
32 class CachetException(Exception):
33 pass
34
35
36 def get_incident_name(base: str) -> str:
37 """
38 Get and append the appropriate suffix for the component using the current
39 domain. For example if we are running on staging.ninja we add the
40 '-staging' suffix.
41
42 :param base: Base incident name
43 :type base: str
44 :return: Base incident name with correct environment suffix such as
45 -prod or -staging
46 :rtype: str
47
48 """
49
50 incident_name = ""
51
52 if settings.UCLAPI_DOMAIN_CURRENT == "staging.ninja":
53 incident_name = base + "-Staging"
54 elif settings.UCLAPI_DOMAIN_CURRENT == "uclapi.com":
55 incident_name = base + "-Prod"
56
57 return incident_name
58
59
60 def create_incident(error_message: str, component_name: str,
61 incident_status: int, comp_status: int):
62 """
63 Create an incident with the error message and status specified for the
64 component with its name and new status specified. The status is specified
65 by the
66 cachet.enums enum. Only creates an incident if the component isn't
67 already down.
68
69 :param error_message: Error message to display on incident
70 :type error_message: str
71 :param component_name: Component name to create incident for
72 :type component_name: str
73 :param incident_status: Status for the incident
74 :type incident_status: int
75 :param comp_status: Status for the component
76 :type comp_status: int
77 """
78
79 cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,
80 api_token=settings.CACHET_TOKEN)
81
82 target_comp = _get_component(cachet_client, component_name)
83
84 if target_comp.status == enums.COMPONENT_STATUS_OPERATIONAL:
85
86 message = (f"{component_name} failed on "
87 f"{settings.UCLAPI_DOMAIN_CURRENT}"
88 f" with error: {repr(error_message)}")
89
90 cachet_client.incidents.create(
91 name=f"{component_name} failed",
92 message=message,
93 status=incident_status,
94 component_id=target_comp.id,
95 component_status=comp_status
96 )
97
98
99 def update_incident(update_message: str, component_name: str,
100 status: int):
101 """
102 Update an incident with the update message and status specified for the
103 component with its name specified . The status is specified by the
104 cachet.enums enum. Only updates if the component is down.
105
106 :param update_message: Update message to display on incident
107 :type update_message: str
108 :param component_name: Component name to create incident for
109 :type component_name: str
110 :param status: Status for the incident
111 :type status: int
112 """
113
114 cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,
115 api_token=settings.CACHET_TOKEN)
116
117 target_comp = _get_component(cachet_client, component_name)
118
119 target_incident = _get_incident(cachet_client, target_comp)
120
121 if target_comp.status != enums.COMPONENT_STATUS_OPERATIONAL:
122 cachet_client.incident_updates.create(
123 incident_id=target_incident.id,
124 status=status,
125 message=update_message
126 )
127
128 if status == enums.INCIDENT_FIXED:
129 target_comp.status = enums.COMPONENT_STATUS_OPERATIONAL
130 target_comp.update()
131
132
133 def _get_component(client: cachetclient.v1.Client, comp_name: str) -> \
134 cachetclient.v1.components.Component:
135 """
136 Given a component name returns the actual component object.
137
138 :param client: cachet client instance
139 :type client: cachetclient.v1.Client
140 :param comp_name: Component name to fetch
141 :type comp_name: str
142 :return: Component object with the given name
143 :rtype: cachetclient.v1.components.Component
144 """
145
146 for i in client.components.list():
147 if i.name == comp_name:
148 return i
149
150 # If we fail to find it raise an error
151
152 raise CachetException(
153 "Failed to get cachet incident: Cachet component not found!"
154 )
155
156
157 def _get_incident(cachet_client: cachetclient.v1.Client, target_comp:
158 cachetclient.v1.components.Component) -> \
159 cachetclient.v1.incidents.Incident:
160 """
161
162 :param cachet_client: cachet client instance
163 :type cachet_client: cachetclient.v1.Client
164 :param target_comp: Component to get incident from
165 :type target_comp: cachetclient.v1.components.Component
166 :return: Incident to update
167 :rtype: cachetclient.v1.incidents.Incident
168 """
169
170 incidents = cachet_client.incidents.list()
171 for i in incidents:
172 if i.component_id == target_comp.id and i.status != \
173 enums.INCIDENT_FIXED:
174 return i
175
176 raise CachetException("Failed to get cachet incident: Cachet Incident not "
177 "found!")
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/common/cachet.py b/backend/uclapi/common/cachet.py
--- a/backend/uclapi/common/cachet.py
+++ b/backend/uclapi/common/cachet.py
@@ -77,7 +77,8 @@
"""
cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,
- api_token=settings.CACHET_TOKEN)
+ api_token=settings.CACHET_TOKEN,
+ version="1")
target_comp = _get_component(cachet_client, component_name)
@@ -112,7 +113,8 @@
"""
cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,
- api_token=settings.CACHET_TOKEN)
+ api_token=settings.CACHET_TOKEN,
+ version="1")
target_comp = _get_component(cachet_client, component_name)
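
The fix above pins the cachet API version at client construction, since newer cachetclient releases refuse to infer it from the endpoint URL (hence the ValueError quoted in the issue). A minimal sketch of the corrected call (settings names are the ones used in the file; the endpoint value comes from the error message):

```python
import cachetclient
from django.conf import settings

# Supplying version="1" explicitly avoids:
#   ValueError: Cannot determine api version based on endpoint '.../api/v1/'
cachet_client = cachetclient.Client(
    endpoint=settings.CACHET_URL,      # e.g. "https://cachet.apps.uclapi.com/api/v1/"
    api_token=settings.CACHET_TOKEN,
    version="1",
)
```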
| {"golden_diff": "diff --git a/backend/uclapi/common/cachet.py b/backend/uclapi/common/cachet.py\n--- a/backend/uclapi/common/cachet.py\n+++ b/backend/uclapi/common/cachet.py\n@@ -77,7 +77,8 @@\n \"\"\"\n \n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n- api_token=settings.CACHET_TOKEN)\n+ api_token=settings.CACHET_TOKEN,\n+ version=\"1\")\n \n target_comp = _get_component(cachet_client, component_name)\n \n@@ -112,7 +113,8 @@\n \"\"\"\n \n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n- api_token=settings.CACHET_TOKEN)\n+ api_token=settings.CACHET_TOKEN,\n+ version=\"1\")\n \n target_comp = _get_component(cachet_client, component_name)\n", "issue": "[BUG] cachet URL error\nCurrently when sending requests to the status page we are getting the error \r\n\r\n```Unexpected: Failed to create fixed cachet incident. Reason: ValueError(\"Cannot determine api version based on endpoint 'https://cachet.apps.uclapi.com/api/v1/'. If the api version is not present in the url, please supply it on client creation.\")```\r\n\n", "before_files": [{"content": "import cachetclient\nfrom cachetclient.v1 import enums\nfrom django.conf import settings\n\n\"\"\"\nThis file handles all things cachet. From creating incidents to deleting\nthem when fixed. The results can be seen on our cachet status dashboard at\nhttps://cachet.apps.uclapi.com/ .\n\nIncidents can be added to components of which we currently have 8 indicating\nproblems with that specific system.\nList of components and their ID's:\nGencache-Staging 1,\nOccupeye-Staging 3,\nOccupeye-Mini-Staging 6,\nOAuth-Staging 7,\nGencache-Prod 2,\nOccupeye-Prod 4,\nOccupeye-Mini-Prod 5,\nOAuth-Prod 8\n\nYou can add a component in the admin dashboard and then reference it using\nthe functions in this file without finding out its ID or attached information.\n\nMore documentation on cachet endpoints can be found at\nhttps://docs.cachethq.io/reference\nand general documentation at\nhttps://docs.cachethq.io/docs\n\"\"\"\n\n\nclass CachetException(Exception):\n pass\n\n\ndef get_incident_name(base: str) -> str:\n \"\"\"\n Get and append the appropriate suffix for the component using the current\n domain. For example if we are running on staging.ninja we add the\n '-staging' suffix.\n\n :param base: Base incident name\n :type base: str\n :return: Base incident name with correct environment suffix such as\n -prod or -staging\n :rtype: str\n\n \"\"\"\n\n incident_name = \"\"\n\n if settings.UCLAPI_DOMAIN_CURRENT == \"staging.ninja\":\n incident_name = base + \"-Staging\"\n elif settings.UCLAPI_DOMAIN_CURRENT == \"uclapi.com\":\n incident_name = base + \"-Prod\"\n\n return incident_name\n\n\ndef create_incident(error_message: str, component_name: str,\n incident_status: int, comp_status: int):\n \"\"\"\n Create an incident with the error message and status specified for the\n component with its name and new status specified. The status is specified\n by the\n cachet.enums enum. 
Only creates an incident if the component isn't\n already down.\n\n :param error_message: Error message to display on incident\n :type error_message: str\n :param component_name: Component name to create incident for\n :type component_name: str\n :param incident_status: Status for the incident\n :type incident_status: int\n :param comp_status: Status for the component\n :type comp_status: int\n \"\"\"\n\n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n api_token=settings.CACHET_TOKEN)\n\n target_comp = _get_component(cachet_client, component_name)\n\n if target_comp.status == enums.COMPONENT_STATUS_OPERATIONAL:\n\n message = (f\"{component_name} failed on \"\n f\"{settings.UCLAPI_DOMAIN_CURRENT}\"\n f\" with error: {repr(error_message)}\")\n\n cachet_client.incidents.create(\n name=f\"{component_name} failed\",\n message=message,\n status=incident_status,\n component_id=target_comp.id,\n component_status=comp_status\n )\n\n\ndef update_incident(update_message: str, component_name: str,\n status: int):\n \"\"\"\n Update an incident with the update message and status specified for the\n component with its name specified . The status is specified by the\n cachet.enums enum. Only updates if the component is down.\n\n :param update_message: Update message to display on incident\n :type update_message: str\n :param component_name: Component name to create incident for\n :type component_name: str\n :param status: Status for the incident\n :type status: int\n \"\"\"\n\n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n api_token=settings.CACHET_TOKEN)\n\n target_comp = _get_component(cachet_client, component_name)\n\n target_incident = _get_incident(cachet_client, target_comp)\n\n if target_comp.status != enums.COMPONENT_STATUS_OPERATIONAL:\n cachet_client.incident_updates.create(\n incident_id=target_incident.id,\n status=status,\n message=update_message\n )\n\n if status == enums.INCIDENT_FIXED:\n target_comp.status = enums.COMPONENT_STATUS_OPERATIONAL\n target_comp.update()\n\n\ndef _get_component(client: cachetclient.v1.Client, comp_name: str) -> \\\n cachetclient.v1.components.Component:\n \"\"\"\n Given a component name returns the actual component object.\n\n :param client: cachet client instance\n :type client: cachetclient.v1.Client\n :param comp_name: Component name to fetch\n :type comp_name: str\n :return: Component object with the given name\n :rtype: cachetclient.v1.components.Component\n \"\"\"\n\n for i in client.components.list():\n if i.name == comp_name:\n return i\n\n # If we fail to find it raise an error\n\n raise CachetException(\n \"Failed to get cachet incident: Cachet component not found!\"\n )\n\n\ndef _get_incident(cachet_client: cachetclient.v1.Client, target_comp:\n cachetclient.v1.components.Component) -> \\\n cachetclient.v1.incidents.Incident:\n \"\"\"\n\n :param cachet_client: cachet client instance\n :type cachet_client: cachetclient.v1.Client\n :param target_comp: Component to get incident from\n :type target_comp: cachetclient.v1.components.Component\n :return: Incident to update\n :rtype: cachetclient.v1.incidents.Incident\n \"\"\"\n\n incidents = cachet_client.incidents.list()\n for i in incidents:\n if i.component_id == target_comp.id and i.status != \\\n enums.INCIDENT_FIXED:\n return i\n\n raise CachetException(\"Failed to get cachet incident: Cachet Incident not \"\n \"found!\")\n", "path": "backend/uclapi/common/cachet.py"}], "after_files": [{"content": "import cachetclient\nfrom cachetclient.v1 import enums\nfrom 
django.conf import settings\n\n\"\"\"\nThis file handles all things cachet. From creating incidents to deleting\nthem when fixed. The results can be seen on our cachet status dashboard at\nhttps://cachet.apps.uclapi.com/ .\n\nIncidents can be added to components of which we currently have 8 indicating\nproblems with that specific system.\nList of components and their ID's:\nGencache-Staging 1,\nOccupeye-Staging 3,\nOccupeye-Mini-Staging 6,\nOAuth-Staging 7,\nGencache-Prod 2,\nOccupeye-Prod 4,\nOccupeye-Mini-Prod 5,\nOAuth-Prod 8\n\nYou can add a component in the admin dashboard and then reference it using\nthe functions in this file without finding out its ID or attached information.\n\nMore documentation on cachet endpoints can be found at\nhttps://docs.cachethq.io/reference\nand general documentation at\nhttps://docs.cachethq.io/docs\n\"\"\"\n\n\nclass CachetException(Exception):\n pass\n\n\ndef get_incident_name(base: str) -> str:\n \"\"\"\n Get and append the appropriate suffix for the component using the current\n domain. For example if we are running on staging.ninja we add the\n '-staging' suffix.\n\n :param base: Base incident name\n :type base: str\n :return: Base incident name with correct environment suffix such as\n -prod or -staging\n :rtype: str\n\n \"\"\"\n\n incident_name = \"\"\n\n if settings.UCLAPI_DOMAIN_CURRENT == \"staging.ninja\":\n incident_name = base + \"-Staging\"\n elif settings.UCLAPI_DOMAIN_CURRENT == \"uclapi.com\":\n incident_name = base + \"-Prod\"\n\n return incident_name\n\n\ndef create_incident(error_message: str, component_name: str,\n incident_status: int, comp_status: int):\n \"\"\"\n Create an incident with the error message and status specified for the\n component with its name and new status specified. The status is specified\n by the\n cachet.enums enum. Only creates an incident if the component isn't\n already down.\n\n :param error_message: Error message to display on incident\n :type error_message: str\n :param component_name: Component name to create incident for\n :type component_name: str\n :param incident_status: Status for the incident\n :type incident_status: int\n :param comp_status: Status for the component\n :type comp_status: int\n \"\"\"\n\n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n api_token=settings.CACHET_TOKEN,\n version=\"1\")\n\n target_comp = _get_component(cachet_client, component_name)\n\n if target_comp.status == enums.COMPONENT_STATUS_OPERATIONAL:\n\n message = (f\"{component_name} failed on \"\n f\"{settings.UCLAPI_DOMAIN_CURRENT}\"\n f\" with error: {repr(error_message)}\")\n\n cachet_client.incidents.create(\n name=f\"{component_name} failed\",\n message=message,\n status=incident_status,\n component_id=target_comp.id,\n component_status=comp_status\n )\n\n\ndef update_incident(update_message: str, component_name: str,\n status: int):\n \"\"\"\n Update an incident with the update message and status specified for the\n component with its name specified . The status is specified by the\n cachet.enums enum. 
Only updates if the component is down.\n\n :param update_message: Update message to display on incident\n :type update_message: str\n :param component_name: Component name to create incident for\n :type component_name: str\n :param status: Status for the incident\n :type status: int\n \"\"\"\n\n cachet_client = cachetclient.Client(endpoint=settings.CACHET_URL,\n api_token=settings.CACHET_TOKEN,\n version=\"1\")\n\n target_comp = _get_component(cachet_client, component_name)\n\n target_incident = _get_incident(cachet_client, target_comp)\n\n if target_comp.status != enums.COMPONENT_STATUS_OPERATIONAL:\n cachet_client.incident_updates.create(\n incident_id=target_incident.id,\n status=status,\n message=update_message\n )\n\n if status == enums.INCIDENT_FIXED:\n target_comp.status = enums.COMPONENT_STATUS_OPERATIONAL\n target_comp.update()\n\n\ndef _get_component(client: cachetclient.v1.Client, comp_name: str) -> \\\n cachetclient.v1.components.Component:\n \"\"\"\n Given a component name returns the actual component object.\n\n :param client: cachet client instance\n :type client: cachetclient.v1.Client\n :param comp_name: Component name to fetch\n :type comp_name: str\n :return: Component object with the given name\n :rtype: cachetclient.v1.components.Component\n \"\"\"\n\n for i in client.components.list():\n if i.name == comp_name:\n return i\n\n # If we fail to find it raise an error\n\n raise CachetException(\n \"Failed to get cachet incident: Cachet component not found!\"\n )\n\n\ndef _get_incident(cachet_client: cachetclient.v1.Client, target_comp:\n cachetclient.v1.components.Component) -> \\\n cachetclient.v1.incidents.Incident:\n \"\"\"\n\n :param cachet_client: cachet client instance\n :type cachet_client: cachetclient.v1.Client\n :param target_comp: Component to get incident from\n :type target_comp: cachetclient.v1.components.Component\n :return: Incident to update\n :rtype: cachetclient.v1.incidents.Incident\n \"\"\"\n\n incidents = cachet_client.incidents.list()\n for i in incidents:\n if i.component_id == target_comp.id and i.status != \\\n enums.INCIDENT_FIXED:\n return i\n\n raise CachetException(\"Failed to get cachet incident: Cachet Incident not \"\n \"found!\")\n", "path": "backend/uclapi/common/cachet.py"}]} | 2,094 | 197 |
gh_patches_debug_25027 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider planned_parenthood is broken
During the global build at 2021-05-26-14-42-23, spider **planned_parenthood** failed with **598 features** and **598 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/planned_parenthood.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/planned_parenthood.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4
5 from locations.items import GeojsonPointItem
6
7
8 class PlannedParenthoodSpider(scrapy.Spider):
9 name = "planned_parenthood"
10 item_attributes = { 'brand': "Planned Parenthood" }
11 allowed_domains = ["www.plannedparenthood.org"]
12 start_urls = (
13 'https://www.plannedparenthood.org/health-center',
14 )
15
16 def parse(self, response):
17 state_urls = response.xpath('//ul[@class="quicklist-list"]/li/a/@href').extract()
18 for path in state_urls:
19 yield scrapy.Request(
20 response.urljoin(path),
21 callback=self.parse_state,
22 )
23
24 def parse_state(self, response):
25 venue_urls = response.xpath('//ul[@class="quicklist-list"]/li/p/a/@href').extract()
26 for path in venue_urls:
27 yield scrapy.Request(
28 response.urljoin(path),
29 callback=self.parse_venue,
30 )
31
32 def parse_venue(self, response):
33 properties = {
34 'addr_full': response.xpath('//*[@itemprop="streetAddress"]/text()').extract_first(),
35 'city': response.xpath('//*[@itemprop="addressLocality"]/text()').extract_first(),
36 'state': response.xpath('//*[@itemprop="addressRegion"]/text()').extract_first(),
37 'postcode': response.xpath('//*[@itemprop="postalCode"]/text()').extract_first(),
38 'phone': response.xpath('//a[@itemprop="telephone"][@data-link]/text()').extract_first(),
39 'ref': response.url,
40 'website': response.url,
41 }
42
43 map_image_url = response.xpath('//img[@class="address-map"]/@src').extract_first()
44 match = re.search(r"center=(.*?),(.*?)&zoom", map_image_url)
45 properties['lat'] = float(match.group(1)),
46 properties['lon'] = float(match.group(2)),
47
48 yield GeojsonPointItem(**properties)
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/planned_parenthood.py b/locations/spiders/planned_parenthood.py
--- a/locations/spiders/planned_parenthood.py
+++ b/locations/spiders/planned_parenthood.py
@@ -27,9 +27,14 @@
yield scrapy.Request(
response.urljoin(path),
callback=self.parse_venue,
+ meta={'dont_redirect':True}
)
def parse_venue(self, response):
+ if response is None:
+ # Ignoring redirects
+ return
+
properties = {
'addr_full': response.xpath('//*[@itemprop="streetAddress"]/text()').extract_first(),
'city': response.xpath('//*[@itemprop="addressLocality"]/text()').extract_first(),
@@ -42,7 +47,7 @@
map_image_url = response.xpath('//img[@class="address-map"]/@src').extract_first()
match = re.search(r"center=(.*?),(.*?)&zoom", map_image_url)
- properties['lat'] = float(match.group(1)),
- properties['lon'] = float(match.group(2)),
+ properties['lat'] = float(match.group(1))
+ properties['lon'] = float(match.group(2))
yield GeojsonPointItem(**properties)
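
The coordinate lines patched above are the root cause of the 598 errors: a trailing comma after `float(...)` makes the assignment a one-element tuple rather than a float, so every feature carried tuple-typed lat/lon values. A tiny self-contained illustration of the pitfall (values are made up):

```python
# Trailing comma turns the right-hand side into a 1-tuple.
lat = float("51.52"),   # -> (51.52,)
lon = float("-0.13")    # -> -0.13

print(type(lat))  # <class 'tuple'>
print(type(lon))  # <class 'float'>
```

The diff also stops venue requests from following redirects and returns early from `parse_venue` when no usable response arrives.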
| {"golden_diff": "diff --git a/locations/spiders/planned_parenthood.py b/locations/spiders/planned_parenthood.py\n--- a/locations/spiders/planned_parenthood.py\n+++ b/locations/spiders/planned_parenthood.py\n@@ -27,9 +27,14 @@\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n+ meta={'dont_redirect':True}\n )\n \n def parse_venue(self, response):\n+ if response is None:\n+ # Ignoring redirects\n+ return\n+\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n@@ -42,7 +47,7 @@\n \n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n- properties['lat'] = float(match.group(1)),\n- properties['lon'] = float(match.group(2)),\n+ properties['lat'] = float(match.group(1))\n+ properties['lon'] = float(match.group(2))\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider planned_parenthood is broken\nDuring the global build at 2021-05-26-14-42-23, spider **planned_parenthood** failed with **598 features** and **598 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/planned_parenthood.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PlannedParenthoodSpider(scrapy.Spider):\n name = \"planned_parenthood\"\n item_attributes = { 'brand': \"Planned Parenthood\" }\n allowed_domains = [\"www.plannedparenthood.org\"]\n start_urls = (\n 'https://www.plannedparenthood.org/health-center',\n )\n\n def parse(self, response):\n state_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/a/@href').extract()\n for path in state_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_state,\n )\n\n def parse_state(self, response):\n venue_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/p/a/@href').extract()\n for path in venue_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n )\n\n def parse_venue(self, response):\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n 'state': response.xpath('//*[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'postcode': response.xpath('//*[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'phone': response.xpath('//a[@itemprop=\"telephone\"][@data-link]/text()').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n }\n\n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n properties['lat'] = float(match.group(1)),\n properties['lon'] = float(match.group(2)),\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/planned_parenthood.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PlannedParenthoodSpider(scrapy.Spider):\n name = \"planned_parenthood\"\n item_attributes = { 
'brand': \"Planned Parenthood\" }\n allowed_domains = [\"www.plannedparenthood.org\"]\n start_urls = (\n 'https://www.plannedparenthood.org/health-center',\n )\n\n def parse(self, response):\n state_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/a/@href').extract()\n for path in state_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_state,\n )\n\n def parse_state(self, response):\n venue_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/p/a/@href').extract()\n for path in venue_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n meta={'dont_redirect':True}\n )\n\n def parse_venue(self, response):\n if response is None:\n # Ignoring redirects\n return\n\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n 'state': response.xpath('//*[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'postcode': response.xpath('//*[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'phone': response.xpath('//a[@itemprop=\"telephone\"][@data-link]/text()').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n }\n\n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n properties['lat'] = float(match.group(1))\n properties['lon'] = float(match.group(2))\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/planned_parenthood.py"}]} | 966 | 283 |
gh_patches_debug_2634 | rasdani/github-patches | git_diff | awslabs__gluonts-1537 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Theta model does not preserve item IDs
## Description
When using the `RForecastPredictor` with `method_name = "thetaf"`, the item IDs returned by the predictor's forecasts do not align with the actual item IDs. Instead, it returns `None` for the item IDs.
## To Reproduce
```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.model.r_forecast import RForecastPredictor
from gluonts.evaluation.backtest import make_evaluation_predictions
dataset = get_dataset("m3_yearly")
predictor = RForecastPredictor(
freq=dataset.metadata.freq,
prediction_length=dataset.metadata.prediction_length,
method_name="thetaf",
)
forecast_pred, forecast_true = make_evaluation_predictions(dataset.test, predictor)
forecast_pred, _ = make_evaluation_predictions(dataset.test, predictor)
for pred in forecast_pred:
print(pred.item_id)
```
You'll see that only `None` is printed.
## Environment
- Operating system: Debian Buster
- Python version: `3.8.9`
- GluonTS version: Master, Post 0.7.0 (commit 645d551a2190b9b749528917cdf1c5e897c861d2)
- MXNet version: `1.8.0.post0`
Full list of dependencies:
```
PyYAML = "^5.4.1"
click = "^7.1.2"
fastparquet = "^0.6.1"
fbprophet = "^0.7.1"
gluonts = {git = "https://github.com/awslabs/gluon-ts.git", rev = "f6948bacb7a038df3374e768ad4939455c74b49d"}
holidays = "^0.11.1"
mxnet = "^1.8.0"
numpy = "^1.20.3"
pandas = "^1.2.4"
pyarrow = "^4.0.0"
pydantic = "^1.8.2"
pystan = "^2.0.0"
python = ">=3.8,<3.10"
rpy2 = ">=2.9.*,<3.*"
sagemaker = "^2.40.0"
sagemaker-training = "^3.9.2"
scikit-learn = "^0.24.2"
scipy = "^1.6.3"
toolz = "^0.11.1"
tqdm = "^4.60.0"
ujson = "^4.0.2"
xgboost = "^1.4.1"
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gluonts/model/r_forecast/_predictor.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 import os
15 from pathlib import Path
16 from typing import Dict, Iterator, Optional
17
18 import numpy as np
19
20 from gluonts.core.component import validated
21 from gluonts.dataset.common import Dataset
22 from gluonts.model.forecast import SampleForecast
23 from gluonts.model.predictor import RepresentablePredictor
24 from gluonts.support.pandas import forecast_start
25 from gluonts.time_feature import get_seasonality
26
27 USAGE_MESSAGE = """
28 The RForecastPredictor is a thin wrapper for calling the R forecast package.
29 In order to use it you need to install R and run
30
31 pip install 'rpy2>=2.9.*,<3.*'
32
33 R -e 'install.packages(c("forecast", "nnfor"), repos="https://cloud.r-project.org")'
34 """
35
36
37 class RForecastPredictor(RepresentablePredictor):
38 """
39 Wrapper for calling the `R forecast package
40 <http://pkg.robjhyndman.com/forecast/>`_.
41
42 The `RForecastPredictor` is a thin wrapper for calling the R forecast
43 package. In order to use it you need to install R and run::
44
45 pip install 'rpy2>=2.9.*,<3.*'
46 R -e 'install.packages(c("forecast", "nnfor"), repos="https://cloud.r-project.org")'
47
48 Parameters
49 ----------
50 freq
51 The granularity of the time series (e.g. '1H')
52 prediction_length
53 Number of time points to be predicted.
54 method
55 The method from rforecast to be used one of
56 "ets", "arima", "tbats", "croston", "mlp", "thetaf".
57 period
58 The period to be used (this is called `frequency` in the R forecast
59 package), result to a tentative reasonable default if not specified
60 (for instance 24 for hourly freq '1H')
61 trunc_length
62 Maximum history length to feed to the model (some models become slow
63 with very long series).
64 params
65 Parameters to be used when calling the forecast method default.
66 Note that currently only `output_type = 'samples'` is supported.
67 """
68
69 @validated()
70 def __init__(
71 self,
72 freq: str,
73 prediction_length: int,
74 method_name: str = "ets",
75 period: int = None,
76 trunc_length: Optional[int] = None,
77 params: Optional[Dict] = None,
78 ) -> None:
79 super().__init__(freq=freq, prediction_length=prediction_length)
80
81 try:
82 import rpy2.robjects.packages as rpackages
83 from rpy2 import rinterface, robjects
84 from rpy2.rinterface import RRuntimeError
85 except ImportError as e:
86 raise ImportError(str(e) + USAGE_MESSAGE) from e
87
88 self._robjects = robjects
89 self._rinterface = rinterface
90 self._rinterface.initr()
91 self._rpackages = rpackages
92
93 this_dir = os.path.dirname(os.path.realpath(__file__))
94 this_dir = this_dir.replace("\\", "/") # for windows
95 r_files = [
96 n[:-2] for n in os.listdir(f"{this_dir}/R/") if n[-2:] == ".R"
97 ]
98
99 for n in r_files:
100 try:
101 path = Path(this_dir, "R", f"{n}.R")
102 robjects.r(f'source("{path}")'.replace("\\", "\\\\"))
103 except RRuntimeError as er:
104 raise RRuntimeError(str(er) + USAGE_MESSAGE) from er
105
106 supported_methods = [
107 "ets",
108 "arima",
109 "tbats",
110 "croston",
111 "mlp",
112 "thetaf",
113 ]
114 assert (
115 method_name in supported_methods
116 ), f"method {method_name} is not supported please use one of {supported_methods}"
117
118 self.method_name = method_name
119
120 self._stats_pkg = rpackages.importr("stats")
121 self._r_method = robjects.r[method_name]
122
123 self.prediction_length = prediction_length
124 self.freq = freq
125 self.period = period if period is not None else get_seasonality(freq)
126 self.trunc_length = trunc_length
127
128 self.params = {
129 "prediction_length": self.prediction_length,
130 "output_types": ["samples"],
131 "frequency": self.period,
132 }
133 if params is not None:
134 self.params.update(params)
135
136 def _unlist(self, l):
137 if type(l).__name__.endswith("Vector"):
138 return [self._unlist(x) for x in l]
139 else:
140 return l
141
142 def _run_r_forecast(self, d, params, save_info):
143 buf = []
144
145 def save_to_buf(x):
146 buf.append(x)
147
148 def dont_save(x):
149 pass
150
151 f = save_to_buf if save_info else dont_save
152
153 # save output from the R console in buf
154 self._rinterface.set_writeconsole_regular(f)
155 self._rinterface.set_writeconsole_warnerror(f)
156
157 make_ts = self._stats_pkg.ts
158 r_params = self._robjects.vectors.ListVector(params)
159 vec = self._robjects.FloatVector(d["target"])
160 ts = make_ts(vec, frequency=self.period)
161 forecast = self._r_method(ts, r_params)
162 forecast_dict = dict(
163 zip(forecast.names, map(self._unlist, list(forecast)))
164 )
165 # FOR NOW ONLY SAMPLES...
166 # if "quantiles" in forecast_dict:
167 # forecast_dict["quantiles"] = dict(zip(params["quantiles"], forecast_dict["quantiles"]))
168
169 self._rinterface.set_writeconsole_regular(
170 self._rinterface.consolePrint
171 )
172 self._rinterface.set_writeconsole_warnerror(
173 self._rinterface.consolePrint
174 )
175 return forecast_dict, buf
176
177 def predict(
178 self,
179 dataset: Dataset,
180 num_samples: int = 100,
181 save_info: bool = False,
182 **kwargs,
183 ) -> Iterator[SampleForecast]:
184 for entry in dataset:
185 if isinstance(entry, dict):
186 data = entry
187 else:
188 data = entry.data
189 if self.trunc_length:
190 data = data[-self.trunc_length :]
191
192 params = self.params.copy()
193 params["num_samples"] = num_samples
194
195 forecast_dict, console_output = self._run_r_forecast(
196 data, params, save_info=save_info
197 )
198
199 samples = np.array(forecast_dict["samples"])
200 expected_shape = (params["num_samples"], self.prediction_length)
201 assert (
202 samples.shape == expected_shape
203 ), f"Expected shape {expected_shape} but found {samples.shape}"
204 info = (
205 {"console_output": "\n".join(console_output)}
206 if save_info
207 else None
208 )
209 yield SampleForecast(
210 samples, forecast_start(data), self.freq, info=info
211 )
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/gluonts/model/r_forecast/_predictor.py b/src/gluonts/model/r_forecast/_predictor.py
--- a/src/gluonts/model/r_forecast/_predictor.py
+++ b/src/gluonts/model/r_forecast/_predictor.py
@@ -207,5 +207,9 @@
else None
)
yield SampleForecast(
- samples, forecast_start(data), self.freq, info=info
+ samples,
+ forecast_start(data),
+ self.freq,
+ info=info,
+ item_id=entry.get("item_id", None),
)
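
The change above forwards each dataset entry's `item_id` into the `SampleForecast` it yields. Rerunning the reproduction script from the issue against the patched predictor then prints real identifiers instead of `None`; a sketch (same setup as the issue's reproduction):

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.evaluation.backtest import make_evaluation_predictions
from gluonts.model.r_forecast import RForecastPredictor

dataset = get_dataset("m3_yearly")
predictor = RForecastPredictor(
    freq=dataset.metadata.freq,
    prediction_length=dataset.metadata.prediction_length,
    method_name="thetaf",
)

forecasts, _ = make_evaluation_predictions(dataset.test, predictor)
for forecast in forecasts:
    # With the patch applied this mirrors each test entry's "item_id" instead of None.
    print(forecast.item_id)
```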
| {"golden_diff": "diff --git a/src/gluonts/model/r_forecast/_predictor.py b/src/gluonts/model/r_forecast/_predictor.py\n--- a/src/gluonts/model/r_forecast/_predictor.py\n+++ b/src/gluonts/model/r_forecast/_predictor.py\n@@ -207,5 +207,9 @@\n else None\n )\n yield SampleForecast(\n- samples, forecast_start(data), self.freq, info=info\n+ samples,\n+ forecast_start(data),\n+ self.freq,\n+ info=info,\n+ item_id=entry.get(\"item_id\", None),\n )\n", "issue": "Theta model does not preserve item IDs\n## Description\r\n\r\nWhen using the `RForecastPredictor` with `method_name = \"thetaf\"`, the item IDs returned by the predictor's forecasts do not align with the actual item IDs. Instead, it returns `None` for the item IDs.\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom gluonts.dataset.repository.datasets import get_dataset\r\nfrom gluonts.model.r_forecast import RForecastPredictor\r\nfrom gluonts.evaluation.backtest import make_evaluation_predictions\r\n\r\ndataset = get_dataset(\"m3_yearly\")\r\npredictor = RForecastPredictor(\r\n freq=dataset.metadata.freq,\r\n prediction_length=dataset.metadata.prediction_length,\r\n method_name=\"thetaf\",\r\n)\r\nforecast_pred, forecast_true = make_evaluation_predictions(dataset.test, predictor)\r\n\r\nforecast_pred, _ = make_evaluation_predictions(dataset.test, predictor)\r\n\r\nfor pred in forecast_pred:\r\n print(pred.item_id)\r\n```\r\n\r\nYou'll see that only `None` is printed.\r\n\r\n## Environment\r\n\r\n- Operating system: Debian Buster\r\n- Python version: `3.8.9`\r\n- GluonTS version: Master, Post 0.7.0 (commit 645d551a2190b9b749528917cdf1c5e897c861d2)\r\n- MXNet version: `1.8.0.post0`\r\n\r\nFull list of dependencies:\r\n\r\n```\r\nPyYAML = \"^5.4.1\"\r\nclick = \"^7.1.2\"\r\nfastparquet = \"^0.6.1\"\r\nfbprophet = \"^0.7.1\"\r\ngluonts = {git = \"https://github.com/awslabs/gluon-ts.git\", rev = \"f6948bacb7a038df3374e768ad4939455c74b49d\"}\r\nholidays = \"^0.11.1\"\r\nmxnet = \"^1.8.0\"\r\nnumpy = \"^1.20.3\"\r\npandas = \"^1.2.4\"\r\npyarrow = \"^4.0.0\"\r\npydantic = \"^1.8.2\"\r\npystan = \"^2.0.0\"\r\npython = \">=3.8,<3.10\"\r\nrpy2 = \">=2.9.*,<3.*\"\r\nsagemaker = \"^2.40.0\"\r\nsagemaker-training = \"^3.9.2\"\r\nscikit-learn = \"^0.24.2\"\r\nscipy = \"^1.6.3\"\r\ntoolz = \"^0.11.1\"\r\ntqdm = \"^4.60.0\"\r\nujson = \"^4.0.2\"\r\nxgboost = \"^1.4.1\"\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nimport os\nfrom pathlib import Path\nfrom typing import Dict, Iterator, Optional\n\nimport numpy as np\n\nfrom gluonts.core.component import validated\nfrom gluonts.dataset.common import Dataset\nfrom gluonts.model.forecast import SampleForecast\nfrom gluonts.model.predictor import RepresentablePredictor\nfrom gluonts.support.pandas import forecast_start\nfrom gluonts.time_feature import get_seasonality\n\nUSAGE_MESSAGE = \"\"\"\nThe RForecastPredictor is a thin wrapper for calling the R forecast package.\nIn order to use it you need to install R and run\n\npip install 'rpy2>=2.9.*,<3.*'\n\nR -e 'install.packages(c(\"forecast\", \"nnfor\"), repos=\"https://cloud.r-project.org\")'\n\"\"\"\n\n\nclass RForecastPredictor(RepresentablePredictor):\n \"\"\"\n Wrapper for calling the `R forecast package\n <http://pkg.robjhyndman.com/forecast/>`_.\n\n The `RForecastPredictor` is a thin wrapper for calling the R forecast\n package. In order to use it you need to install R and run::\n\n pip install 'rpy2>=2.9.*,<3.*'\n R -e 'install.packages(c(\"forecast\", \"nnfor\"), repos=\"https://cloud.r-project.org\")'\n\n Parameters\n ----------\n freq\n The granularity of the time series (e.g. '1H')\n prediction_length\n Number of time points to be predicted.\n method\n The method from rforecast to be used one of\n \"ets\", \"arima\", \"tbats\", \"croston\", \"mlp\", \"thetaf\".\n period\n The period to be used (this is called `frequency` in the R forecast\n package), result to a tentative reasonable default if not specified\n (for instance 24 for hourly freq '1H')\n trunc_length\n Maximum history length to feed to the model (some models become slow\n with very long series).\n params\n Parameters to be used when calling the forecast method default.\n Note that currently only `output_type = 'samples'` is supported.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n freq: str,\n prediction_length: int,\n method_name: str = \"ets\",\n period: int = None,\n trunc_length: Optional[int] = None,\n params: Optional[Dict] = None,\n ) -> None:\n super().__init__(freq=freq, prediction_length=prediction_length)\n\n try:\n import rpy2.robjects.packages as rpackages\n from rpy2 import rinterface, robjects\n from rpy2.rinterface import RRuntimeError\n except ImportError as e:\n raise ImportError(str(e) + USAGE_MESSAGE) from e\n\n self._robjects = robjects\n self._rinterface = rinterface\n self._rinterface.initr()\n self._rpackages = rpackages\n\n this_dir = os.path.dirname(os.path.realpath(__file__))\n this_dir = this_dir.replace(\"\\\\\", \"/\") # for windows\n r_files = [\n n[:-2] for n in os.listdir(f\"{this_dir}/R/\") if n[-2:] == \".R\"\n ]\n\n for n in r_files:\n try:\n path = Path(this_dir, \"R\", f\"{n}.R\")\n robjects.r(f'source(\"{path}\")'.replace(\"\\\\\", \"\\\\\\\\\"))\n except RRuntimeError as er:\n raise RRuntimeError(str(er) + USAGE_MESSAGE) from er\n\n supported_methods = [\n \"ets\",\n \"arima\",\n \"tbats\",\n \"croston\",\n \"mlp\",\n \"thetaf\",\n ]\n assert (\n method_name in supported_methods\n ), f\"method {method_name} is not supported please use one of {supported_methods}\"\n\n self.method_name = method_name\n\n self._stats_pkg = rpackages.importr(\"stats\")\n self._r_method = robjects.r[method_name]\n\n self.prediction_length = prediction_length\n self.freq = freq\n self.period = period if period is not None else get_seasonality(freq)\n self.trunc_length = trunc_length\n\n self.params = {\n 
\"prediction_length\": self.prediction_length,\n \"output_types\": [\"samples\"],\n \"frequency\": self.period,\n }\n if params is not None:\n self.params.update(params)\n\n def _unlist(self, l):\n if type(l).__name__.endswith(\"Vector\"):\n return [self._unlist(x) for x in l]\n else:\n return l\n\n def _run_r_forecast(self, d, params, save_info):\n buf = []\n\n def save_to_buf(x):\n buf.append(x)\n\n def dont_save(x):\n pass\n\n f = save_to_buf if save_info else dont_save\n\n # save output from the R console in buf\n self._rinterface.set_writeconsole_regular(f)\n self._rinterface.set_writeconsole_warnerror(f)\n\n make_ts = self._stats_pkg.ts\n r_params = self._robjects.vectors.ListVector(params)\n vec = self._robjects.FloatVector(d[\"target\"])\n ts = make_ts(vec, frequency=self.period)\n forecast = self._r_method(ts, r_params)\n forecast_dict = dict(\n zip(forecast.names, map(self._unlist, list(forecast)))\n )\n # FOR NOW ONLY SAMPLES...\n # if \"quantiles\" in forecast_dict:\n # forecast_dict[\"quantiles\"] = dict(zip(params[\"quantiles\"], forecast_dict[\"quantiles\"]))\n\n self._rinterface.set_writeconsole_regular(\n self._rinterface.consolePrint\n )\n self._rinterface.set_writeconsole_warnerror(\n self._rinterface.consolePrint\n )\n return forecast_dict, buf\n\n def predict(\n self,\n dataset: Dataset,\n num_samples: int = 100,\n save_info: bool = False,\n **kwargs,\n ) -> Iterator[SampleForecast]:\n for entry in dataset:\n if isinstance(entry, dict):\n data = entry\n else:\n data = entry.data\n if self.trunc_length:\n data = data[-self.trunc_length :]\n\n params = self.params.copy()\n params[\"num_samples\"] = num_samples\n\n forecast_dict, console_output = self._run_r_forecast(\n data, params, save_info=save_info\n )\n\n samples = np.array(forecast_dict[\"samples\"])\n expected_shape = (params[\"num_samples\"], self.prediction_length)\n assert (\n samples.shape == expected_shape\n ), f\"Expected shape {expected_shape} but found {samples.shape}\"\n info = (\n {\"console_output\": \"\\n\".join(console_output)}\n if save_info\n else None\n )\n yield SampleForecast(\n samples, forecast_start(data), self.freq, info=info\n )\n", "path": "src/gluonts/model/r_forecast/_predictor.py"}], "after_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nimport os\nfrom pathlib import Path\nfrom typing import Dict, Iterator, Optional\n\nimport numpy as np\n\nfrom gluonts.core.component import validated\nfrom gluonts.dataset.common import Dataset\nfrom gluonts.model.forecast import SampleForecast\nfrom gluonts.model.predictor import RepresentablePredictor\nfrom gluonts.support.pandas import forecast_start\nfrom gluonts.time_feature import get_seasonality\n\nUSAGE_MESSAGE = \"\"\"\nThe RForecastPredictor is a thin wrapper for calling the R forecast package.\nIn order to use it you need to install R and run\n\npip install 'rpy2>=2.9.*,<3.*'\n\nR -e 'install.packages(c(\"forecast\", \"nnfor\"), repos=\"https://cloud.r-project.org\")'\n\"\"\"\n\n\nclass RForecastPredictor(RepresentablePredictor):\n \"\"\"\n Wrapper for calling the `R forecast package\n <http://pkg.robjhyndman.com/forecast/>`_.\n\n The `RForecastPredictor` is a thin wrapper for calling the R forecast\n package. In order to use it you need to install R and run::\n\n pip install 'rpy2>=2.9.*,<3.*'\n R -e 'install.packages(c(\"forecast\", \"nnfor\"), repos=\"https://cloud.r-project.org\")'\n\n Parameters\n ----------\n freq\n The granularity of the time series (e.g. '1H')\n prediction_length\n Number of time points to be predicted.\n method\n The method from rforecast to be used one of\n \"ets\", \"arima\", \"tbats\", \"croston\", \"mlp\", \"thetaf\".\n period\n The period to be used (this is called `frequency` in the R forecast\n package), result to a tentative reasonable default if not specified\n (for instance 24 for hourly freq '1H')\n trunc_length\n Maximum history length to feed to the model (some models become slow\n with very long series).\n params\n Parameters to be used when calling the forecast method default.\n Note that currently only `output_type = 'samples'` is supported.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n freq: str,\n prediction_length: int,\n method_name: str = \"ets\",\n period: int = None,\n trunc_length: Optional[int] = None,\n params: Optional[Dict] = None,\n ) -> None:\n super().__init__(freq=freq, prediction_length=prediction_length)\n\n try:\n import rpy2.robjects.packages as rpackages\n from rpy2 import rinterface, robjects\n from rpy2.rinterface import RRuntimeError\n except ImportError as e:\n raise ImportError(str(e) + USAGE_MESSAGE) from e\n\n self._robjects = robjects\n self._rinterface = rinterface\n self._rinterface.initr()\n self._rpackages = rpackages\n\n this_dir = os.path.dirname(os.path.realpath(__file__))\n this_dir = this_dir.replace(\"\\\\\", \"/\") # for windows\n r_files = [\n n[:-2] for n in os.listdir(f\"{this_dir}/R/\") if n[-2:] == \".R\"\n ]\n\n for n in r_files:\n try:\n path = Path(this_dir, \"R\", f\"{n}.R\")\n robjects.r(f'source(\"{path}\")'.replace(\"\\\\\", \"\\\\\\\\\"))\n except RRuntimeError as er:\n raise RRuntimeError(str(er) + USAGE_MESSAGE) from er\n\n supported_methods = [\n \"ets\",\n \"arima\",\n \"tbats\",\n \"croston\",\n \"mlp\",\n \"thetaf\",\n ]\n assert (\n method_name in supported_methods\n ), f\"method {method_name} is not supported please use one of {supported_methods}\"\n\n self.method_name = method_name\n\n self._stats_pkg = rpackages.importr(\"stats\")\n self._r_method = robjects.r[method_name]\n\n self.prediction_length = prediction_length\n self.freq = freq\n self.period = period if period is not None else get_seasonality(freq)\n self.trunc_length = trunc_length\n\n self.params = {\n 
\"prediction_length\": self.prediction_length,\n \"output_types\": [\"samples\"],\n \"frequency\": self.period,\n }\n if params is not None:\n self.params.update(params)\n\n def _unlist(self, l):\n if type(l).__name__.endswith(\"Vector\"):\n return [self._unlist(x) for x in l]\n else:\n return l\n\n def _run_r_forecast(self, d, params, save_info):\n buf = []\n\n def save_to_buf(x):\n buf.append(x)\n\n def dont_save(x):\n pass\n\n f = save_to_buf if save_info else dont_save\n\n # save output from the R console in buf\n self._rinterface.set_writeconsole_regular(f)\n self._rinterface.set_writeconsole_warnerror(f)\n\n make_ts = self._stats_pkg.ts\n r_params = self._robjects.vectors.ListVector(params)\n vec = self._robjects.FloatVector(d[\"target\"])\n ts = make_ts(vec, frequency=self.period)\n forecast = self._r_method(ts, r_params)\n forecast_dict = dict(\n zip(forecast.names, map(self._unlist, list(forecast)))\n )\n # FOR NOW ONLY SAMPLES...\n # if \"quantiles\" in forecast_dict:\n # forecast_dict[\"quantiles\"] = dict(zip(params[\"quantiles\"], forecast_dict[\"quantiles\"]))\n\n self._rinterface.set_writeconsole_regular(\n self._rinterface.consolePrint\n )\n self._rinterface.set_writeconsole_warnerror(\n self._rinterface.consolePrint\n )\n return forecast_dict, buf\n\n def predict(\n self,\n dataset: Dataset,\n num_samples: int = 100,\n save_info: bool = False,\n **kwargs,\n ) -> Iterator[SampleForecast]:\n for entry in dataset:\n if isinstance(entry, dict):\n data = entry\n else:\n data = entry.data\n if self.trunc_length:\n data = data[-self.trunc_length :]\n\n params = self.params.copy()\n params[\"num_samples\"] = num_samples\n\n forecast_dict, console_output = self._run_r_forecast(\n data, params, save_info=save_info\n )\n\n samples = np.array(forecast_dict[\"samples\"])\n expected_shape = (params[\"num_samples\"], self.prediction_length)\n assert (\n samples.shape == expected_shape\n ), f\"Expected shape {expected_shape} but found {samples.shape}\"\n info = (\n {\"console_output\": \"\\n\".join(console_output)}\n if save_info\n else None\n )\n yield SampleForecast(\n samples,\n forecast_start(data),\n self.freq,\n info=info,\n item_id=entry.get(\"item_id\", None),\n )\n", "path": "src/gluonts/model/r_forecast/_predictor.py"}]} | 3,068 | 140 |
gh_patches_debug_23474 | rasdani/github-patches | git_diff | aws-powertools__powertools-lambda-python-186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Case insensitive lookup of header values in http proxy events
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Http headers are meant to be case insensitive, but lambda events use case sensitive dicts for the header keys.
For example, to get the `Authorization` header you need to do a case-insensitive lookup in case the client sent it as `authorization`. So you end up doing something like this.
```python3
authorization = None
for key, value in self.headers.items():
    if key.lower() == 'authorization':
authorization = value
```
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
> Linked pull request: #185
Update `BaseProxyEvent.get_header_value` method to support case insensitive lookups by default
```python3
authorization = event.get_header_value("authorization")
```
```python3
def get_header_value(
self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False
) -> Optional[str]:
"""Get header value by name
Parameters
----------
name: str
Header name
default_value: str, optional
Default value if no value was found by name
case_sensitive: bool
Whether to use a case sensitive look up
Returns
-------
str, optional
Header value
"""
if case_sensitive:
return self.headers.get(name, default_value)
return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)
```
**Questions**
* Should we default to case insensitive look ups? This is what most people would want?
* Just return the first match? Or support returning all matches?
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
Use the requests CaseInsensitiveDict or port it :
* https://github.com/psf/requests/blob/333fa8d3f201f68b82d9e8f26be4987e869fd3bf/requests/structures.py#L15
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
From the Http 1.1 spec, https headers are not supposed to be case sensitive: https://www.w3.org/Protocols/rfc2616/rfc2616.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aws_lambda_powertools/utilities/data_classes/common.py`
Content:
```
1 from typing import Any, Dict, Optional
2
3
4 class DictWrapper:
5 """Provides a single read only access to a wrapper dict"""
6
7 def __init__(self, data: Dict[str, Any]):
8 self._data = data
9
10 def __getitem__(self, key: str) -> Any:
11 return self._data[key]
12
13 def get(self, key: str) -> Optional[Any]:
14 return self._data.get(key)
15
16
17 class BaseProxyEvent(DictWrapper):
18 @property
19 def headers(self) -> Dict[str, str]:
20 return self["headers"]
21
22 @property
23 def query_string_parameters(self) -> Optional[Dict[str, str]]:
24 return self.get("queryStringParameters")
25
26 @property
27 def is_base64_encoded(self) -> bool:
28 return self.get("isBase64Encoded")
29
30 @property
31 def body(self) -> Optional[str]:
32 return self.get("body")
33
34 def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
35 """Get query string value by name
36
37 Parameters
38 ----------
39 name: str
40 Query string parameter name
41 default_value: str, optional
42 Default value if no value was found by name
43 Returns
44 -------
45 str, optional
46 Query string parameter value
47 """
48 params = self.query_string_parameters
49 return default_value if params is None else params.get(name, default_value)
50
51 def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
52 """Get header value by name
53
54 Parameters
55 ----------
56 name: str
57 Header name
58 default_value: str, optional
59 Default value if no value was found by name
60 Returns
61 -------
62 str, optional
63 Header value
64 """
65 return self.headers.get(name, default_value)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aws_lambda_powertools/utilities/data_classes/common.py b/aws_lambda_powertools/utilities/data_classes/common.py
--- a/aws_lambda_powertools/utilities/data_classes/common.py
+++ b/aws_lambda_powertools/utilities/data_classes/common.py
@@ -48,7 +48,9 @@
params = self.query_string_parameters
return default_value if params is None else params.get(name, default_value)
- def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
+ def get_header_value(
+ self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False
+ ) -> Optional[str]:
"""Get header value by name
Parameters
@@ -57,9 +59,14 @@
Header name
default_value: str, optional
Default value if no value was found by name
+ case_sensitive: bool
+ Whether to use a case sensitive look up
Returns
-------
str, optional
Header value
"""
- return self.headers.get(name, default_value)
+ if case_sensitive:
+ return self.headers.get(name, default_value)
+
+ return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)
| {"golden_diff": "diff --git a/aws_lambda_powertools/utilities/data_classes/common.py b/aws_lambda_powertools/utilities/data_classes/common.py\n--- a/aws_lambda_powertools/utilities/data_classes/common.py\n+++ b/aws_lambda_powertools/utilities/data_classes/common.py\n@@ -48,7 +48,9 @@\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n \n- def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n+ def get_header_value(\n+ self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\n+ ) -> Optional[str]:\n \"\"\"Get header value by name\n \n Parameters\n@@ -57,9 +59,14 @@\n Header name\n default_value: str, optional\n Default value if no value was found by name\n+ case_sensitive: bool\n+ Whether to use a case sensitive look up\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n- return self.headers.get(name, default_value)\n+ if case_sensitive:\n+ return self.headers.get(name, default_value)\n+\n+ return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\n", "issue": "Case insensitive lookup of header values in http proxy events\n**Is your feature request related to a problem? Please describe.**\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\n\r\nHttp headers are meant to be case insensitive, but lambda events use case sensitive dicts for the header keys.\r\n\r\nFor example getting the `Authorization` header you will need to do a case insensitive look up to support if the header is `authorization`. So you end up doing something like this.\r\n\r\n```python3\r\nauthorization = None\r\nfor key, value in self.headers.items():\r\n if name.lower() == 'authorization'\r\n authorization = value\r\n```\r\n\r\n**Describe the solution you'd like**\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n> Linked pull request : #185\r\n\r\nUpdate `BaseProxyEvent.get_header_value` method to support case insensitive lookups by default\r\n\r\n```python3\r\nauthorization = event.get_header_value(\"authorization\")\r\n```\r\n\r\n```python3\r\n def get_header_value(\r\n self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\r\n ) -> Optional[str]:\r\n \"\"\"Get header value by name\r\n\r\n Parameters\r\n ----------\r\n name: str\r\n Header name\r\n default_value: str, optional\r\n Default value if no value was found by name\r\n case_sensitive: bool\r\n Whether to use a case sensitive look up\r\n Returns\r\n -------\r\n str, optional\r\n Header value\r\n \"\"\"\r\n if case_sensitive:\r\n return self.headers.get(name, default_value)\r\n \r\n return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\r\n```\r\n\r\n**Questions**\r\n\r\n* Should we default to case insensitive look ups? This is what most people would want?\r\n* Just return the first match? Or support returning all matches?\r\n\r\n**Describe alternatives you've considered**\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\nUse the requests CaseInsensitiveDict or port it :\r\n* https://github.com/psf/requests/blob/333fa8d3f201f68b82d9e8f26be4987e869fd3bf/requests/structures.py#L15\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\nFrom the Http 1.1 spec, https headers are not supposed to be case sensitive: https://www.w3.org/Protocols/rfc2616/rfc2616.html\n", "before_files": [{"content": "from typing import Any, Dict, Optional\n\n\nclass DictWrapper:\n \"\"\"Provides a single read only access to a wrapper dict\"\"\"\n\n def __init__(self, data: Dict[str, Any]):\n self._data = data\n\n def __getitem__(self, key: str) -> Any:\n return self._data[key]\n\n def get(self, key: str) -> Optional[Any]:\n return self._data.get(key)\n\n\nclass BaseProxyEvent(DictWrapper):\n @property\n def headers(self) -> Dict[str, str]:\n return self[\"headers\"]\n\n @property\n def query_string_parameters(self) -> Optional[Dict[str, str]]:\n return self.get(\"queryStringParameters\")\n\n @property\n def is_base64_encoded(self) -> bool:\n return self.get(\"isBase64Encoded\")\n\n @property\n def body(self) -> Optional[str]:\n return self.get(\"body\")\n\n def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get query string value by name\n\n Parameters\n ----------\n name: str\n Query string parameter name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Query string parameter value\n \"\"\"\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n\n def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get header value by name\n\n Parameters\n ----------\n name: str\n Header name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n return self.headers.get(name, default_value)\n", "path": "aws_lambda_powertools/utilities/data_classes/common.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional\n\n\nclass DictWrapper:\n \"\"\"Provides a single read only access to a wrapper dict\"\"\"\n\n def __init__(self, data: Dict[str, Any]):\n self._data = data\n\n def __getitem__(self, key: str) -> Any:\n return self._data[key]\n\n def get(self, key: str) -> Optional[Any]:\n return self._data.get(key)\n\n\nclass BaseProxyEvent(DictWrapper):\n @property\n def headers(self) -> Dict[str, str]:\n return self[\"headers\"]\n\n @property\n def query_string_parameters(self) -> Optional[Dict[str, str]]:\n return self.get(\"queryStringParameters\")\n\n @property\n def is_base64_encoded(self) -> bool:\n return self.get(\"isBase64Encoded\")\n\n @property\n def body(self) -> Optional[str]:\n return self.get(\"body\")\n\n def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get query string value by name\n\n Parameters\n ----------\n name: str\n Query string parameter name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Query string parameter value\n \"\"\"\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n\n def get_header_value(\n self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\n ) -> Optional[str]:\n \"\"\"Get header value by name\n\n Parameters\n ----------\n name: str\n Header name\n default_value: str, optional\n Default value if no value was found by name\n case_sensitive: bool\n Whether to use a case sensitive look up\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n if 
case_sensitive:\n return self.headers.get(name, default_value)\n\n return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\n", "path": "aws_lambda_powertools/utilities/data_classes/common.py"}]} | 1,344 | 291 |
gh_patches_debug_25767 | rasdani/github-patches | git_diff | python-pillow__Pillow-3143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import matplotlib qt4 backend after import PIL ImageQt when PyQt5 is installed causes exception
### What did you do?
```python
from PyQt4 import QtCore
from PIL.ImageQt import ImageQt
from matplotlib.backends.backend_qt4agg import (FigureCanvasQTAgg as FigureCanvas, NavigationToolbar2QT as NavigationToolbar)
```
### What did you expect to happen?
Import PIL and matplotlib using PyQt4 (in an environment with PyQt5 also installed...)
### What actually happened?
RuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class
Note that, in my view, this is related to issue #1939 (closed): https://github.com/python-pillow/Pillow/issues/1939
### What versions of Pillow and Python are you using?
Python 3.4.5
PIL 4.2.1
matplotlib 2.1.0
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as plone, Django, or buildout, try to replicate the issue just using Pillow.
```
Python 3.4.5 (default, May 29 2017, 15:17:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import sys
>>>
>>> 'PyQt4' in sys.modules
False
>>> 'PyQt5' in sys.modules
False
>>>
>>> from PyQt4 import QtCore
>>> 'PyQt4' in sys.modules
True
>>> 'PyQt5' in sys.modules
False
>>>
>>> from PIL.ImageQt import ImageQt
Imported PyQt4... OK!
>>>
>>> 'PyQt4' in sys.modules
True
>>> 'PyQt5' in sys.modules
True
>>>
>>> from matplotlib.backends.backend_qt4agg import (FigureCanvasQTAgg as FigureCanvas,
... NavigationToolbar2QT as NavigationToolbar)
<module 'PyQt5' from '/usr/lib64/python3.4/site-packages/PyQt5/__init__.py'>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python3.4/site-packages/matplotlib/backends/backend_qt4agg.py", line 10, in <module>
from .backend_qt4 import (
File "/usr/lib64/python3.4/site-packages/matplotlib/backends/backend_qt4.py", line 18, in <module>
from .qt_compat import QtCore, QtWidgets, _getSaveFileName, __version__
File "/usr/lib64/python3.4/site-packages/matplotlib/backends/qt_compat.py", line 153, in <module>
from PyQt5 import QtCore, QtGui, QtWidgets
RuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class
>>>
```
An attempt to fix it is this (currently used in our production environment):
# /usr/lib64/python3.4/site-packages/PIL/ImageQt.py
```python
(...)
import sys
from . import Image
from ._util import isPath
from io import BytesIO
qt_is_installed = True
qt_version = None
try:
from PyQt5.QtGui import QImage, qRgba, QPixmap
from PyQt5.QtCore import QBuffer, QIODevice
qt_version = '5'
except (ImportError, RuntimeError):
try:
del sys.modules['PyQt5']
except:
pass
try:
from PyQt4.QtGui import QImage, qRgba, QPixmap
from PyQt4.QtCore import QBuffer, QIODevice
qt_version = '4'
except (ImportError, RuntimeError):
try:
del sys.modules['PyQt4']
except:
pass
try:
from PySide.QtGui import QImage, qRgba, QPixmap
from PySide.QtCore import QBuffer, QIODevice
qt_version = 'side'
except ImportError:
qt_is_installed = False
(...)
```
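The workaround above deletes entries from `sys.modules`, which is fairly invasive. A lighter-weight sketch of the underlying idea, i.e. preferring whichever Qt binding is already imported before probing the others (the candidate order here is an assumption, not Pillow's actual logic):

```python
import sys

def pick_qt_binding(candidates=("PyQt5", "PyQt4", "PySide")):
    # Probe bindings that are already imported first, so importing PIL.ImageQt
    # does not pull in a second, conflicting Qt wrapper.
    ordered = sorted(candidates, key=lambda name: name in sys.modules, reverse=True)
    return ordered[0]

print(pick_qt_binding())  # e.g. "PyQt4" if only PyQt4 is already imported
```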
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/ImageQt.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # a simple Qt image interface.
6 #
7 # history:
8 # 2006-06-03 fl: created
9 # 2006-06-04 fl: inherit from QImage instead of wrapping it
10 # 2006-06-05 fl: removed toimage helper; move string support to ImageQt
11 # 2013-11-13 fl: add support for Qt5 ([email protected])
12 #
13 # Copyright (c) 2006 by Secret Labs AB
14 # Copyright (c) 2006 by Fredrik Lundh
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 from . import Image
20 from ._util import isPath, py3
21 from io import BytesIO
22
23 qt_is_installed = True
24 qt_version = None
25 try:
26 from PyQt5.QtGui import QImage, qRgba, QPixmap
27 from PyQt5.QtCore import QBuffer, QIODevice
28 qt_version = '5'
29 except (ImportError, RuntimeError):
30 try:
31 from PyQt4.QtGui import QImage, qRgba, QPixmap
32 from PyQt4.QtCore import QBuffer, QIODevice
33 qt_version = '4'
34 except (ImportError, RuntimeError):
35 try:
36 from PySide.QtGui import QImage, qRgba, QPixmap
37 from PySide.QtCore import QBuffer, QIODevice
38 qt_version = 'side'
39 except ImportError:
40 qt_is_installed = False
41
42
43 def rgb(r, g, b, a=255):
44 """(Internal) Turns an RGB color into a Qt compatible color integer."""
45 # use qRgb to pack the colors, and then turn the resulting long
46 # into a negative integer with the same bitpattern.
47 return (qRgba(r, g, b, a) & 0xffffffff)
48
49
50 def fromqimage(im):
51 """
52 :param im: A PIL Image object, or a file name
53 (given either as Python string or a PyQt string object)
54 """
55 buffer = QBuffer()
56 buffer.open(QIODevice.ReadWrite)
57 # preserve alha channel with png
58 # otherwise ppm is more friendly with Image.open
59 if im.hasAlphaChannel():
60 im.save(buffer, 'png')
61 else:
62 im.save(buffer, 'ppm')
63
64 b = BytesIO()
65 try:
66 b.write(buffer.data())
67 except TypeError:
68 # workaround for Python 2
69 b.write(str(buffer.data()))
70 buffer.close()
71 b.seek(0)
72
73 return Image.open(b)
74
75
76 def fromqpixmap(im):
77 return fromqimage(im)
78 # buffer = QBuffer()
79 # buffer.open(QIODevice.ReadWrite)
80 # # im.save(buffer)
81 # # What if png doesn't support some image features like animation?
82 # im.save(buffer, 'ppm')
83 # bytes_io = BytesIO()
84 # bytes_io.write(buffer.data())
85 # buffer.close()
86 # bytes_io.seek(0)
87 # return Image.open(bytes_io)
88
89
90 def align8to32(bytes, width, mode):
91 """
92 converts each scanline of data from 8 bit to 32 bit aligned
93 """
94
95 bits_per_pixel = {
96 '1': 1,
97 'L': 8,
98 'P': 8,
99 }[mode]
100
101 # calculate bytes per line and the extra padding if needed
102 bits_per_line = bits_per_pixel * width
103 full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)
104 bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)
105
106 extra_padding = -bytes_per_line % 4
107
108 # already 32 bit aligned by luck
109 if not extra_padding:
110 return bytes
111
112 new_data = []
113 for i in range(len(bytes) // bytes_per_line):
114 new_data.append(bytes[i*bytes_per_line:(i+1)*bytes_per_line] + b'\x00' * extra_padding)
115
116 return b''.join(new_data)
117
118
119 def _toqclass_helper(im):
120 data = None
121 colortable = None
122
123 # handle filename, if given instead of image name
124 if hasattr(im, "toUtf8"):
125 # FIXME - is this really the best way to do this?
126 if py3:
127 im = str(im.toUtf8(), "utf-8")
128 else:
129 im = unicode(im.toUtf8(), "utf-8")
130 if isPath(im):
131 im = Image.open(im)
132
133 if im.mode == "1":
134 format = QImage.Format_Mono
135 elif im.mode == "L":
136 format = QImage.Format_Indexed8
137 colortable = []
138 for i in range(256):
139 colortable.append(rgb(i, i, i))
140 elif im.mode == "P":
141 format = QImage.Format_Indexed8
142 colortable = []
143 palette = im.getpalette()
144 for i in range(0, len(palette), 3):
145 colortable.append(rgb(*palette[i:i+3]))
146 elif im.mode == "RGB":
147 data = im.tobytes("raw", "BGRX")
148 format = QImage.Format_RGB32
149 elif im.mode == "RGBA":
150 try:
151 data = im.tobytes("raw", "BGRA")
152 except SystemError:
153 # workaround for earlier versions
154 r, g, b, a = im.split()
155 im = Image.merge("RGBA", (b, g, r, a))
156 format = QImage.Format_ARGB32
157 else:
158 raise ValueError("unsupported image mode %r" % im.mode)
159
160 __data = data or align8to32(im.tobytes(), im.size[0], im.mode)
161 return {
162 'data': __data, 'im': im, 'format': format, 'colortable': colortable
163 }
164
165
166 if qt_is_installed:
167 class ImageQt(QImage):
168
169 def __init__(self, im):
170 """
171 An PIL image wrapper for Qt. This is a subclass of PyQt's QImage
172 class.
173
174 :param im: A PIL Image object, or a file name (given either as Python
175 string or a PyQt string object).
176 """
177 im_data = _toqclass_helper(im)
178 # must keep a reference, or Qt will crash!
179 # All QImage constructors that take data operate on an existing
180 # buffer, so this buffer has to hang on for the life of the image.
181 # Fixes https://github.com/python-pillow/Pillow/issues/1370
182 self.__data = im_data['data']
183 QImage.__init__(self,
184 self.__data, im_data['im'].size[0],
185 im_data['im'].size[1], im_data['format'])
186 if im_data['colortable']:
187 self.setColorTable(im_data['colortable'])
188
189
190 def toqimage(im):
191 return ImageQt(im)
192
193
194 def toqpixmap(im):
195 # # This doesn't work. For now using a dumb approach.
196 # im_data = _toqclass_helper(im)
197 # result = QPixmap(im_data['im'].size[0], im_data['im'].size[1])
198 # result.loadFromData(im_data['data'])
199 # Fix some strange bug that causes
200 if im.mode == 'RGB':
201 im = im.convert('RGBA')
202
203 qimage = toqimage(im)
204 return QPixmap.fromImage(qimage)
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/PIL/ImageQt.py b/src/PIL/ImageQt.py
--- a/src/PIL/ImageQt.py
+++ b/src/PIL/ImageQt.py
@@ -19,25 +19,33 @@
from . import Image
from ._util import isPath, py3
from io import BytesIO
-
-qt_is_installed = True
-qt_version = None
-try:
- from PyQt5.QtGui import QImage, qRgba, QPixmap
- from PyQt5.QtCore import QBuffer, QIODevice
- qt_version = '5'
-except (ImportError, RuntimeError):
+import sys
+
+qt_versions = [
+ ['5', 'PyQt5'],
+ ['4', 'PyQt4'],
+ ['side', 'PySide']
+]
+# If a version has already been imported, attempt it first
+qt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)
+for qt_version, qt_module in qt_versions:
try:
- from PyQt4.QtGui import QImage, qRgba, QPixmap
- from PyQt4.QtCore import QBuffer, QIODevice
- qt_version = '4'
- except (ImportError, RuntimeError):
- try:
+ if qt_module == 'PyQt5':
+ from PyQt5.QtGui import QImage, qRgba, QPixmap
+ from PyQt5.QtCore import QBuffer, QIODevice
+ elif qt_module == 'PyQt4':
+ from PyQt4.QtGui import QImage, qRgba, QPixmap
+ from PyQt4.QtCore import QBuffer, QIODevice
+ elif qt_module == 'PySide':
from PySide.QtGui import QImage, qRgba, QPixmap
from PySide.QtCore import QBuffer, QIODevice
- qt_version = 'side'
- except ImportError:
- qt_is_installed = False
+ except (ImportError, RuntimeError):
+ continue
+ qt_is_installed = True
+ break
+else:
+ qt_is_installed = False
+ qt_version = None
def rgb(r, g, b, a=255):
| {"golden_diff": "diff --git a/src/PIL/ImageQt.py b/src/PIL/ImageQt.py\n--- a/src/PIL/ImageQt.py\n+++ b/src/PIL/ImageQt.py\n@@ -19,25 +19,33 @@\n from . import Image\n from ._util import isPath, py3\n from io import BytesIO\n-\n-qt_is_installed = True\n-qt_version = None\n-try:\n- from PyQt5.QtGui import QImage, qRgba, QPixmap\n- from PyQt5.QtCore import QBuffer, QIODevice\n- qt_version = '5'\n-except (ImportError, RuntimeError):\n+import sys\n+\n+qt_versions = [\n+ ['5', 'PyQt5'],\n+ ['4', 'PyQt4'],\n+ ['side', 'PySide']\n+]\n+# If a version has already been imported, attempt it first\n+qt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)\n+for qt_version, qt_module in qt_versions:\n try:\n- from PyQt4.QtGui import QImage, qRgba, QPixmap\n- from PyQt4.QtCore import QBuffer, QIODevice\n- qt_version = '4'\n- except (ImportError, RuntimeError):\n- try:\n+ if qt_module == 'PyQt5':\n+ from PyQt5.QtGui import QImage, qRgba, QPixmap\n+ from PyQt5.QtCore import QBuffer, QIODevice\n+ elif qt_module == 'PyQt4':\n+ from PyQt4.QtGui import QImage, qRgba, QPixmap\n+ from PyQt4.QtCore import QBuffer, QIODevice\n+ elif qt_module == 'PySide':\n from PySide.QtGui import QImage, qRgba, QPixmap\n from PySide.QtCore import QBuffer, QIODevice\n- qt_version = 'side'\n- except ImportError:\n- qt_is_installed = False\n+ except (ImportError, RuntimeError):\n+ continue\n+ qt_is_installed = True\n+ break\n+else:\n+ qt_is_installed = False\n+ qt_version = None\n \n \n def rgb(r, g, b, a=255):\n", "issue": "Import matplotlib qt4 backend after import PIL ImageQt when PyQt5 is installed causes exception\n### What did you do?\r\n\r\n```python\r\nfrom PyQt4 import QtCore\r\nfrom PIL.ImageQt import ImageQt\r\nfrom matplotlib.backends.backend_qt4agg import (FigureCanvasQTAgg as FigureCanvas, NavigationToolbar2QT as NavigationToolbar)\r\n```\r\n\r\n### What did you expect to happen?\r\nImport PIL and matplotlib using PyQt4 (in an environment with PyQt5 also installed...)\r\n\r\n### What actually happened?\r\nRuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class\r\n\r\nNote that this is related, to me, with issue #1939 (closed): https://github.com/python-pillow/Pillow/issues/1939\r\n\r\n### What versions of Pillow and Python are you using?\r\nPython 3.4.5\r\nPIL 4.2.1\r\nmatplotlib 2.1.0\r\n\r\nPlease include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.\r\n\r\nThe best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as plone, Django, or buildout, try to replicate the issue just using Pillow. \r\n\r\n```\r\nPython 3.4.5 (default, May 29 2017, 15:17:55) \r\n[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> \r\n>>> import sys\r\n>>> \r\n>>> 'PyQt4' in sys.modules\r\nFalse\r\n>>> 'PyQt5' in sys.modules\r\nFalse\r\n>>> \r\n>>> from PyQt4 import QtCore\r\n>>> 'PyQt4' in sys.modules\r\nTrue\r\n>>> 'PyQt5' in sys.modules\r\nFalse\r\n>>> \r\n>>> from PIL.ImageQt import ImageQt\r\nImported PyQt4... OK!\r\n>>> \r\n>>> 'PyQt4' in sys.modules\r\nTrue\r\n>>> 'PyQt5' in sys.modules\r\nTrue\r\n>>> \r\n>>> from matplotlib.backends.backend_qt4agg import (FigureCanvasQTAgg as FigureCanvas,\r\n... 
NavigationToolbar2QT as NavigationToolbar)\r\n<module 'PyQt5' from '/usr/lib64/python3.4/site-packages/PyQt5/__init__.py'>\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/lib64/python3.4/site-packages/matplotlib/backends/backend_qt4agg.py\", line 10, in <module>\r\n from .backend_qt4 import (\r\n File \"/usr/lib64/python3.4/site-packages/matplotlib/backends/backend_qt4.py\", line 18, in <module>\r\n from .qt_compat import QtCore, QtWidgets, _getSaveFileName, __version__\r\n File \"/usr/lib64/python3.4/site-packages/matplotlib/backends/qt_compat.py\", line 153, in <module>\r\n from PyQt5 import QtCore, QtGui, QtWidgets\r\nRuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class\r\n>>> \r\n```\r\nA try to fix it, is this (currently used in our production environment):\r\n\r\n\\# /usr/lib64/python3.4/site-packages/PIL/ImageQt.py\r\n\r\n```python\r\n(...)\r\nimport sys\r\n\r\nfrom . import Image\r\nfrom ._util import isPath\r\nfrom io import BytesIO\r\n\r\nqt_is_installed = True\r\nqt_version = None\r\ntry:\r\n from PyQt5.QtGui import QImage, qRgba, QPixmap\r\n from PyQt5.QtCore import QBuffer, QIODevice\r\n qt_version = '5'\r\nexcept (ImportError, RuntimeError):\r\n try:\r\n del sys.modules['PyQt5']\r\n except:\r\n pass\r\n\r\n try:\r\n from PyQt4.QtGui import QImage, qRgba, QPixmap\r\n from PyQt4.QtCore import QBuffer, QIODevice\r\n qt_version = '4'\r\n except (ImportError, RuntimeError):\r\n try:\r\n del sys.modules['PyQt4']\r\n except:\r\n pass\r\n\r\n try:\r\n from PySide.QtGui import QImage, qRgba, QPixmap\r\n from PySide.QtCore import QBuffer, QIODevice\r\n qt_version = 'side'\r\n except ImportError:\r\n qt_is_installed = False\r\n(...)\r\n```\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# a simple Qt image interface.\n#\n# history:\n# 2006-06-03 fl: created\n# 2006-06-04 fl: inherit from QImage instead of wrapping it\n# 2006-06-05 fl: removed toimage helper; move string support to ImageQt\n# 2013-11-13 fl: add support for Qt5 ([email protected])\n#\n# Copyright (c) 2006 by Secret Labs AB\n# Copyright (c) 2006 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom . 
import Image\nfrom ._util import isPath, py3\nfrom io import BytesIO\n\nqt_is_installed = True\nqt_version = None\ntry:\n from PyQt5.QtGui import QImage, qRgba, QPixmap\n from PyQt5.QtCore import QBuffer, QIODevice\n qt_version = '5'\nexcept (ImportError, RuntimeError):\n try:\n from PyQt4.QtGui import QImage, qRgba, QPixmap\n from PyQt4.QtCore import QBuffer, QIODevice\n qt_version = '4'\n except (ImportError, RuntimeError):\n try:\n from PySide.QtGui import QImage, qRgba, QPixmap\n from PySide.QtCore import QBuffer, QIODevice\n qt_version = 'side'\n except ImportError:\n qt_is_installed = False\n\n\ndef rgb(r, g, b, a=255):\n \"\"\"(Internal) Turns an RGB color into a Qt compatible color integer.\"\"\"\n # use qRgb to pack the colors, and then turn the resulting long\n # into a negative integer with the same bitpattern.\n return (qRgba(r, g, b, a) & 0xffffffff)\n\n\ndef fromqimage(im):\n \"\"\"\n :param im: A PIL Image object, or a file name\n (given either as Python string or a PyQt string object)\n \"\"\"\n buffer = QBuffer()\n buffer.open(QIODevice.ReadWrite)\n # preserve alha channel with png\n # otherwise ppm is more friendly with Image.open\n if im.hasAlphaChannel():\n im.save(buffer, 'png')\n else:\n im.save(buffer, 'ppm')\n\n b = BytesIO()\n try:\n b.write(buffer.data())\n except TypeError:\n # workaround for Python 2\n b.write(str(buffer.data()))\n buffer.close()\n b.seek(0)\n\n return Image.open(b)\n\n\ndef fromqpixmap(im):\n return fromqimage(im)\n # buffer = QBuffer()\n # buffer.open(QIODevice.ReadWrite)\n # # im.save(buffer)\n # # What if png doesn't support some image features like animation?\n # im.save(buffer, 'ppm')\n # bytes_io = BytesIO()\n # bytes_io.write(buffer.data())\n # buffer.close()\n # bytes_io.seek(0)\n # return Image.open(bytes_io)\n\n\ndef align8to32(bytes, width, mode):\n \"\"\"\n converts each scanline of data from 8 bit to 32 bit aligned\n \"\"\"\n\n bits_per_pixel = {\n '1': 1,\n 'L': 8,\n 'P': 8,\n }[mode]\n\n # calculate bytes per line and the extra padding if needed\n bits_per_line = bits_per_pixel * width\n full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)\n bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)\n\n extra_padding = -bytes_per_line % 4\n\n # already 32 bit aligned by luck\n if not extra_padding:\n return bytes\n\n new_data = []\n for i in range(len(bytes) // bytes_per_line):\n new_data.append(bytes[i*bytes_per_line:(i+1)*bytes_per_line] + b'\\x00' * extra_padding)\n\n return b''.join(new_data)\n\n\ndef _toqclass_helper(im):\n data = None\n colortable = None\n\n # handle filename, if given instead of image name\n if hasattr(im, \"toUtf8\"):\n # FIXME - is this really the best way to do this?\n if py3:\n im = str(im.toUtf8(), \"utf-8\")\n else:\n im = unicode(im.toUtf8(), \"utf-8\")\n if isPath(im):\n im = Image.open(im)\n\n if im.mode == \"1\":\n format = QImage.Format_Mono\n elif im.mode == \"L\":\n format = QImage.Format_Indexed8\n colortable = []\n for i in range(256):\n colortable.append(rgb(i, i, i))\n elif im.mode == \"P\":\n format = QImage.Format_Indexed8\n colortable = []\n palette = im.getpalette()\n for i in range(0, len(palette), 3):\n colortable.append(rgb(*palette[i:i+3]))\n elif im.mode == \"RGB\":\n data = im.tobytes(\"raw\", \"BGRX\")\n format = QImage.Format_RGB32\n elif im.mode == \"RGBA\":\n try:\n data = im.tobytes(\"raw\", \"BGRA\")\n except SystemError:\n # workaround for earlier versions\n r, g, b, a = im.split()\n im = Image.merge(\"RGBA\", (b, g, r, a))\n format = 
QImage.Format_ARGB32\n else:\n raise ValueError(\"unsupported image mode %r\" % im.mode)\n\n __data = data or align8to32(im.tobytes(), im.size[0], im.mode)\n return {\n 'data': __data, 'im': im, 'format': format, 'colortable': colortable\n }\n\n\nif qt_is_installed:\n class ImageQt(QImage):\n\n def __init__(self, im):\n \"\"\"\n An PIL image wrapper for Qt. This is a subclass of PyQt's QImage\n class.\n\n :param im: A PIL Image object, or a file name (given either as Python\n string or a PyQt string object).\n \"\"\"\n im_data = _toqclass_helper(im)\n # must keep a reference, or Qt will crash!\n # All QImage constructors that take data operate on an existing\n # buffer, so this buffer has to hang on for the life of the image.\n # Fixes https://github.com/python-pillow/Pillow/issues/1370\n self.__data = im_data['data']\n QImage.__init__(self,\n self.__data, im_data['im'].size[0],\n im_data['im'].size[1], im_data['format'])\n if im_data['colortable']:\n self.setColorTable(im_data['colortable'])\n\n\ndef toqimage(im):\n return ImageQt(im)\n\n\ndef toqpixmap(im):\n # # This doesn't work. For now using a dumb approach.\n # im_data = _toqclass_helper(im)\n # result = QPixmap(im_data['im'].size[0], im_data['im'].size[1])\n # result.loadFromData(im_data['data'])\n # Fix some strange bug that causes\n if im.mode == 'RGB':\n im = im.convert('RGBA')\n\n qimage = toqimage(im)\n return QPixmap.fromImage(qimage)\n", "path": "src/PIL/ImageQt.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# a simple Qt image interface.\n#\n# history:\n# 2006-06-03 fl: created\n# 2006-06-04 fl: inherit from QImage instead of wrapping it\n# 2006-06-05 fl: removed toimage helper; move string support to ImageQt\n# 2013-11-13 fl: add support for Qt5 ([email protected])\n#\n# Copyright (c) 2006 by Secret Labs AB\n# Copyright (c) 2006 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom . 
import Image\nfrom ._util import isPath, py3\nfrom io import BytesIO\nimport sys\n\nqt_versions = [\n ['5', 'PyQt5'],\n ['4', 'PyQt4'],\n ['side', 'PySide']\n]\n# If a version has already been imported, attempt it first\nqt_versions.sort(key=lambda qt_version: qt_version[1] in sys.modules, reverse=True)\nfor qt_version, qt_module in qt_versions:\n try:\n if qt_module == 'PyQt5':\n from PyQt5.QtGui import QImage, qRgba, QPixmap\n from PyQt5.QtCore import QBuffer, QIODevice\n elif qt_module == 'PyQt4':\n from PyQt4.QtGui import QImage, qRgba, QPixmap\n from PyQt4.QtCore import QBuffer, QIODevice\n elif qt_module == 'PySide':\n from PySide.QtGui import QImage, qRgba, QPixmap\n from PySide.QtCore import QBuffer, QIODevice\n except (ImportError, RuntimeError):\n continue\n qt_is_installed = True\n break\nelse:\n qt_is_installed = False\n qt_version = None\n\n\ndef rgb(r, g, b, a=255):\n \"\"\"(Internal) Turns an RGB color into a Qt compatible color integer.\"\"\"\n # use qRgb to pack the colors, and then turn the resulting long\n # into a negative integer with the same bitpattern.\n return (qRgba(r, g, b, a) & 0xffffffff)\n\n\ndef fromqimage(im):\n \"\"\"\n :param im: A PIL Image object, or a file name\n (given either as Python string or a PyQt string object)\n \"\"\"\n buffer = QBuffer()\n buffer.open(QIODevice.ReadWrite)\n # preserve alha channel with png\n # otherwise ppm is more friendly with Image.open\n if im.hasAlphaChannel():\n im.save(buffer, 'png')\n else:\n im.save(buffer, 'ppm')\n\n b = BytesIO()\n try:\n b.write(buffer.data())\n except TypeError:\n # workaround for Python 2\n b.write(str(buffer.data()))\n buffer.close()\n b.seek(0)\n\n return Image.open(b)\n\n\ndef fromqpixmap(im):\n return fromqimage(im)\n # buffer = QBuffer()\n # buffer.open(QIODevice.ReadWrite)\n # # im.save(buffer)\n # # What if png doesn't support some image features like animation?\n # im.save(buffer, 'ppm')\n # bytes_io = BytesIO()\n # bytes_io.write(buffer.data())\n # buffer.close()\n # bytes_io.seek(0)\n # return Image.open(bytes_io)\n\n\ndef align8to32(bytes, width, mode):\n \"\"\"\n converts each scanline of data from 8 bit to 32 bit aligned\n \"\"\"\n\n bits_per_pixel = {\n '1': 1,\n 'L': 8,\n 'P': 8,\n }[mode]\n\n # calculate bytes per line and the extra padding if needed\n bits_per_line = bits_per_pixel * width\n full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8)\n bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0)\n\n extra_padding = -bytes_per_line % 4\n\n # already 32 bit aligned by luck\n if not extra_padding:\n return bytes\n\n new_data = []\n for i in range(len(bytes) // bytes_per_line):\n new_data.append(bytes[i*bytes_per_line:(i+1)*bytes_per_line] + b'\\x00' * extra_padding)\n\n return b''.join(new_data)\n\n\ndef _toqclass_helper(im):\n data = None\n colortable = None\n\n # handle filename, if given instead of image name\n if hasattr(im, \"toUtf8\"):\n # FIXME - is this really the best way to do this?\n if py3:\n im = str(im.toUtf8(), \"utf-8\")\n else:\n im = unicode(im.toUtf8(), \"utf-8\")\n if isPath(im):\n im = Image.open(im)\n\n if im.mode == \"1\":\n format = QImage.Format_Mono\n elif im.mode == \"L\":\n format = QImage.Format_Indexed8\n colortable = []\n for i in range(256):\n colortable.append(rgb(i, i, i))\n elif im.mode == \"P\":\n format = QImage.Format_Indexed8\n colortable = []\n palette = im.getpalette()\n for i in range(0, len(palette), 3):\n colortable.append(rgb(*palette[i:i+3]))\n elif im.mode == \"RGB\":\n data = 
im.tobytes(\"raw\", \"BGRX\")\n format = QImage.Format_RGB32\n elif im.mode == \"RGBA\":\n try:\n data = im.tobytes(\"raw\", \"BGRA\")\n except SystemError:\n # workaround for earlier versions\n r, g, b, a = im.split()\n im = Image.merge(\"RGBA\", (b, g, r, a))\n format = QImage.Format_ARGB32\n else:\n raise ValueError(\"unsupported image mode %r\" % im.mode)\n\n __data = data or align8to32(im.tobytes(), im.size[0], im.mode)\n return {\n 'data': __data, 'im': im, 'format': format, 'colortable': colortable\n }\n\n\nif qt_is_installed:\n class ImageQt(QImage):\n\n def __init__(self, im):\n \"\"\"\n An PIL image wrapper for Qt. This is a subclass of PyQt's QImage\n class.\n\n :param im: A PIL Image object, or a file name (given either as Python\n string or a PyQt string object).\n \"\"\"\n im_data = _toqclass_helper(im)\n # must keep a reference, or Qt will crash!\n # All QImage constructors that take data operate on an existing\n # buffer, so this buffer has to hang on for the life of the image.\n # Fixes https://github.com/python-pillow/Pillow/issues/1370\n self.__data = im_data['data']\n QImage.__init__(self,\n self.__data, im_data['im'].size[0],\n im_data['im'].size[1], im_data['format'])\n if im_data['colortable']:\n self.setColorTable(im_data['colortable'])\n\n\ndef toqimage(im):\n return ImageQt(im)\n\n\ndef toqpixmap(im):\n # # This doesn't work. For now using a dumb approach.\n # im_data = _toqclass_helper(im)\n # result = QPixmap(im_data['im'].size[0], im_data['im'].size[1])\n # result.loadFromData(im_data['data'])\n # Fix some strange bug that causes\n if im.mode == 'RGB':\n im = im.convert('RGBA')\n\n qimage = toqimage(im)\n return QPixmap.fromImage(qimage)\n", "path": "src/PIL/ImageQt.py"}]} | 3,382 | 473 |
gh_patches_debug_66082 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dev server hot reload has stopped working
## Description
In #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.
Given the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.
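For context, a minimal self-contained sketch of the kind of gating described above (a generic Flask pattern, not the actual SecureDrop source; the `env` value and its default are assumptions):

```python
from flask import Flask

def create_app(env: str = "prod") -> Flask:
    app = Flask(__name__)
    # Only enable the interactive debugger and auto-reloader outside production.
    app.debug = env != "prod"
    return app

if __name__ == "__main__":
    create_app(env="dev").run()  # reloads on code changes because debug is True
```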
## Steps to Reproduce
- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).
- Run `make dev`.
- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.
- Stop the dev server.
- Check out `develop` and run `make dev`.
- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.
## Expected Behavior
That the dev server would notice code changes and reload to pick them up.
## Actual Behavior
It does not care one whit about your useless flailings. We are all `prod` now.
## Comments
Just need to restore `SDConfig.env`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/sdconfig.py`
Content:
```
1 from typing import Dict
2 from typing import Optional
3
4 from typing import Type
5
6 import config as _config
7 from typing import List
8
9
10 class SDConfig:
11 def __init__(self) -> None:
12 self.JOURNALIST_APP_FLASK_CONFIG_CLS = \
13 _config.JournalistInterfaceFlaskConfig # type: Type
14
15 self.SOURCE_APP_FLASK_CONFIG_CLS = \
16 _config.SourceInterfaceFlaskConfig # type: Type
17
18 self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str
19 self.DATABASE_FILE = _config.DATABASE_FILE # type: str
20
21 self.DATABASE_USERNAME = getattr(_config, "DATABASE_USERNAME", None) # type: Optional[str]
22 self.DATABASE_PASSWORD = getattr(_config, "DATABASE_PASSWORD", None) # type: Optional[str]
23 self.DATABASE_HOST = getattr(_config, "DATABASE_HOST", None) # type: Optional[str]
24 self.DATABASE_NAME = getattr(_config, "DATABASE_NAME", None) # type: Optional[str]
25
26 self.ADJECTIVES = _config.ADJECTIVES # type: str
27 self.NOUNS = _config.NOUNS # type: str
28 self.WORD_LIST = _config.WORD_LIST # type: str
29
30 self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str
31 self.SUPPORTED_LOCALES = getattr(
32 _config, "SUPPORTED_LOCALES", [self.DEFAULT_LOCALE]
33 ) # type: List[str]
34
35 self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str
36
37 self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str
38 self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str
39
40 self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str
41 self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str
42 self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]
43
44 self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str
45 self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str
46
47 self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int
48
49 self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str
50 self.TEMP_DIR = _config.TEMP_DIR # type: str
51 self.STORE_DIR = _config.STORE_DIR # type: str
52 self.TRANSLATION_DIRS = getattr(_config, "TRANSLATION_DIRS", None) # type: Optional[str]
53
54 self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
55
56 if _config.env == 'test':
57 self.RQ_WORKER_NAME = 'test' # type: str
58 else:
59 self.RQ_WORKER_NAME = 'default'
60
61 @property
62 def DATABASE_URI(self) -> str:
63 if self.DATABASE_ENGINE == "sqlite":
64 db_uri = (self.DATABASE_ENGINE + ":///" +
65 self.DATABASE_FILE)
66 else:
67 if self.DATABASE_USERNAME is None:
68 raise RuntimeError("Missing DATABASE_USERNAME entry from config.py")
69 if self.DATABASE_PASSWORD is None:
70 raise RuntimeError("Missing DATABASE_PASSWORD entry from config.py")
71 if self.DATABASE_HOST is None:
72 raise RuntimeError("Missing DATABASE_HOST entry from config.py")
73 if self.DATABASE_NAME is None:
74 raise RuntimeError("Missing DATABASE_NAME entry from config.py")
75
76 db_uri = (
77 self.DATABASE_ENGINE + '://' +
78 self.DATABASE_USERNAME + ':' +
79 self.DATABASE_PASSWORD + '@' +
80 self.DATABASE_HOST + '/' +
81 self.DATABASE_NAME
82 )
83 return db_uri
84
85
86 config = SDConfig() # type: SDConfig
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py
--- a/securedrop/sdconfig.py
+++ b/securedrop/sdconfig.py
@@ -53,7 +53,8 @@
self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
- if _config.env == 'test':
+ self.env = getattr(_config, 'env', 'prod') # type: str
+ if self.env == 'test':
self.RQ_WORKER_NAME = 'test' # type: str
else:
self.RQ_WORKER_NAME = 'default'
| {"golden_diff": "diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py\n--- a/securedrop/sdconfig.py\n+++ b/securedrop/sdconfig.py\n@@ -53,7 +53,8 @@\n \n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n \n- if _config.env == 'test':\n+ self.env = getattr(_config, 'env', 'prod') # type: str\n+ if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n", "issue": "dev server hot reload has stopped working\n## Description\r\n\r\nIn #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.\r\n\r\nGiven the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.\r\n\r\n## Steps to Reproduce\r\n\r\n- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).\r\n- Run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.\r\n- Stop the dev server.\r\n- Check out `develop` and run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.\r\n\r\n## Expected Behavior\r\n\r\nThat the dev server would notice code changes and reload to pick them up.\r\n\r\n## Actual Behavior\r\n\r\nIt does not care one whit about your useless flailings. 
We are all `prod` now.\r\n\r\n## Comments\r\n\r\nJust need to restore `SDConfig.env`.\r\n\n", "before_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS = \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n if _config.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}], "after_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS 
= \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n self.env = getattr(_config, 'env', 'prod') # type: str\n if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}]} | 1,771 | 145 |
gh_patches_debug_12924 | rasdani/github-patches | git_diff | feast-dev__feast-3628 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Schema inference of BQ OfflineStore costly
## Expected Behavior
When schema inference runs, I expect only a minor extra cost from scanning the table.
## Current Behavior
The BQ OfflineStore performs a full scan of the entire table, even though we are using a `limit` statement. According to the GCP documentation:
`Applying a LIMIT clause to a SELECT * query does not affect the amount of data read. You are billed for reading all bytes in the entire table, and the query counts against your free tier quota.`
https://cloud.google.com/bigquery/docs/best-practices-costs
## Steps to reproduce
1. Prepare a large feature table
2. feast apply as a feature view
### Specifications
- Version: 0.28.0
- Platform: Linux
- Subsystem:
## Possible Solution
We can add a filter on `timestamp_field` by modifying this [line](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/offline_stores/bigquery_source.py#L183). Even if the filter returns no data, the schema can still be inferred.
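A rough sketch of what that change to the schema-inference query might look like (the exact predicate is an assumption, not the shipped implementation; it mainly reduces bytes scanned when the table is partitioned or clustered on the timestamp column):

```python
def schema_inference_query(query: str, timestamp_field: str) -> str:
    # Hypothetical filtered variant of the unfiltered "SELECT * ... LIMIT 1"
    # query used today for schema inference.
    return (
        f"SELECT * FROM ({query}) "
        f"WHERE {timestamp_field} IS NOT NULL LIMIT 1"
    )

print(schema_inference_query("SELECT * FROM `project.dataset.features`", "event_ts"))
```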
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/infra/offline_stores/bigquery_source.py`
Content:
```
1 from typing import Callable, Dict, Iterable, List, Optional, Tuple
2
3 from typeguard import typechecked
4
5 from feast import type_map
6 from feast.data_source import DataSource
7 from feast.errors import DataSourceNoNameException, DataSourceNotFoundException
8 from feast.feature_logging import LoggingDestination
9 from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto
10 from feast.protos.feast.core.FeatureService_pb2 import (
11 LoggingConfig as LoggingConfigProto,
12 )
13 from feast.protos.feast.core.SavedDataset_pb2 import (
14 SavedDatasetStorage as SavedDatasetStorageProto,
15 )
16 from feast.repo_config import RepoConfig
17 from feast.saved_dataset import SavedDatasetStorage
18 from feast.usage import get_user_agent
19 from feast.value_type import ValueType
20
21
22 @typechecked
23 class BigQuerySource(DataSource):
24 def __init__(
25 self,
26 *,
27 name: Optional[str] = None,
28 timestamp_field: Optional[str] = None,
29 table: Optional[str] = None,
30 created_timestamp_column: Optional[str] = "",
31 field_mapping: Optional[Dict[str, str]] = None,
32 query: Optional[str] = None,
33 description: Optional[str] = "",
34 tags: Optional[Dict[str, str]] = None,
35 owner: Optional[str] = "",
36 ):
37 """Create a BigQuerySource from an existing table or query.
38
39 Args:
40 name (optional): Name for the source. Defaults to the table if not specified, in which
41 case the table must be specified.
42 timestamp_field (optional): Event timestamp field used for point in time
43 joins of feature values.
44 table (optional): BigQuery table where the features are stored. Exactly one of 'table'
45 and 'query' must be specified.
46 table (optional): The BigQuery table where features can be found.
47 created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.
48 field_mapping (optional): A dictionary mapping of column names in this data source to feature names in a feature table
49 or view. Only used for feature columns, not entities or timestamp columns.
50 query (optional): The query to be executed to obtain the features. Exactly one of 'table'
51 and 'query' must be specified.
52 description (optional): A human-readable description.
53 tags (optional): A dictionary of key-value pairs to store arbitrary metadata.
54 owner (optional): The owner of the bigquery source, typically the email of the primary
55 maintainer.
56 Example:
57 >>> from feast import BigQuerySource
58 >>> my_bigquery_source = BigQuerySource(table="gcp_project:bq_dataset.bq_table")
59 """
60 if table is None and query is None:
61 raise ValueError('No "table" or "query" argument provided.')
62
63 self.bigquery_options = BigQueryOptions(table=table, query=query)
64
65 # If no name, use the table as the default name.
66 if name is None and table is None:
67 raise DataSourceNoNameException()
68 name = name or table
69 assert name
70
71 super().__init__(
72 name=name,
73 timestamp_field=timestamp_field,
74 created_timestamp_column=created_timestamp_column,
75 field_mapping=field_mapping,
76 description=description,
77 tags=tags,
78 owner=owner,
79 )
80
81 # Note: Python requires redefining hash in child classes that override __eq__
82 def __hash__(self):
83 return super().__hash__()
84
85 def __eq__(self, other):
86 if not isinstance(other, BigQuerySource):
87 raise TypeError(
88 "Comparisons should only involve BigQuerySource class objects."
89 )
90
91 return (
92 super().__eq__(other)
93 and self.table == other.table
94 and self.query == other.query
95 )
96
97 @property
98 def table(self):
99 return self.bigquery_options.table
100
101 @property
102 def query(self):
103 return self.bigquery_options.query
104
105 @staticmethod
106 def from_proto(data_source: DataSourceProto):
107 assert data_source.HasField("bigquery_options")
108
109 return BigQuerySource(
110 name=data_source.name,
111 field_mapping=dict(data_source.field_mapping),
112 table=data_source.bigquery_options.table,
113 timestamp_field=data_source.timestamp_field,
114 created_timestamp_column=data_source.created_timestamp_column,
115 query=data_source.bigquery_options.query,
116 description=data_source.description,
117 tags=dict(data_source.tags),
118 owner=data_source.owner,
119 )
120
121 def to_proto(self) -> DataSourceProto:
122 data_source_proto = DataSourceProto(
123 name=self.name,
124 type=DataSourceProto.BATCH_BIGQUERY,
125 field_mapping=self.field_mapping,
126 bigquery_options=self.bigquery_options.to_proto(),
127 description=self.description,
128 tags=self.tags,
129 owner=self.owner,
130 timestamp_field=self.timestamp_field,
131 created_timestamp_column=self.created_timestamp_column,
132 )
133
134 return data_source_proto
135
136 def validate(self, config: RepoConfig):
137 if not self.query:
138 from google.api_core.exceptions import NotFound
139 from google.cloud import bigquery
140
141 client = bigquery.Client()
142 try:
143 client.get_table(self.table)
144 except NotFound:
145 raise DataSourceNotFoundException(self.table)
146
147 def get_table_query_string(self) -> str:
148 """Returns a string that can directly be used to reference this table in SQL"""
149 if self.table:
150 return f"`{self.table}`"
151 else:
152 return f"({self.query})"
153
154 @staticmethod
155 def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
156 return type_map.bq_to_feast_value_type
157
158 def get_table_column_names_and_types(
159 self, config: RepoConfig
160 ) -> Iterable[Tuple[str, str]]:
161 try:
162 from google.api_core import client_info as http_client_info
163 except ImportError as e:
164 from feast.errors import FeastExtrasDependencyImportError
165
166 raise FeastExtrasDependencyImportError("gcp", str(e))
167
168 from google.cloud import bigquery
169
170 project_id = (
171 config.offline_store.billing_project_id or config.offline_store.project_id
172 )
173 client = bigquery.Client(
174 project=project_id,
175 location=config.offline_store.location,
176 client_info=http_client_info.ClientInfo(user_agent=get_user_agent()),
177 )
178 if self.table:
179 schema = client.get_table(self.table).schema
180 if not isinstance(schema[0], bigquery.schema.SchemaField):
181 raise TypeError("Could not parse BigQuery table schema.")
182 else:
183 bq_columns_query = f"SELECT * FROM ({self.query}) LIMIT 1"
184 queryRes = client.query(bq_columns_query).result()
185 schema = queryRes.schema
186
187 name_type_pairs: List[Tuple[str, str]] = []
188 for field in schema:
189 bq_type_as_str = field.field_type
190 if field.mode == "REPEATED":
191 bq_type_as_str = "ARRAY<" + bq_type_as_str + ">"
192 name_type_pairs.append((field.name, bq_type_as_str))
193
194 return name_type_pairs
195
196
197 class BigQueryOptions:
198 """
199 Configuration options for a BigQuery data source.
200 """
201
202 def __init__(
203 self,
204 table: Optional[str],
205 query: Optional[str],
206 ):
207 self.table = table or ""
208 self.query = query or ""
209
210 @classmethod
211 def from_proto(cls, bigquery_options_proto: DataSourceProto.BigQueryOptions):
212 """
213 Creates a BigQueryOptions from a protobuf representation of a BigQuery option
214
215 Args:
216 bigquery_options_proto: A protobuf representation of a DataSource
217
218 Returns:
219 Returns a BigQueryOptions object based on the bigquery_options protobuf
220 """
221 bigquery_options = cls(
222 table=bigquery_options_proto.table,
223 query=bigquery_options_proto.query,
224 )
225
226 return bigquery_options
227
228 def to_proto(self) -> DataSourceProto.BigQueryOptions:
229 """
230 Converts an BigQueryOptionsProto object to its protobuf representation.
231
232 Returns:
233 BigQueryOptionsProto protobuf
234 """
235 bigquery_options_proto = DataSourceProto.BigQueryOptions(
236 table=self.table,
237 query=self.query,
238 )
239
240 return bigquery_options_proto
241
242
243 class SavedDatasetBigQueryStorage(SavedDatasetStorage):
244 _proto_attr_name = "bigquery_storage"
245
246 bigquery_options: BigQueryOptions
247
248 def __init__(self, table: str):
249 self.bigquery_options = BigQueryOptions(table=table, query=None)
250
251 @staticmethod
252 def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:
253 return SavedDatasetBigQueryStorage(
254 table=BigQueryOptions.from_proto(storage_proto.bigquery_storage).table
255 )
256
257 def to_proto(self) -> SavedDatasetStorageProto:
258 return SavedDatasetStorageProto(
259 bigquery_storage=self.bigquery_options.to_proto()
260 )
261
262 def to_data_source(self) -> DataSource:
263 return BigQuerySource(table=self.bigquery_options.table)
264
265
266 class BigQueryLoggingDestination(LoggingDestination):
267 _proto_kind = "bigquery_destination"
268
269 table: str
270
271 def __init__(self, *, table_ref):
272 self.table = table_ref
273
274 @classmethod
275 def from_proto(cls, config_proto: LoggingConfigProto) -> "LoggingDestination":
276 return BigQueryLoggingDestination(
277 table_ref=config_proto.bigquery_destination.table_ref,
278 )
279
280 def to_data_source(self) -> DataSource:
281 return BigQuerySource(table=self.table)
282
283 def to_proto(self) -> LoggingConfigProto:
284 return LoggingConfigProto(
285 bigquery_destination=LoggingConfigProto.BigQueryDestination(
286 table_ref=self.table
287 )
288 )
289
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/infra/offline_stores/bigquery_source.py b/sdk/python/feast/infra/offline_stores/bigquery_source.py
--- a/sdk/python/feast/infra/offline_stores/bigquery_source.py
+++ b/sdk/python/feast/infra/offline_stores/bigquery_source.py
@@ -180,9 +180,9 @@
if not isinstance(schema[0], bigquery.schema.SchemaField):
raise TypeError("Could not parse BigQuery table schema.")
else:
- bq_columns_query = f"SELECT * FROM ({self.query}) LIMIT 1"
- queryRes = client.query(bq_columns_query).result()
- schema = queryRes.schema
+ bq_columns_query = f"SELECT * FROM ({self.query}) LIMIT 0"
+ query_res = client.query(bq_columns_query).result()
+ schema = query_res.schema
name_type_pairs: List[Tuple[str, str]] = []
for field in schema:
| {"golden_diff": "diff --git a/sdk/python/feast/infra/offline_stores/bigquery_source.py b/sdk/python/feast/infra/offline_stores/bigquery_source.py\n--- a/sdk/python/feast/infra/offline_stores/bigquery_source.py\n+++ b/sdk/python/feast/infra/offline_stores/bigquery_source.py\n@@ -180,9 +180,9 @@\n if not isinstance(schema[0], bigquery.schema.SchemaField):\n raise TypeError(\"Could not parse BigQuery table schema.\")\n else:\n- bq_columns_query = f\"SELECT * FROM ({self.query}) LIMIT 1\"\n- queryRes = client.query(bq_columns_query).result()\n- schema = queryRes.schema\n+ bq_columns_query = f\"SELECT * FROM ({self.query}) LIMIT 0\"\n+ query_res = client.query(bq_columns_query).result()\n+ schema = query_res.schema\n \n name_type_pairs: List[Tuple[str, str]] = []\n for field in schema:\n", "issue": "Schema inference of BQ OfflineStore costly\n## Expected Behavior \r\nWhen making schema inference, I expect there would be an extra minor cost when scanning tables.\r\n\r\n## Current Behavior\r\nBQ OfflineStore made a full scan on the entire table, although we are using `limit` statement. According to GCP documentation:\r\n`Applying a LIMIT clause to a SELECT * query does not affect the amount of data read. You are billed for reading all bytes in the entire table, and the query counts against your free tier quota.`\r\n\r\nhttps://cloud.google.com/bigquery/docs/best-practices-costs\r\n\r\n## Steps to reproduce\r\n1. Prepare a large feature table\r\n2. feast apply as a feature view\r\n\r\n### Specifications\r\n\r\n- Version: 0.28.0\r\n- Platform: Linux\r\n- Subsystem:\r\n\r\n## Possible Solution\r\nWe can add a filter by `timestamp_field` by modifying this [line](https://github.com/feast-dev/feast/blob/master/sdk/python/feast/infra/offline_stores/bigquery_source.py#L183) even though there is no data, the schema could be inferred eventually\n", "before_files": [{"content": "from typing import Callable, Dict, Iterable, List, Optional, Tuple\n\nfrom typeguard import typechecked\n\nfrom feast import type_map\nfrom feast.data_source import DataSource\nfrom feast.errors import DataSourceNoNameException, DataSourceNotFoundException\nfrom feast.feature_logging import LoggingDestination\nfrom feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n LoggingConfig as LoggingConfigProto,\n)\nfrom feast.protos.feast.core.SavedDataset_pb2 import (\n SavedDatasetStorage as SavedDatasetStorageProto,\n)\nfrom feast.repo_config import RepoConfig\nfrom feast.saved_dataset import SavedDatasetStorage\nfrom feast.usage import get_user_agent\nfrom feast.value_type import ValueType\n\n\n@typechecked\nclass BigQuerySource(DataSource):\n def __init__(\n self,\n *,\n name: Optional[str] = None,\n timestamp_field: Optional[str] = None,\n table: Optional[str] = None,\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n query: Optional[str] = None,\n description: Optional[str] = \"\",\n tags: Optional[Dict[str, str]] = None,\n owner: Optional[str] = \"\",\n ):\n \"\"\"Create a BigQuerySource from an existing table or query.\n\n Args:\n name (optional): Name for the source. Defaults to the table if not specified, in which\n case the table must be specified.\n timestamp_field (optional): Event timestamp field used for point in time\n joins of feature values.\n table (optional): BigQuery table where the features are stored. 
Exactly one of 'table'\n and 'query' must be specified.\n table (optional): The BigQuery table where features can be found.\n created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.\n field_mapping (optional): A dictionary mapping of column names in this data source to feature names in a feature table\n or view. Only used for feature columns, not entities or timestamp columns.\n query (optional): The query to be executed to obtain the features. Exactly one of 'table'\n and 'query' must be specified.\n description (optional): A human-readable description.\n tags (optional): A dictionary of key-value pairs to store arbitrary metadata.\n owner (optional): The owner of the bigquery source, typically the email of the primary\n maintainer.\n Example:\n >>> from feast import BigQuerySource\n >>> my_bigquery_source = BigQuerySource(table=\"gcp_project:bq_dataset.bq_table\")\n \"\"\"\n if table is None and query is None:\n raise ValueError('No \"table\" or \"query\" argument provided.')\n\n self.bigquery_options = BigQueryOptions(table=table, query=query)\n\n # If no name, use the table as the default name.\n if name is None and table is None:\n raise DataSourceNoNameException()\n name = name or table\n assert name\n\n super().__init__(\n name=name,\n timestamp_field=timestamp_field,\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n description=description,\n tags=tags,\n owner=owner,\n )\n\n # Note: Python requires redefining hash in child classes that override __eq__\n def __hash__(self):\n return super().__hash__()\n\n def __eq__(self, other):\n if not isinstance(other, BigQuerySource):\n raise TypeError(\n \"Comparisons should only involve BigQuerySource class objects.\"\n )\n\n return (\n super().__eq__(other)\n and self.table == other.table\n and self.query == other.query\n )\n\n @property\n def table(self):\n return self.bigquery_options.table\n\n @property\n def query(self):\n return self.bigquery_options.query\n\n @staticmethod\n def from_proto(data_source: DataSourceProto):\n assert data_source.HasField(\"bigquery_options\")\n\n return BigQuerySource(\n name=data_source.name,\n field_mapping=dict(data_source.field_mapping),\n table=data_source.bigquery_options.table,\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n query=data_source.bigquery_options.query,\n description=data_source.description,\n tags=dict(data_source.tags),\n owner=data_source.owner,\n )\n\n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n name=self.name,\n type=DataSourceProto.BATCH_BIGQUERY,\n field_mapping=self.field_mapping,\n bigquery_options=self.bigquery_options.to_proto(),\n description=self.description,\n tags=self.tags,\n owner=self.owner,\n timestamp_field=self.timestamp_field,\n created_timestamp_column=self.created_timestamp_column,\n )\n\n return data_source_proto\n\n def validate(self, config: RepoConfig):\n if not self.query:\n from google.api_core.exceptions import NotFound\n from google.cloud import bigquery\n\n client = bigquery.Client()\n try:\n client.get_table(self.table)\n except NotFound:\n raise DataSourceNotFoundException(self.table)\n\n def get_table_query_string(self) -> str:\n \"\"\"Returns a string that can directly be used to reference this table in SQL\"\"\"\n if self.table:\n return f\"`{self.table}`\"\n else:\n return f\"({self.query})\"\n\n @staticmethod\n def source_datatype_to_feast_value_type() -> 
Callable[[str], ValueType]:\n return type_map.bq_to_feast_value_type\n\n def get_table_column_names_and_types(\n self, config: RepoConfig\n ) -> Iterable[Tuple[str, str]]:\n try:\n from google.api_core import client_info as http_client_info\n except ImportError as e:\n from feast.errors import FeastExtrasDependencyImportError\n\n raise FeastExtrasDependencyImportError(\"gcp\", str(e))\n\n from google.cloud import bigquery\n\n project_id = (\n config.offline_store.billing_project_id or config.offline_store.project_id\n )\n client = bigquery.Client(\n project=project_id,\n location=config.offline_store.location,\n client_info=http_client_info.ClientInfo(user_agent=get_user_agent()),\n )\n if self.table:\n schema = client.get_table(self.table).schema\n if not isinstance(schema[0], bigquery.schema.SchemaField):\n raise TypeError(\"Could not parse BigQuery table schema.\")\n else:\n bq_columns_query = f\"SELECT * FROM ({self.query}) LIMIT 1\"\n queryRes = client.query(bq_columns_query).result()\n schema = queryRes.schema\n\n name_type_pairs: List[Tuple[str, str]] = []\n for field in schema:\n bq_type_as_str = field.field_type\n if field.mode == \"REPEATED\":\n bq_type_as_str = \"ARRAY<\" + bq_type_as_str + \">\"\n name_type_pairs.append((field.name, bq_type_as_str))\n\n return name_type_pairs\n\n\nclass BigQueryOptions:\n \"\"\"\n Configuration options for a BigQuery data source.\n \"\"\"\n\n def __init__(\n self,\n table: Optional[str],\n query: Optional[str],\n ):\n self.table = table or \"\"\n self.query = query or \"\"\n\n @classmethod\n def from_proto(cls, bigquery_options_proto: DataSourceProto.BigQueryOptions):\n \"\"\"\n Creates a BigQueryOptions from a protobuf representation of a BigQuery option\n\n Args:\n bigquery_options_proto: A protobuf representation of a DataSource\n\n Returns:\n Returns a BigQueryOptions object based on the bigquery_options protobuf\n \"\"\"\n bigquery_options = cls(\n table=bigquery_options_proto.table,\n query=bigquery_options_proto.query,\n )\n\n return bigquery_options\n\n def to_proto(self) -> DataSourceProto.BigQueryOptions:\n \"\"\"\n Converts an BigQueryOptionsProto object to its protobuf representation.\n\n Returns:\n BigQueryOptionsProto protobuf\n \"\"\"\n bigquery_options_proto = DataSourceProto.BigQueryOptions(\n table=self.table,\n query=self.query,\n )\n\n return bigquery_options_proto\n\n\nclass SavedDatasetBigQueryStorage(SavedDatasetStorage):\n _proto_attr_name = \"bigquery_storage\"\n\n bigquery_options: BigQueryOptions\n\n def __init__(self, table: str):\n self.bigquery_options = BigQueryOptions(table=table, query=None)\n\n @staticmethod\n def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:\n return SavedDatasetBigQueryStorage(\n table=BigQueryOptions.from_proto(storage_proto.bigquery_storage).table\n )\n\n def to_proto(self) -> SavedDatasetStorageProto:\n return SavedDatasetStorageProto(\n bigquery_storage=self.bigquery_options.to_proto()\n )\n\n def to_data_source(self) -> DataSource:\n return BigQuerySource(table=self.bigquery_options.table)\n\n\nclass BigQueryLoggingDestination(LoggingDestination):\n _proto_kind = \"bigquery_destination\"\n\n table: str\n\n def __init__(self, *, table_ref):\n self.table = table_ref\n\n @classmethod\n def from_proto(cls, config_proto: LoggingConfigProto) -> \"LoggingDestination\":\n return BigQueryLoggingDestination(\n table_ref=config_proto.bigquery_destination.table_ref,\n )\n\n def to_data_source(self) -> DataSource:\n return BigQuerySource(table=self.table)\n\n def 
to_proto(self) -> LoggingConfigProto:\n return LoggingConfigProto(\n bigquery_destination=LoggingConfigProto.BigQueryDestination(\n table_ref=self.table\n )\n )\n", "path": "sdk/python/feast/infra/offline_stores/bigquery_source.py"}], "after_files": [{"content": "from typing import Callable, Dict, Iterable, List, Optional, Tuple\n\nfrom typeguard import typechecked\n\nfrom feast import type_map\nfrom feast.data_source import DataSource\nfrom feast.errors import DataSourceNoNameException, DataSourceNotFoundException\nfrom feast.feature_logging import LoggingDestination\nfrom feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n LoggingConfig as LoggingConfigProto,\n)\nfrom feast.protos.feast.core.SavedDataset_pb2 import (\n SavedDatasetStorage as SavedDatasetStorageProto,\n)\nfrom feast.repo_config import RepoConfig\nfrom feast.saved_dataset import SavedDatasetStorage\nfrom feast.usage import get_user_agent\nfrom feast.value_type import ValueType\n\n\n@typechecked\nclass BigQuerySource(DataSource):\n def __init__(\n self,\n *,\n name: Optional[str] = None,\n timestamp_field: Optional[str] = None,\n table: Optional[str] = None,\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n query: Optional[str] = None,\n description: Optional[str] = \"\",\n tags: Optional[Dict[str, str]] = None,\n owner: Optional[str] = \"\",\n ):\n \"\"\"Create a BigQuerySource from an existing table or query.\n\n Args:\n name (optional): Name for the source. Defaults to the table if not specified, in which\n case the table must be specified.\n timestamp_field (optional): Event timestamp field used for point in time\n joins of feature values.\n table (optional): BigQuery table where the features are stored. Exactly one of 'table'\n and 'query' must be specified.\n table (optional): The BigQuery table where features can be found.\n created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.\n field_mapping (optional): A dictionary mapping of column names in this data source to feature names in a feature table\n or view. Only used for feature columns, not entities or timestamp columns.\n query (optional): The query to be executed to obtain the features. 
Exactly one of 'table'\n and 'query' must be specified.\n description (optional): A human-readable description.\n tags (optional): A dictionary of key-value pairs to store arbitrary metadata.\n owner (optional): The owner of the bigquery source, typically the email of the primary\n maintainer.\n Example:\n >>> from feast import BigQuerySource\n >>> my_bigquery_source = BigQuerySource(table=\"gcp_project:bq_dataset.bq_table\")\n \"\"\"\n if table is None and query is None:\n raise ValueError('No \"table\" or \"query\" argument provided.')\n\n self.bigquery_options = BigQueryOptions(table=table, query=query)\n\n # If no name, use the table as the default name.\n if name is None and table is None:\n raise DataSourceNoNameException()\n name = name or table\n assert name\n\n super().__init__(\n name=name,\n timestamp_field=timestamp_field,\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n description=description,\n tags=tags,\n owner=owner,\n )\n\n # Note: Python requires redefining hash in child classes that override __eq__\n def __hash__(self):\n return super().__hash__()\n\n def __eq__(self, other):\n if not isinstance(other, BigQuerySource):\n raise TypeError(\n \"Comparisons should only involve BigQuerySource class objects.\"\n )\n\n return (\n super().__eq__(other)\n and self.table == other.table\n and self.query == other.query\n )\n\n @property\n def table(self):\n return self.bigquery_options.table\n\n @property\n def query(self):\n return self.bigquery_options.query\n\n @staticmethod\n def from_proto(data_source: DataSourceProto):\n assert data_source.HasField(\"bigquery_options\")\n\n return BigQuerySource(\n name=data_source.name,\n field_mapping=dict(data_source.field_mapping),\n table=data_source.bigquery_options.table,\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n query=data_source.bigquery_options.query,\n description=data_source.description,\n tags=dict(data_source.tags),\n owner=data_source.owner,\n )\n\n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n name=self.name,\n type=DataSourceProto.BATCH_BIGQUERY,\n field_mapping=self.field_mapping,\n bigquery_options=self.bigquery_options.to_proto(),\n description=self.description,\n tags=self.tags,\n owner=self.owner,\n timestamp_field=self.timestamp_field,\n created_timestamp_column=self.created_timestamp_column,\n )\n\n return data_source_proto\n\n def validate(self, config: RepoConfig):\n if not self.query:\n from google.api_core.exceptions import NotFound\n from google.cloud import bigquery\n\n client = bigquery.Client()\n try:\n client.get_table(self.table)\n except NotFound:\n raise DataSourceNotFoundException(self.table)\n\n def get_table_query_string(self) -> str:\n \"\"\"Returns a string that can directly be used to reference this table in SQL\"\"\"\n if self.table:\n return f\"`{self.table}`\"\n else:\n return f\"({self.query})\"\n\n @staticmethod\n def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:\n return type_map.bq_to_feast_value_type\n\n def get_table_column_names_and_types(\n self, config: RepoConfig\n ) -> Iterable[Tuple[str, str]]:\n try:\n from google.api_core import client_info as http_client_info\n except ImportError as e:\n from feast.errors import FeastExtrasDependencyImportError\n\n raise FeastExtrasDependencyImportError(\"gcp\", str(e))\n\n from google.cloud import bigquery\n\n project_id = (\n config.offline_store.billing_project_id or 
config.offline_store.project_id\n )\n client = bigquery.Client(\n project=project_id,\n location=config.offline_store.location,\n client_info=http_client_info.ClientInfo(user_agent=get_user_agent()),\n )\n if self.table:\n schema = client.get_table(self.table).schema\n if not isinstance(schema[0], bigquery.schema.SchemaField):\n raise TypeError(\"Could not parse BigQuery table schema.\")\n else:\n bq_columns_query = f\"SELECT * FROM ({self.query}) LIMIT 0\"\n query_res = client.query(bq_columns_query).result()\n schema = query_res.schema\n\n name_type_pairs: List[Tuple[str, str]] = []\n for field in schema:\n bq_type_as_str = field.field_type\n if field.mode == \"REPEATED\":\n bq_type_as_str = \"ARRAY<\" + bq_type_as_str + \">\"\n name_type_pairs.append((field.name, bq_type_as_str))\n\n return name_type_pairs\n\n\nclass BigQueryOptions:\n \"\"\"\n Configuration options for a BigQuery data source.\n \"\"\"\n\n def __init__(\n self,\n table: Optional[str],\n query: Optional[str],\n ):\n self.table = table or \"\"\n self.query = query or \"\"\n\n @classmethod\n def from_proto(cls, bigquery_options_proto: DataSourceProto.BigQueryOptions):\n \"\"\"\n Creates a BigQueryOptions from a protobuf representation of a BigQuery option\n\n Args:\n bigquery_options_proto: A protobuf representation of a DataSource\n\n Returns:\n Returns a BigQueryOptions object based on the bigquery_options protobuf\n \"\"\"\n bigquery_options = cls(\n table=bigquery_options_proto.table,\n query=bigquery_options_proto.query,\n )\n\n return bigquery_options\n\n def to_proto(self) -> DataSourceProto.BigQueryOptions:\n \"\"\"\n Converts an BigQueryOptionsProto object to its protobuf representation.\n\n Returns:\n BigQueryOptionsProto protobuf\n \"\"\"\n bigquery_options_proto = DataSourceProto.BigQueryOptions(\n table=self.table,\n query=self.query,\n )\n\n return bigquery_options_proto\n\n\nclass SavedDatasetBigQueryStorage(SavedDatasetStorage):\n _proto_attr_name = \"bigquery_storage\"\n\n bigquery_options: BigQueryOptions\n\n def __init__(self, table: str):\n self.bigquery_options = BigQueryOptions(table=table, query=None)\n\n @staticmethod\n def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:\n return SavedDatasetBigQueryStorage(\n table=BigQueryOptions.from_proto(storage_proto.bigquery_storage).table\n )\n\n def to_proto(self) -> SavedDatasetStorageProto:\n return SavedDatasetStorageProto(\n bigquery_storage=self.bigquery_options.to_proto()\n )\n\n def to_data_source(self) -> DataSource:\n return BigQuerySource(table=self.bigquery_options.table)\n\n\nclass BigQueryLoggingDestination(LoggingDestination):\n _proto_kind = \"bigquery_destination\"\n\n table: str\n\n def __init__(self, *, table_ref):\n self.table = table_ref\n\n @classmethod\n def from_proto(cls, config_proto: LoggingConfigProto) -> \"LoggingDestination\":\n return BigQueryLoggingDestination(\n table_ref=config_proto.bigquery_destination.table_ref,\n )\n\n def to_data_source(self) -> DataSource:\n return BigQuerySource(table=self.table)\n\n def to_proto(self) -> LoggingConfigProto:\n return LoggingConfigProto(\n bigquery_destination=LoggingConfigProto.BigQueryDestination(\n table_ref=self.table\n )\n )\n", "path": "sdk/python/feast/infra/offline_stores/bigquery_source.py"}]} | 3,306 | 217 |
gh_patches_debug_1399 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-648 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Logo
qutebrowser still needs a logo!
Some random ideas:
- `qutebrowser` in some "cute" (fur?) font
- A `q` which is composed of a globe (because browsers need a globe) and a snake "hanging" around it. Ideally with either the snake or the globe being cute. :grin:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/misc/split.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2015 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Our own fork of shlex.split with some added and removed features."""
21
22 import re
23
24 from qutebrowser.utils import log
25
26
27 class ShellLexer:
28
29 """A lexical analyzer class for simple shell-like syntaxes.
30
31 Based on Python's shlex, but cleaned up, removed some features, and added
32 some features useful for qutebrowser.
33
34 Attributes:
35 FIXME
36 """
37
38 def __init__(self, s):
39 self.string = s
40 self.whitespace = ' \t\r'
41 self.quotes = '\'"'
42 self.escape = '\\'
43 self.escapedquotes = '"'
44 self.keep = False
45 self.quoted = None
46 self.escapedstate = None
47 self.token = None
48 self.state = None
49 self.reset()
50
51 def reset(self):
52 """Reset the state machine state to the defaults."""
53 self.quoted = False
54 self.escapedstate = ' '
55 self.token = ''
56 self.state = ' '
57
58 def __iter__(self): # noqa
59 """Read a raw token from the input stream."""
60 # pylint: disable=too-many-branches,too-many-statements
61 self.reset()
62 for nextchar in self.string:
63 if self.state == ' ':
64 if self.keep:
65 self.token += nextchar
66 if nextchar in self.whitespace:
67 if self.token or self.quoted:
68 yield self.token
69 self.reset()
70 elif nextchar in self.escape:
71 self.escapedstate = 'a'
72 self.state = nextchar
73 elif nextchar in self.quotes:
74 self.state = nextchar
75 else:
76 self.token = nextchar
77 self.state = 'a'
78 elif self.state in self.quotes:
79 self.quoted = True
80 if nextchar == self.state:
81 if self.keep:
82 self.token += nextchar
83 self.state = 'a'
84 elif (nextchar in self.escape and
85 self.state in self.escapedquotes):
86 if self.keep:
87 self.token += nextchar
88 self.escapedstate = self.state
89 self.state = nextchar
90 else:
91 self.token += nextchar
92 elif self.state in self.escape:
93 # In posix shells, only the quote itself or the escape
94 # character may be escaped within quotes.
95 if (self.escapedstate in self.quotes and
96 nextchar != self.state and
97 nextchar != self.escapedstate and not self.keep):
98 self.token += self.state
99 self.token += nextchar
100 self.state = self.escapedstate
101 elif self.state == 'a':
102 if nextchar in self.whitespace:
103 self.state = ' '
104 if self.token or self.quoted:
105 yield self.token
106 self.reset()
107 if self.keep:
108 yield nextchar
109 elif nextchar in self.quotes:
110 if self.keep:
111 self.token += nextchar
112 self.state = nextchar
113 elif nextchar in self.escape:
114 if self.keep:
115 self.token += nextchar
116 self.escapedstate = 'a'
117 self.state = nextchar
118 else:
119 self.token += nextchar
120 if self.state in self.escape and not self.keep:
121 self.token += self.state
122 if self.token or self.quoted:
123 yield self.token
124
125
126 def split(s, keep=False):
127 """Split a string via ShellLexer.
128
129 Args:
130 keep: Whether to keep are special chars in the split output.
131 """
132 lexer = ShellLexer(s)
133 lexer.keep = keep
134 tokens = list(lexer)
135 if not tokens:
136 return []
137 out = []
138 spaces = ""
139
140 log.shlexer.vdebug("{!r} -> {!r}".format(s, tokens))
141
142 for t in tokens:
143 if t.isspace():
144 spaces += t
145 else:
146 out.append(spaces + t)
147 spaces = ""
148 if spaces:
149 out.append(spaces)
150
151 return out
152
153
154 def _combine_ws(parts, whitespace):
155 """Combine whitespace in a list with the element following it.
156
157 Args:
158 parts: A list of strings.
159 whitespace: A string containing what's considered whitespace.
160
161 Return:
162 The modified list.
163 """
164 out = []
165 ws = ''
166 for part in parts:
167 if not part:
168 continue
169 elif part in whitespace:
170 ws += part
171 else:
172 out.append(ws + part)
173 ws = ''
174 if ws:
175 out.append(ws)
176 return out
177
178
179 def simple_split(s, keep=False, maxsplit=None):
180 """Split a string on whitespace, optionally keeping the whitespace.
181
182 Args:
183 s: The string to split.
184 keep: Whether to keep whitespace.
185 maxsplit: The maximum count of splits.
186
187 Return:
188 A list of split strings.
189 """
190 whitespace = '\n\t '
191 if maxsplit == 0:
192 # re.split with maxsplit=0 splits everything, while str.split splits
193 # nothing (which is the behavior we want).
194 if keep:
195 return [s]
196 else:
197 return [s.strip(whitespace)]
198 elif maxsplit is None:
199 maxsplit = 0
200
201 if keep:
202 pattern = '([' + whitespace + '])'
203 parts = re.split(pattern, s, maxsplit)
204 return _combine_ws(parts, whitespace)
205 else:
206 pattern = '[' + whitespace + ']'
207 parts = re.split(pattern, s, maxsplit)
208 parts[-1] = parts[-1].rstrip()
209 return [p for p in parts if p]
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qutebrowser/misc/split.py b/qutebrowser/misc/split.py
--- a/qutebrowser/misc/split.py
+++ b/qutebrowser/misc/split.py
@@ -127,7 +127,7 @@
"""Split a string via ShellLexer.
Args:
- keep: Whether to keep are special chars in the split output.
+ keep: Whether to keep special chars in the split output.
"""
lexer = ShellLexer(s)
lexer.keep = keep
| {"golden_diff": "diff --git a/qutebrowser/misc/split.py b/qutebrowser/misc/split.py\n--- a/qutebrowser/misc/split.py\n+++ b/qutebrowser/misc/split.py\n@@ -127,7 +127,7 @@\n \"\"\"Split a string via ShellLexer.\n \n Args:\n- keep: Whether to keep are special chars in the split output.\n+ keep: Whether to keep special chars in the split output.\n \"\"\"\n lexer = ShellLexer(s)\n lexer.keep = keep\n", "issue": "Logo\nqutebrowser still needs a logo!\n\nSome random ideas:\n- `qutebrowser` in some \"cute\" (fur?) font\n- A `q` which is composed of a globe (because browsers need a globe) and a snake \"hanging\" around it. Ideally with either the snake or the globe being cute. :grin:\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2015 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Our own fork of shlex.split with some added and removed features.\"\"\"\n\nimport re\n\nfrom qutebrowser.utils import log\n\n\nclass ShellLexer:\n\n \"\"\"A lexical analyzer class for simple shell-like syntaxes.\n\n Based on Python's shlex, but cleaned up, removed some features, and added\n some features useful for qutebrowser.\n\n Attributes:\n FIXME\n \"\"\"\n\n def __init__(self, s):\n self.string = s\n self.whitespace = ' \\t\\r'\n self.quotes = '\\'\"'\n self.escape = '\\\\'\n self.escapedquotes = '\"'\n self.keep = False\n self.quoted = None\n self.escapedstate = None\n self.token = None\n self.state = None\n self.reset()\n\n def reset(self):\n \"\"\"Reset the state machine state to the defaults.\"\"\"\n self.quoted = False\n self.escapedstate = ' '\n self.token = ''\n self.state = ' '\n\n def __iter__(self): # noqa\n \"\"\"Read a raw token from the input stream.\"\"\"\n # pylint: disable=too-many-branches,too-many-statements\n self.reset()\n for nextchar in self.string:\n if self.state == ' ':\n if self.keep:\n self.token += nextchar\n if nextchar in self.whitespace:\n if self.token or self.quoted:\n yield self.token\n self.reset()\n elif nextchar in self.escape:\n self.escapedstate = 'a'\n self.state = nextchar\n elif nextchar in self.quotes:\n self.state = nextchar\n else:\n self.token = nextchar\n self.state = 'a'\n elif self.state in self.quotes:\n self.quoted = True\n if nextchar == self.state:\n if self.keep:\n self.token += nextchar\n self.state = 'a'\n elif (nextchar in self.escape and\n self.state in self.escapedquotes):\n if self.keep:\n self.token += nextchar\n self.escapedstate = self.state\n self.state = nextchar\n else:\n self.token += nextchar\n elif self.state in self.escape:\n # In posix shells, only the quote itself or the escape\n # character may be escaped within quotes.\n if (self.escapedstate in self.quotes and\n nextchar != self.state and\n nextchar != self.escapedstate and not self.keep):\n self.token += self.state\n self.token += nextchar\n 
self.state = self.escapedstate\n elif self.state == 'a':\n if nextchar in self.whitespace:\n self.state = ' '\n if self.token or self.quoted:\n yield self.token\n self.reset()\n if self.keep:\n yield nextchar\n elif nextchar in self.quotes:\n if self.keep:\n self.token += nextchar\n self.state = nextchar\n elif nextchar in self.escape:\n if self.keep:\n self.token += nextchar\n self.escapedstate = 'a'\n self.state = nextchar\n else:\n self.token += nextchar\n if self.state in self.escape and not self.keep:\n self.token += self.state\n if self.token or self.quoted:\n yield self.token\n\n\ndef split(s, keep=False):\n \"\"\"Split a string via ShellLexer.\n\n Args:\n keep: Whether to keep are special chars in the split output.\n \"\"\"\n lexer = ShellLexer(s)\n lexer.keep = keep\n tokens = list(lexer)\n if not tokens:\n return []\n out = []\n spaces = \"\"\n\n log.shlexer.vdebug(\"{!r} -> {!r}\".format(s, tokens))\n\n for t in tokens:\n if t.isspace():\n spaces += t\n else:\n out.append(spaces + t)\n spaces = \"\"\n if spaces:\n out.append(spaces)\n\n return out\n\n\ndef _combine_ws(parts, whitespace):\n \"\"\"Combine whitespace in a list with the element following it.\n\n Args:\n parts: A list of strings.\n whitespace: A string containing what's considered whitespace.\n\n Return:\n The modified list.\n \"\"\"\n out = []\n ws = ''\n for part in parts:\n if not part:\n continue\n elif part in whitespace:\n ws += part\n else:\n out.append(ws + part)\n ws = ''\n if ws:\n out.append(ws)\n return out\n\n\ndef simple_split(s, keep=False, maxsplit=None):\n \"\"\"Split a string on whitespace, optionally keeping the whitespace.\n\n Args:\n s: The string to split.\n keep: Whether to keep whitespace.\n maxsplit: The maximum count of splits.\n\n Return:\n A list of split strings.\n \"\"\"\n whitespace = '\\n\\t '\n if maxsplit == 0:\n # re.split with maxsplit=0 splits everything, while str.split splits\n # nothing (which is the behavior we want).\n if keep:\n return [s]\n else:\n return [s.strip(whitespace)]\n elif maxsplit is None:\n maxsplit = 0\n\n if keep:\n pattern = '([' + whitespace + '])'\n parts = re.split(pattern, s, maxsplit)\n return _combine_ws(parts, whitespace)\n else:\n pattern = '[' + whitespace + ']'\n parts = re.split(pattern, s, maxsplit)\n parts[-1] = parts[-1].rstrip()\n return [p for p in parts if p]\n", "path": "qutebrowser/misc/split.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2015 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Our own fork of shlex.split with some added and removed features.\"\"\"\n\nimport re\n\nfrom qutebrowser.utils import log\n\n\nclass ShellLexer:\n\n \"\"\"A lexical analyzer class for simple shell-like syntaxes.\n\n Based on Python's shlex, but cleaned up, removed some features, and added\n some features useful for qutebrowser.\n\n Attributes:\n FIXME\n \"\"\"\n\n def __init__(self, s):\n self.string = s\n self.whitespace = ' \\t\\r'\n self.quotes = '\\'\"'\n self.escape = '\\\\'\n self.escapedquotes = '\"'\n self.keep = False\n self.quoted = None\n self.escapedstate = None\n self.token = None\n self.state = None\n self.reset()\n\n def reset(self):\n \"\"\"Reset the state machine state to the defaults.\"\"\"\n self.quoted = False\n self.escapedstate = ' '\n self.token = ''\n self.state = ' '\n\n def __iter__(self): # noqa\n \"\"\"Read a raw token from the input stream.\"\"\"\n # pylint: disable=too-many-branches,too-many-statements\n self.reset()\n for nextchar in self.string:\n if self.state == ' ':\n if self.keep:\n self.token += nextchar\n if nextchar in self.whitespace:\n if self.token or self.quoted:\n yield self.token\n self.reset()\n elif nextchar in self.escape:\n self.escapedstate = 'a'\n self.state = nextchar\n elif nextchar in self.quotes:\n self.state = nextchar\n else:\n self.token = nextchar\n self.state = 'a'\n elif self.state in self.quotes:\n self.quoted = True\n if nextchar == self.state:\n if self.keep:\n self.token += nextchar\n self.state = 'a'\n elif (nextchar in self.escape and\n self.state in self.escapedquotes):\n if self.keep:\n self.token += nextchar\n self.escapedstate = self.state\n self.state = nextchar\n else:\n self.token += nextchar\n elif self.state in self.escape:\n # In posix shells, only the quote itself or the escape\n # character may be escaped within quotes.\n if (self.escapedstate in self.quotes and\n nextchar != self.state and\n nextchar != self.escapedstate and not self.keep):\n self.token += self.state\n self.token += nextchar\n self.state = self.escapedstate\n elif self.state == 'a':\n if nextchar in self.whitespace:\n self.state = ' '\n if self.token or self.quoted:\n yield self.token\n self.reset()\n if self.keep:\n yield nextchar\n elif nextchar in self.quotes:\n if self.keep:\n self.token += nextchar\n self.state = nextchar\n elif nextchar in self.escape:\n if self.keep:\n self.token += nextchar\n self.escapedstate = 'a'\n self.state = nextchar\n else:\n self.token += nextchar\n if self.state in self.escape and not self.keep:\n self.token += self.state\n if self.token or self.quoted:\n yield self.token\n\n\ndef split(s, keep=False):\n \"\"\"Split a string via ShellLexer.\n\n Args:\n keep: Whether to keep special chars in the split output.\n \"\"\"\n lexer = ShellLexer(s)\n lexer.keep = keep\n tokens = list(lexer)\n if not tokens:\n return []\n out = []\n spaces = \"\"\n\n log.shlexer.vdebug(\"{!r} -> {!r}\".format(s, tokens))\n\n for t in tokens:\n if t.isspace():\n spaces += t\n else:\n out.append(spaces + t)\n spaces = \"\"\n if spaces:\n out.append(spaces)\n\n return out\n\n\ndef _combine_ws(parts, whitespace):\n \"\"\"Combine whitespace in a list with the element following it.\n\n Args:\n parts: A list of strings.\n whitespace: A string containing what's considered whitespace.\n\n Return:\n The modified list.\n \"\"\"\n out = []\n ws = ''\n for part in parts:\n if not part:\n continue\n elif part in whitespace:\n ws += part\n else:\n out.append(ws + part)\n ws = ''\n if ws:\n 
out.append(ws)\n return out\n\n\ndef simple_split(s, keep=False, maxsplit=None):\n \"\"\"Split a string on whitespace, optionally keeping the whitespace.\n\n Args:\n s: The string to split.\n keep: Whether to keep whitespace.\n maxsplit: The maximum count of splits.\n\n Return:\n A list of split strings.\n \"\"\"\n whitespace = '\\n\\t '\n if maxsplit == 0:\n # re.split with maxsplit=0 splits everything, while str.split splits\n # nothing (which is the behavior we want).\n if keep:\n return [s]\n else:\n return [s.strip(whitespace)]\n elif maxsplit is None:\n maxsplit = 0\n\n if keep:\n pattern = '([' + whitespace + '])'\n parts = re.split(pattern, s, maxsplit)\n return _combine_ws(parts, whitespace)\n else:\n pattern = '[' + whitespace + ']'\n parts = re.split(pattern, s, maxsplit)\n parts[-1] = parts[-1].rstrip()\n return [p for p in parts if p]\n", "path": "qutebrowser/misc/split.py"}]} | 2,266 | 111 |
gh_patches_debug_25808 | rasdani/github-patches | git_diff | weecology__retriever-661 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrap retriever_minimum_version value in str() in compile
On line 241 of `compile.py` the call to `value` should be changed to `str(value)`. It is natural for people to think of version numbers as numbers and if they accidentally enter `2.0` instead of `"2.0"` this causes a type failure.
Stop bad scripts from causing errors
Currently when `compile_json` gets run if something goes wrong the retriever errors out even on commands not running the script (e.g., `retriever ls`). What it should do is ignore the bad script and possibly report back that there is an issue with the script but keep running normally otherwise.
--- END ISSUE ---
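To make the type failure concrete, here is a small illustrative snippet — the literal `2.0` stands in for a `retriever_minimum_version` written as a number instead of the string `"2.0"`, and the variable name `value` merely mirrors the one used in `compile.py`; none of this is taken verbatim from the codebase:

```python
value = 2.0  # version entered as a number instead of the string "2.0"

try:
    line = "\"" + value + "\""       # current behaviour: str + float
except TypeError as err:
    print(err)                       # e.g. can only concatenate str (not "float") to str

line = "\"" + str(value) + "\""      # with str(), this yields '"2.0"'
print(line)
```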
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/compile.py`
Content:
```
1 from builtins import str
2 import json
3 import sys
4 if sys.version_info[0] < 3:
5 from codecs import open
6
7 script_templates = {
8 "default": """#retriever
9 from retriever.lib.templates import BasicTextTemplate
10 from retriever.lib.models import Table, Cleanup, correct_invalid_value
11
12 SCRIPT = BasicTextTemplate(%s)""",
13
14 "html_table": """#retriever
15 from retriever.lib.templates import HtmlTableTemplate
16 from retriever.lib.models import Table, Cleanup, correct_invalid_value
17
18 SCRIPT = HtmlTableTemplate(%s)""",
19 }
20
21
22 def compile_script(script_file):
23 definition = open(script_file + ".script", 'r')
24
25 values = {}
26 urls = {}
27 tables = {}
28 last_table = ""
29 replace = []
30 keys_to_ignore = ["template"]
31
32 for line in [line.strip() for line in definition]:
33 if line and ':' in line and not line[0] == '#':
34 split_line = [a.strip() for a in line.split(":")]
35 key = split_line[0].lower()
36 value = ':'.join(split_line[1:])
37 if key == "table":
38 table_name = value.split(',')[0].strip()
39 last_table = table_name
40 table_url = ','.join(value.split(',')[1:]).strip()
41 urls[table_name] = table_url
42 if replace:
43 try:
44 tables[last_table]
45 except:
46 tables[table_name] = {'replace_columns': str(replace)}
47 elif key == "*nulls":
48 if last_table:
49 nulls = [eval(v) for v in [v.strip()
50 for v in value.split(',')]]
51 try:
52 tables[last_table]
53 except KeyError:
54 if replace:
55 tables[last_table] = {'replace_columns': str(replace)}
56 else:
57 tables[last_table] = {}
58 tables[last_table]['cleanup'] = "Cleanup(correct_invalid_value, nulls=" + str(nulls) + ")"
59 elif key == "replace":
60 replace = [(v.split(',')[0].strip(), v.split(',')[1].strip())
61 for v in [v.strip() for v in value.split(';')]]
62 elif key == "tags":
63 values["tags"] = [v.strip() for v in value.split(',')]
64 elif key == "*ct_names":
65 tables[last_table]["ct_names"] = [v.strip()
66 for v in value.split(',')]
67 elif key == "*column":
68 if last_table:
69 vs = [v.strip() for v in value.split(',')]
70 column = [(vs[0], (vs[1], vs[2]) if len(vs) > 2 else (vs[1],))]
71 try:
72 tables[last_table]
73 except KeyError:
74 tables[last_table] = {}
75
76 try:
77 tables[last_table]['columns'] += column
78 except KeyError:
79 tables[last_table]['columns'] = column
80 elif key[0] == "*":
81 # attribute that should be applied to the most recently
82 # declared table
83 if key[0] == "*":
84 key = key[1:]
85 if last_table:
86 try:
87 tables[last_table]
88 except KeyError:
89 tables[last_table] = {}
90
91 try:
92 e = eval(value)
93 except:
94 e = str(value)
95
96 tables[last_table][key] = "'" + str(e) + "'"
97 else:
98 # general script attributes
99 values[key] = '"' + value + '"'
100
101 if 'shortname' not in list(values.keys()):
102 try:
103 values['shortname'] = values['name']
104 except:
105 pass
106 values['urls'] = str(urls)
107
108 def get_value(key):
109 try:
110 return values[key]
111 except KeyError:
112 return ""
113
114 table_desc = "{"
115 for (key, value) in list(tables.items()):
116 table_desc += "'" + key + "': Table('" + key + "', "
117 table_desc += ','.join([key + "=" + str(value)
118 for key, value, in list(value.items())])
119 table_desc += "),"
120 if table_desc != '{':
121 table_desc = table_desc[:-1]
122 table_desc += "}"
123
124 values['tables'] = table_desc
125
126 script_desc = []
127 for key, value in list(values.items()):
128 if key == "url":
129 key = "ref"
130 if key not in keys_to_ignore:
131 script_desc.append(key + "=" + str(value))
132 script_desc = (',\n' + ' ' * 27).join(script_desc)
133
134 if 'template' in list(values.keys()):
135 template = values["template"]
136 else:
137 template = "default"
138 script_contents = (script_templates[template] % script_desc)
139
140 new_script = open(script_file + '.py', 'w')
141 new_script.write(script_contents)
142 new_script.close()
143
144 definition.close()
145
146
147 def add_dialect(table_dict, table):
148 """
149 Reads dialect key of JSON script and extracts key-value pairs to store them
150 in python script
151
152 Contains properties such 'nulls', delimiter', etc
153 """
154 for (key, val) in table['dialect'].items():
155 # dialect related key-value pairs
156 # copied as is
157 if key == "nulls":
158 table_dict[
159 'cleanup'] = "Cleanup(correct_invalid_value, nulls=" + str(val) + ")"
160
161 elif key == "delimiter":
162 table_dict[key] = "'" + str(val) + "'"
163 else:
164 table_dict[key] = val
165
166
167 def add_schema(table_dict, table):
168 """
169 Reads schema key of JSON script and extracts values to store them in
170 python script
171
172 Contains properties related to table schema, such as 'fields' and cross-tab
173 column name ('ct_column').
174 """
175 for (key, val) in table['schema'].items():
176 # schema related key-value pairs
177
178 if key == "fields":
179 # fields = columns of the table
180
181 # list of column tuples
182 column_list = []
183 for obj in val:
184 # fields is a collection of JSON objects
185 # (similar to a list of dicts in python)
186
187 if "size" in obj:
188 column_list.append((obj["name"],
189 (obj["type"], obj["size"])))
190 else:
191 column_list.append((obj["name"],
192 (obj["type"],)))
193
194 table_dict["columns"] = column_list
195
196 elif key == "ct_column":
197 table_dict[key] = "'" + val + "'"
198
199 else:
200 table_dict[key] = val
201
202
203 def compile_json(json_file):
204 """
205 Function to compile JSON script files to python scripts
206 The scripts are created with `retriever create_json <script_name` using
207 command line
208 """
209 json_object = json.load(open(json_file + ".json", "r"))
210
211 if "retriever" not in json_object.keys():
212 # Compile only files that have retriever key
213 return
214
215 values = {}
216 values['urls'] = {}
217
218 keys_to_ignore = ["template"]
219
220 for (key, value) in json_object.items():
221
222 if key == "title":
223 values["name"] = "\"" + value + "\""
224
225 elif key == "name":
226 values["shortname"] = "\"" + value + "\""
227
228 elif key == "description":
229 values["description"] = "\"" + value + "\""
230
231 elif key == "homepage":
232 values["ref"] = "\"" + value + "\""
233
234 elif key == "citation":
235 values["citation"] = "\"" + value + "\""
236
237 elif key == "keywords":
238 values["tags"] = value
239
240 elif key == "retriever_minimum_version":
241 values["retriever_minimum_version"] = "\"" + value + "\""
242
243 elif key == "resources":
244 # Array of table objects
245 tables = {}
246 for table in value:
247 # Maintain a dict for table keys and values
248 table_dict = {}
249
250 try:
251 values['urls'][table['name']] = table['url']
252 except Exception as e:
253 print(e, "\nError in reading table: " + table)
254 continue
255
256 if table["schema"] == {} and table["dialect"] == {}:
257 continue
258
259 for (t_key, t_val) in table.items():
260
261 if t_key == "dialect":
262 add_dialect(table_dict, table)
263
264 elif t_key == "schema":
265 add_schema(table_dict, table)
266
267 tables[table["name"]] = table_dict
268
269 else:
270 values[key] = value
271
272 # Create a Table object string using the tables dict
273 table_desc = "{"
274 for (key, value) in tables.items():
275 table_desc += "'" + key + "': Table('" + key + "', "
276 table_desc += ','.join([key + "=" + str(value)
277 for key, value, in value.items()])
278 table_desc += "),"
279 if table_desc != '{':
280 table_desc = table_desc[:-1]
281 table_desc += "}"
282
283 values["tables"] = table_desc
284
285 script_desc = []
286 for key, value in values.items():
287 if key not in keys_to_ignore:
288 script_desc.append(key + "=" + str(value))
289 script_desc = (',\n' + ' ' * 27).join(script_desc)
290
291 if 'template' in values.keys():
292 template = values["template"]
293 else:
294 template = "default"
295 script_contents = (script_templates[template] % script_desc)
296
297 new_script = open(json_file + '.py', 'w', encoding='utf-8')
298 new_script.write('# -*- coding: latin-1 -*-\n')
299 new_script.write(script_contents)
300 new_script.close()
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/compile.py b/lib/compile.py
--- a/lib/compile.py
+++ b/lib/compile.py
@@ -92,7 +92,7 @@
tables[last_table][key] = "'" + str(e) + "'"
else:
# general script attributes
- values[key] = '"' + value + '"'
+ values[key] = '"' + str(value) + '"'
if 'shortname' not in list(values.keys()):
try:
@@ -216,19 +216,19 @@
for (key, value) in json_object.items():
if key == "title":
- values["name"] = "\"" + value + "\""
+ values["name"] = "\"" + str(value) + "\""
elif key == "name":
- values["shortname"] = "\"" + value + "\""
+ values["shortname"] = "\"" + str(value) + "\""
elif key == "description":
- values["description"] = "\"" + value + "\""
+ values["description"] = "\"" + str(value) + "\""
elif key == "homepage":
- values["ref"] = "\"" + value + "\""
+ values["ref"] = "\"" + str(value) + "\""
elif key == "citation":
- values["citation"] = "\"" + value + "\""
+ values["citation"] = "\"" + str(value) + "\""
elif key == "keywords":
values["tags"] = value
| {"golden_diff": "diff --git a/lib/compile.py b/lib/compile.py\n--- a/lib/compile.py\n+++ b/lib/compile.py\n@@ -92,7 +92,7 @@\n tables[last_table][key] = \"'\" + str(e) + \"'\"\n else:\n # general script attributes\n- values[key] = '\"' + value + '\"'\n+ values[key] = '\"' + str(value) + '\"'\n \n if 'shortname' not in list(values.keys()):\n try:\n@@ -216,19 +216,19 @@\n for (key, value) in json_object.items():\n \n if key == \"title\":\n- values[\"name\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"name\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"name\":\n- values[\"shortname\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"shortname\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"description\":\n- values[\"description\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"description\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"homepage\":\n- values[\"ref\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"ref\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"citation\":\n- values[\"citation\"] = \"\\\"\" + value + \"\\\"\"\n+ values[\"citation\"] = \"\\\"\" + str(value) + \"\\\"\"\n \n elif key == \"keywords\":\n values[\"tags\"] = value\n", "issue": "Wrap retriever_minimum_version value in str() in compile\nOn line 241 of `compile.py` the call to `value` should be changed to `str(value)`. It is natural for people to think of version numbers as numbers and if they accidentally enter `2.0` instead of `\"2.0\"` this causes a type failure.\n\nStop bad scripts from causing errors\nCurrently when `compile_json` gets run if something goes wrong the retriever errors out even on commands not running the script (e.g., `retriever ls`). What it should do is ignore the bad script and possibly report back that there is an issue with the script but keep running normally otherwise.\n\n", "before_files": [{"content": "from builtins import str\nimport json\nimport sys\nif sys.version_info[0] < 3:\n from codecs import open\n\nscript_templates = {\n \"default\": \"\"\"#retriever\nfrom retriever.lib.templates import BasicTextTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = BasicTextTemplate(%s)\"\"\",\n\n \"html_table\": \"\"\"#retriever\nfrom retriever.lib.templates import HtmlTableTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = HtmlTableTemplate(%s)\"\"\",\n}\n\n\ndef compile_script(script_file):\n definition = open(script_file + \".script\", 'r')\n\n values = {}\n urls = {}\n tables = {}\n last_table = \"\"\n replace = []\n keys_to_ignore = [\"template\"]\n\n for line in [line.strip() for line in definition]:\n if line and ':' in line and not line[0] == '#':\n split_line = [a.strip() for a in line.split(\":\")]\n key = split_line[0].lower()\n value = ':'.join(split_line[1:])\n if key == \"table\":\n table_name = value.split(',')[0].strip()\n last_table = table_name\n table_url = ','.join(value.split(',')[1:]).strip()\n urls[table_name] = table_url\n if replace:\n try:\n tables[last_table]\n except:\n tables[table_name] = {'replace_columns': str(replace)}\n elif key == \"*nulls\":\n if last_table:\n nulls = [eval(v) for v in [v.strip()\n for v in value.split(',')]]\n try:\n tables[last_table]\n except KeyError:\n if replace:\n tables[last_table] = {'replace_columns': str(replace)}\n else:\n tables[last_table] = {}\n tables[last_table]['cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(nulls) + \")\"\n elif key == \"replace\":\n replace = [(v.split(',')[0].strip(), 
v.split(',')[1].strip())\n for v in [v.strip() for v in value.split(';')]]\n elif key == \"tags\":\n values[\"tags\"] = [v.strip() for v in value.split(',')]\n elif key == \"*ct_names\":\n tables[last_table][\"ct_names\"] = [v.strip()\n for v in value.split(',')]\n elif key == \"*column\":\n if last_table:\n vs = [v.strip() for v in value.split(',')]\n column = [(vs[0], (vs[1], vs[2]) if len(vs) > 2 else (vs[1],))]\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n tables[last_table]['columns'] += column\n except KeyError:\n tables[last_table]['columns'] = column\n elif key[0] == \"*\":\n # attribute that should be applied to the most recently\n # declared table\n if key[0] == \"*\":\n key = key[1:]\n if last_table:\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n e = eval(value)\n except:\n e = str(value)\n\n tables[last_table][key] = \"'\" + str(e) + \"'\"\n else:\n # general script attributes\n values[key] = '\"' + value + '\"'\n\n if 'shortname' not in list(values.keys()):\n try:\n values['shortname'] = values['name']\n except:\n pass\n values['urls'] = str(urls)\n\n def get_value(key):\n try:\n return values[key]\n except KeyError:\n return \"\"\n\n table_desc = \"{\"\n for (key, value) in list(tables.items()):\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in list(value.items())])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values['tables'] = table_desc\n\n script_desc = []\n for key, value in list(values.items()):\n if key == \"url\":\n key = \"ref\"\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in list(values.keys()):\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(script_file + '.py', 'w')\n new_script.write(script_contents)\n new_script.close()\n\n definition.close()\n\n\ndef add_dialect(table_dict, table):\n \"\"\"\n Reads dialect key of JSON script and extracts key-value pairs to store them\n in python script\n\n Contains properties such 'nulls', delimiter', etc\n \"\"\"\n for (key, val) in table['dialect'].items():\n # dialect related key-value pairs\n # copied as is\n if key == \"nulls\":\n table_dict[\n 'cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(val) + \")\"\n\n elif key == \"delimiter\":\n table_dict[key] = \"'\" + str(val) + \"'\"\n else:\n table_dict[key] = val\n\n\ndef add_schema(table_dict, table):\n \"\"\"\n Reads schema key of JSON script and extracts values to store them in\n python script\n\n Contains properties related to table schema, such as 'fields' and cross-tab\n column name ('ct_column').\n \"\"\"\n for (key, val) in table['schema'].items():\n # schema related key-value pairs\n\n if key == \"fields\":\n # fields = columns of the table\n\n # list of column tuples\n column_list = []\n for obj in val:\n # fields is a collection of JSON objects\n # (similar to a list of dicts in python)\n\n if \"size\" in obj:\n column_list.append((obj[\"name\"],\n (obj[\"type\"], obj[\"size\"])))\n else:\n column_list.append((obj[\"name\"],\n (obj[\"type\"],)))\n\n table_dict[\"columns\"] = column_list\n\n elif key == \"ct_column\":\n table_dict[key] = \"'\" + val + \"'\"\n\n else:\n table_dict[key] = val\n\n\ndef compile_json(json_file):\n \"\"\"\n 
Function to compile JSON script files to python scripts\n The scripts are created with `retriever create_json <script_name` using\n command line\n \"\"\"\n json_object = json.load(open(json_file + \".json\", \"r\"))\n\n if \"retriever\" not in json_object.keys():\n # Compile only files that have retriever key\n return\n\n values = {}\n values['urls'] = {}\n\n keys_to_ignore = [\"template\"]\n\n for (key, value) in json_object.items():\n\n if key == \"title\":\n values[\"name\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"name\":\n values[\"shortname\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"description\":\n values[\"description\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"homepage\":\n values[\"ref\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"citation\":\n values[\"citation\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"keywords\":\n values[\"tags\"] = value\n\n elif key == \"retriever_minimum_version\":\n values[\"retriever_minimum_version\"] = \"\\\"\" + value + \"\\\"\"\n\n elif key == \"resources\":\n # Array of table objects\n tables = {}\n for table in value:\n # Maintain a dict for table keys and values\n table_dict = {}\n\n try:\n values['urls'][table['name']] = table['url']\n except Exception as e:\n print(e, \"\\nError in reading table: \" + table)\n continue\n\n if table[\"schema\"] == {} and table[\"dialect\"] == {}:\n continue\n\n for (t_key, t_val) in table.items():\n\n if t_key == \"dialect\":\n add_dialect(table_dict, table)\n\n elif t_key == \"schema\":\n add_schema(table_dict, table)\n\n tables[table[\"name\"]] = table_dict\n\n else:\n values[key] = value\n\n # Create a Table object string using the tables dict\n table_desc = \"{\"\n for (key, value) in tables.items():\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in value.items()])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values[\"tables\"] = table_desc\n\n script_desc = []\n for key, value in values.items():\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in values.keys():\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(json_file + '.py', 'w', encoding='utf-8')\n new_script.write('# -*- coding: latin-1 -*-\\n')\n new_script.write(script_contents)\n new_script.close()\n", "path": "lib/compile.py"}], "after_files": [{"content": "from builtins import str\nimport json\nscript_templates = {\n \"default\": \"\"\"#retriever\nfrom retriever.lib.templates import BasicTextTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = BasicTextTemplate(%s)\"\"\",\n\n \"html_table\": \"\"\"#retriever\nfrom retriever.lib.templates import HtmlTableTemplate\nfrom retriever.lib.models import Table, Cleanup, correct_invalid_value\n\nSCRIPT = HtmlTableTemplate(%s)\"\"\",\n}\n\n\ndef compile_script(script_file):\n definition = open(script_file + \".script\", 'r')\n\n values = {}\n urls = {}\n tables = {}\n last_table = \"\"\n replace = []\n keys_to_ignore = [\"template\"]\n\n for line in [line.strip() for line in definition]:\n if line and ':' in line and not line[0] == '#':\n split_line = [a.strip() for a in line.split(\":\")]\n key = split_line[0].lower()\n value = ':'.join(split_line[1:])\n if key == \"table\":\n table_name = 
value.split(',')[0].strip()\n last_table = table_name\n table_url = ','.join(value.split(',')[1:]).strip()\n urls[table_name] = table_url\n if replace:\n try:\n tables[last_table]\n except:\n tables[table_name] = {'replace_columns': str(replace)}\n elif key == \"*nulls\":\n if last_table:\n nulls = [eval(v) for v in [v.strip()\n for v in value.split(',')]]\n try:\n tables[last_table]\n except KeyError:\n if replace:\n tables[last_table] = {'replace_columns': str(replace)}\n else:\n tables[last_table] = {}\n tables[last_table]['cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(nulls) + \")\"\n elif key == \"replace\":\n replace = [(v.split(',')[0].strip(), v.split(',')[1].strip())\n for v in [v.strip() for v in value.split(';')]]\n elif key == \"tags\":\n values[\"tags\"] = [v.strip() for v in value.split(',')]\n elif key == \"*ct_names\":\n tables[last_table][\"ct_names\"] = [v.strip()\n for v in value.split(',')]\n elif key == \"*column\":\n if last_table:\n vs = [v.strip() for v in value.split(',')]\n column = [(vs[0], (vs[1], vs[2]) if len(vs) > 2 else (vs[1],))]\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n tables[last_table]['columns'] += column\n except KeyError:\n tables[last_table]['columns'] = column\n elif key[0] == \"*\":\n # attribute that should be applied to the most recently\n # declared table\n if key[0] == \"*\":\n key = key[1:]\n if last_table:\n try:\n tables[last_table]\n except KeyError:\n tables[last_table] = {}\n\n try:\n e = eval(value)\n except:\n e = str(value)\n\n tables[last_table][key] = \"'\" + str(e) + \"'\"\n else:\n # general script attributes\n values[key] = '\"' + str(value) + '\"'\n\n if 'shortname' not in list(values.keys()):\n try:\n values['shortname'] = values['name']\n except:\n pass\n values['urls'] = str(urls)\n\n def get_value(key):\n try:\n return values[key]\n except KeyError:\n return \"\"\n\n table_desc = \"{\"\n for (key, value) in list(tables.items()):\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in list(value.items())])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values['tables'] = table_desc\n\n script_desc = []\n for key, value in list(values.items()):\n if key == \"url\":\n key = \"ref\"\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in list(values.keys()):\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(script_file + '.py', 'w')\n new_script.write(script_contents)\n new_script.close()\n\n definition.close()\n\n\ndef add_dialect(table_dict, table):\n \"\"\"\n Reads dialect key of JSON script and extracts key-value pairs to store them\n in python script\n\n Contains properties such 'nulls', delimiter', etc\n \"\"\"\n for (key, val) in table['dialect'].items():\n # dialect related key-value pairs\n # copied as is\n if key == \"nulls\":\n table_dict[\n 'cleanup'] = \"Cleanup(correct_invalid_value, nulls=\" + str(val) + \")\"\n\n elif key == \"delimiter\":\n table_dict[key] = \"'\" + str(val) + \"'\"\n else:\n table_dict[key] = val\n\n\ndef add_schema(table_dict, table):\n \"\"\"\n Reads schema key of JSON script and extracts values to store them in\n python script\n\n Contains properties related to table schema, such as 'fields' and cross-tab\n 
column name ('ct_column').\n \"\"\"\n for (key, val) in table['schema'].items():\n # schema related key-value pairs\n\n if key == \"fields\":\n # fields = columns of the table\n\n # list of column tuples\n column_list = []\n for obj in val:\n # fields is a collection of JSON objects\n # (similar to a list of dicts in python)\n\n if \"size\" in obj:\n column_list.append((obj[\"name\"],\n (obj[\"type\"], obj[\"size\"])))\n else:\n column_list.append((obj[\"name\"],\n (obj[\"type\"],)))\n\n table_dict[\"columns\"] = column_list\n\n elif key == \"ct_column\":\n table_dict[key] = \"'\" + val + \"'\"\n\n else:\n table_dict[key] = val\n\n\ndef compile_json(json_file):\n \"\"\"\n Function to compile JSON script files to python scripts\n The scripts are created with `retriever create_json <script_name` using\n command line\n \"\"\"\n json_object = json.load(open(json_file + \".json\", \"r\"))\n\n if \"retriever\" not in json_object.keys():\n # Compile only files that have retriever key\n return\n\n values = {}\n values['urls'] = {}\n\n keys_to_ignore = [\"template\"]\n\n for (key, value) in json_object.items():\n\n if key == \"title\":\n values[\"name\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"name\":\n values[\"shortname\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"description\":\n values[\"description\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"homepage\":\n values[\"ref\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"citation\":\n values[\"citation\"] = \"\\\"\" + str(value) + \"\\\"\"\n\n elif key == \"keywords\":\n values[\"tags\"] = value\n\n elif key == \"resources\":\n # Array of table objects\n tables = {}\n for table in value:\n # Maintain a dict for table keys and values\n table_dict = {}\n\n try:\n values['urls'][table['name']] = table['url']\n except Exception as e:\n print(e, \"\\nError in reading table: \" + table)\n continue\n\n if table[\"schema\"] == {} and table[\"dialect\"] == {}:\n continue\n\n for (t_key, t_val) in table.items():\n\n if t_key == \"dialect\":\n add_dialect(table_dict, table)\n\n elif t_key == \"schema\":\n add_schema(table_dict, table)\n\n tables[table[\"name\"]] = table_dict\n\n else:\n values[key] = value\n\n # Create a Table object string using the tables dict\n table_desc = \"{\"\n for (key, value) in tables.items():\n table_desc += \"'\" + key + \"': Table('\" + key + \"', \"\n table_desc += ','.join([key + \"=\" + str(value)\n for key, value, in value.items()])\n table_desc += \"),\"\n if table_desc != '{':\n table_desc = table_desc[:-1]\n table_desc += \"}\"\n\n values[\"tables\"] = table_desc\n\n script_desc = []\n for key, value in values.items():\n if key not in keys_to_ignore:\n script_desc.append(key + \"=\" + str(value))\n script_desc = (',\\n' + ' ' * 27).join(script_desc)\n\n if 'template' in values.keys():\n template = values[\"template\"]\n else:\n template = \"default\"\n script_contents = (script_templates[template] % script_desc)\n\n new_script = open(json_file + '.py', 'w')\n new_script.write(script_contents)\n new_script.close()\n", "path": "lib/compile.py"}]} | 3,311 | 333 |
gh_patches_debug_19595 | rasdani/github-patches | git_diff | sosreport__sos-212 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
neutron plugin does not have check_enabled
The neutron plugin runs on non-OpenStack systems, because it has no check_enabled method like the other OpenStack plugins.
I'm happy to help (say the word), but this plugin only has the one commit and I'm not sure what its status is.
Also, it raises the following exception because it is running (which it shouldn't) and assumes that network namespaces are enabled.
```
Traceback (most recent call last):
File "/opt/sosreport/embedded/lib/python2.7/site-packages/sos/sosreport.py", line 940, in setup
plug.setup()
File "/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py", line 170, in setup
super(RedHatNeutron, self).setup()
File "/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py", line 49, in setup
self.netns_dumps()
File "/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py", line 80, in netns_dumps
prefix, netid = nsname.split('-', 1)
ValueError: need more than 1 value to unpack
```
```
# ip netns
Object "netns" is unknown, try "ip help".
# cat /etc/redhat-release
CentOS release 6.3 (Final)
```
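For reference, the ValueError above comes from assuming every line of `ip netns` output has the `<prefix>-<id>` shape. A minimal, hypothetical sketch of a defensive parse in plain Python (illustrative names only, not the sos Plugin API) could look like:
```python
# Sketch only: filter `ip netns` output defensively so that error text such as
# 'Object "netns" is unknown, try "ip help".' is ignored instead of crashing.
def parse_netns_names(output, prefixes=("qdhcp", "qrouter")):
    namespaces = []
    for line in output.splitlines():
        name = line.strip()
        if "-" not in name:
            # error messages / unexpected output have no <prefix>-<id> shape
            continue
        prefix, netid = name.split("-", 1)
        if prefix in prefixes and netid:
            namespaces.append((prefix, netid))
    return namespaces

print(parse_netns_names('Object "netns" is unknown, try "ip help".'))  # -> []
print(parse_netns_names("qdhcp-1234\nqrouter-abcd"))  # -> [('qdhcp', '1234'), ('qrouter', 'abcd')]
```
Independently of that, a check_enabled() hook that tests for the relevant packages (the plugin API's is_installed() helper is one way to do this) would keep the plugin from running on non-OpenStack hosts in the first place.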
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sos/plugins/neutron.py`
Content:
```
1 ## Copyright (C) 2013 Red Hat, Inc., Brent Eagles <[email protected]>
2
3 ### This program is free software; you can redistribute it and/or modify
4 ## it under the terms of the GNU General Public License as published by
5 ## the Free Software Foundation; either version 2 of the License, or
6 ## (at your option) any later version.
7
8 ## This program is distributed in the hope that it will be useful,
9 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
10 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 ## GNU General Public License for more details.
12
13 ## You should have received a copy of the GNU General Public License
14 ## along with this program; if not, write to the Free Software
15 ## Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
16
17 import os
18 import re
19
20 from sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin
21
22 # The Networking plugin includes most of what is needed from a snapshot
23 # of the networking, so we only need to focus on the parts that are specific
24 # to OpenStack Networking. The Process plugin should capture the dnsmasq
25 # command line. The libvirt plugin grabs the instance's XML definition which
26 # has the interface names for an instance. So what remains is relevant database
27 # info...
28
29 class Neutron(Plugin):
30 """OpenStack Networking (quantum/neutron) related information
31 """
32 plugin_name = "neutron"
33
34 option_list = [("log", "Gathers all Neutron logs", "slow", False),
35 ("quantum", "Overrides checks for newer Neutron components",
36 "fast", False)]
37
38 component_name = "neutron"
39
40 def setup(self):
41 if os.path.exists("/etc/neutron/") and self.get_option("quantum", False):
42 self.component_name = self.plugin_name
43 else:
44 self.component_name = "quantum"
45
46 self.add_copy_specs(["/etc/%s/" % self.component_name,
47 "/var/log/%s/" % self.component_name])
48
49 self.netns_dumps()
50 self.get_ovs_dumps()
51
52
53 def get_ovs_dumps(self):
54 # Check to see if we are using the Open vSwitch plugin. If not we
55 # should be able to skip the rest of the dump.
56 ovs_conf_check_out = self.call_ext_prog('grep "^core_plugin.*openvswitch" ' +
57 ("/etc/%s/*.conf" + self.component_name))
58
59 if not ovs_conf_check_out or len(ovs_conf_check_out[1].splitlines()) == 0:
60 return
61
62 # The '-s' option enables dumping of packet counters on the
63 # ports.
64 self.add_cmd_output("ovs-dpctl -s show")
65
66 # The '-t 5' adds an upper bound on how long to wait to connect
67 # to the Open vSwitch server, avoiding hangs when running sosreport.
68 self.add_cmd_output("ovs-vsctl -t 5 show")
69
70 def netns_dumps(self):
71 # It would've been beautiful if we could get parts of the networking
72 # plugin to run in different namespaces. There are a couple of options
73 # in the short term: create a local instance and "borrow" some of the
74 # functionality, or simply copy some of the functionality.
75 prefixes = ["qdhcp", "qrouter"]
76 nslist = self.call_ext_prog("ip netns")
77 lease_directories = []
78 if nslist:
79 for nsname in nslist[1].splitlines():
80 prefix, netid = nsname.split('-', 1)
81 if len(netid) > 0 and prefix in prefixes:
82 self.ns_gather_data(nsname)
83 lease_directories.append("/var/lib/%s/dhcp/%s/" %
84 (self.component_name, netid))
85 self.add_copy_specs(lease_directories)
86
87 # TODO: Refactor! Copied from Networking plugin.
88 def get_interface_name(self,ip_addr_out):
89 """Return a dictionary for which key are interface name according to the
90 output of ifconifg-a stored in ifconfig_file.
91 """
92 out={}
93 for line in ip_addr_out[1].splitlines():
94 match=re.match('.*link/ether', line)
95 if match:
96 int=match.string.split(':')[1].lstrip()
97 out[int]=True
98 return out
99
100 def ns_gather_data(self, nsname):
101 cmd_prefix = "ip netns exec %s " % nsname
102 self.add_cmd_output(cmd_prefix + "iptables-save")
103 self.add_cmd_output(cmd_prefix + "ifconfig -a")
104 self.add_cmd_output(cmd_prefix + "route -n")
105 # borrowed from networking plugin
106 ip_addr_out=self.call_ext_prog(cmd_prefix + "ip -o addr")
107 if ip_addr_out:
108 for eth in self.get_interface_name(ip_addr_out):
109 self.add_cmd_output(cmd_prefix + "ethtool "+eth)
110 self.add_cmd_output(cmd_prefix + "ethtool -i "+eth)
111 self.add_cmd_output(cmd_prefix + "ethtool -k "+eth)
112 self.add_cmd_output(cmd_prefix + "ethtool -S "+eth)
113 # Most, if not all, IFs in the namespaces are going to be
114 # virtual. The '-a', '-c' and '-g' options are not likely to be
115 # supported so these ops are not copied from the network
116 # plugin.
117
118 # As all of the bridges are in the "global namespace", we do not need
119 # to gather info on them.
120
121 def gen_pkg_tuple(self, packages):
122 names = []
123 for p in packages:
124 names.append(p % { "comp" : self.component_name })
125 return tuple(names)
126
127 class DebianNeutron(Neutron, DebianPlugin, UbuntuPlugin):
128 """OpenStack Neutron related information for Debian based distributions
129 """
130 package_list_template = ['%(comp)s-common',
131 '%(comp)s-plugin-cisco',
132 '%(comp)s-plugin-linuxbridge-agent',
133 '%(comp)s-plugin-nicira',
134 '%(comp)s-plugin-openvswitch',
135 '%(comp)s-plugin-openvswitch-agent',
136 '%(comp)s-plugin-ryu',
137 '%(comp)s-plugin-ryu-agent',
138 '%(comp)s-server',
139 'python-%(comp)s',
140 'python-%(comp)sclient']
141
142 def setup(self):
143 super(DebianNeutron, self).setup()
144 self.packages = self.gen_pkg_tuple(self.package_list_template)
145 self.add_copy_spec("/etc/sudoers.d/%s_sudoers" % self.component_name)
146
147
148
149 class RedHatNeutron(Neutron, RedHatPlugin):
150 """OpenStack Neutron related information for Red Hat distributions
151 """
152
153 package_list_template = ['openstack-%(comp)s',
154 'openstack-%(comp)s-linuxbridge'
155 'openstack-%(comp)s-metaplugin',
156 'openstack-%(comp)s-openvswitch',
157 'openstack-%(comp)s-bigswitch',
158 'openstack-%(comp)s-brocade',
159 'openstack-%(comp)s-cisco',
160 'openstack-%(comp)s-hyperv',
161 'openstack-%(comp)s-midonet',
162 'openstack-%(comp)s-nec'
163 'openstack-%(comp)s-nicira',
164 'openstack-%(comp)s-plumgrid',
165 'openstack-%(comp)s-ryu',
166 'python-%(comp)s',
167 'python-%(comp)sclient']
168
169 def setup(self):
170 super(RedHatNeutron, self).setup()
171 self.packages = self.gen_pkg_tuple(self.package_list_template)
172 self.add_copy_specs(["/etc/sudoers.d/%s-rootwrap" % self.component_name])
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sos/plugins/neutron.py b/sos/plugins/neutron.py
--- a/sos/plugins/neutron.py
+++ b/sos/plugins/neutron.py
@@ -139,6 +139,9 @@
'python-%(comp)s',
'python-%(comp)sclient']
+ def check_enabled(self):
+ return self.is_installed("%s-common" % self.component_name)
+
def setup(self):
super(DebianNeutron, self).setup()
self.packages = self.gen_pkg_tuple(self.package_list_template)
@@ -166,6 +169,9 @@
'python-%(comp)s',
'python-%(comp)sclient']
+ def check_enabled(self):
+ return self.is_installed("openstack-%s" % self.component_name)
+
def setup(self):
super(RedHatNeutron, self).setup()
self.packages = self.gen_pkg_tuple(self.package_list_template)
| {"golden_diff": "diff --git a/sos/plugins/neutron.py b/sos/plugins/neutron.py\n--- a/sos/plugins/neutron.py\n+++ b/sos/plugins/neutron.py\n@@ -139,6 +139,9 @@\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n \n+ def check_enabled(self):\n+ return self.is_installed(\"%s-common\" % self.component_name)\n+\n def setup(self):\n super(DebianNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n@@ -166,6 +169,9 @@\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n \n+ def check_enabled(self):\n+ return self.is_installed(\"openstack-%s\" % self.component_name)\n+\n def setup(self):\n super(RedHatNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n", "issue": "neutron plugin does not have check_enabled\nThe neutron plugin is runs on non-OpenStack systems, because it has no check_enabled method like the other openstack plugins.\n\nI'm happy to help (say the word), but this plugin only has the one commit and I'm not sure what it's status is.\n\nAlso, it gets the following exception because it is running (which it shouldn't) and it is assuming that network namespaces are enabled.\n\n```\nTraceback (most recent call last):\n File \"/opt/sosreport/embedded/lib/python2.7/site-packages/sos/sosreport.py\", line 940, in setup\n plug.setup()\n File \"/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py\", line 170, in setup\n super(RedHatNeutron, self).setup()\n File \"/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py\", line 49, in setup\n self.netns_dumps()\n File \"/opt/sosreport/embedded/lib/python2.7/site-packages/sos/plugins/neutron.py\", line 80, in netns_dumps\n prefix, netid = nsname.split('-', 1)\nValueError: need more than 1 value to unpack\n```\n\n```\n# ip netns\nObject \"netns\" is unknown, try \"ip help\".\n# cat /etc/redhat-release \nCentOS release 6.3 (Final)\n```\n\n", "before_files": [{"content": "## Copyright (C) 2013 Red Hat, Inc., Brent Eagles <[email protected]>\n\n### This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n\n## You should have received a copy of the GNU General Public License\n## along with this program; if not, write to the Free Software\n## Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nimport os\nimport re\n\nfrom sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin\n\n# The Networking plugin includes most of what is needed from a snapshot\n# of the networking, so we only need to focus on the parts that are specific\n# to OpenStack Networking. The Process plugin should capture the dnsmasq\n# command line. The libvirt plugin grabs the instance's XML definition which\n# has the interface names for an instance. 
So what remains is relevant database\n# info...\n\nclass Neutron(Plugin):\n \"\"\"OpenStack Networking (quantum/neutron) related information\n \"\"\"\n plugin_name = \"neutron\"\n\n option_list = [(\"log\", \"Gathers all Neutron logs\", \"slow\", False),\n (\"quantum\", \"Overrides checks for newer Neutron components\",\n \"fast\", False)]\n\n component_name = \"neutron\"\n\n def setup(self):\n if os.path.exists(\"/etc/neutron/\") and self.get_option(\"quantum\", False):\n self.component_name = self.plugin_name\n else:\n self.component_name = \"quantum\"\n\n self.add_copy_specs([\"/etc/%s/\" % self.component_name,\n \"/var/log/%s/\" % self.component_name])\n\n self.netns_dumps()\n self.get_ovs_dumps()\n\n\n def get_ovs_dumps(self):\n # Check to see if we are using the Open vSwitch plugin. If not we \n # should be able to skip the rest of the dump.\n ovs_conf_check_out = self.call_ext_prog('grep \"^core_plugin.*openvswitch\" ' +\n (\"/etc/%s/*.conf\" + self.component_name))\n\n if not ovs_conf_check_out or len(ovs_conf_check_out[1].splitlines()) == 0:\n return\n\n # The '-s' option enables dumping of packet counters on the\n # ports.\n self.add_cmd_output(\"ovs-dpctl -s show\")\n\n # The '-t 5' adds an upper bound on how long to wait to connect\n # to the Open vSwitch server, avoiding hangs when running sosreport.\n self.add_cmd_output(\"ovs-vsctl -t 5 show\")\n\n def netns_dumps(self):\n # It would've been beautiful if we could get parts of the networking\n # plugin to run in different namespaces. There are a couple of options\n # in the short term: create a local instance and \"borrow\" some of the\n # functionality, or simply copy some of the functionality.\n prefixes = [\"qdhcp\", \"qrouter\"]\n nslist = self.call_ext_prog(\"ip netns\")\n lease_directories = []\n if nslist:\n for nsname in nslist[1].splitlines():\n prefix, netid = nsname.split('-', 1)\n if len(netid) > 0 and prefix in prefixes:\n self.ns_gather_data(nsname)\n lease_directories.append(\"/var/lib/%s/dhcp/%s/\" %\n (self.component_name, netid))\n self.add_copy_specs(lease_directories)\n\n # TODO: Refactor! Copied from Networking plugin.\n def get_interface_name(self,ip_addr_out):\n \"\"\"Return a dictionary for which key are interface name according to the\n output of ifconifg-a stored in ifconfig_file.\n \"\"\"\n out={}\n for line in ip_addr_out[1].splitlines():\n match=re.match('.*link/ether', line)\n if match:\n int=match.string.split(':')[1].lstrip()\n out[int]=True\n return out\n \n def ns_gather_data(self, nsname):\n cmd_prefix = \"ip netns exec %s \" % nsname\n self.add_cmd_output(cmd_prefix + \"iptables-save\")\n self.add_cmd_output(cmd_prefix + \"ifconfig -a\")\n self.add_cmd_output(cmd_prefix + \"route -n\")\n # borrowed from networking plugin\n ip_addr_out=self.call_ext_prog(cmd_prefix + \"ip -o addr\")\n if ip_addr_out:\n for eth in self.get_interface_name(ip_addr_out):\n self.add_cmd_output(cmd_prefix + \"ethtool \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -i \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -k \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -S \"+eth)\n # Most, if not all, IFs in the namespaces are going to be \n # virtual. 
The '-a', '-c' and '-g' options are not likely to be\n # supported so these ops are not copied from the network\n # plugin.\n\n # As all of the bridges are in the \"global namespace\", we do not need\n # to gather info on them.\n\n def gen_pkg_tuple(self, packages):\n names = []\n for p in packages:\n names.append(p % { \"comp\" : self.component_name })\n return tuple(names)\n\nclass DebianNeutron(Neutron, DebianPlugin, UbuntuPlugin):\n \"\"\"OpenStack Neutron related information for Debian based distributions\n \"\"\"\n package_list_template = ['%(comp)s-common',\n '%(comp)s-plugin-cisco',\n '%(comp)s-plugin-linuxbridge-agent',\n '%(comp)s-plugin-nicira',\n '%(comp)s-plugin-openvswitch',\n '%(comp)s-plugin-openvswitch-agent',\n '%(comp)s-plugin-ryu',\n '%(comp)s-plugin-ryu-agent',\n '%(comp)s-server',\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n\n def setup(self):\n super(DebianNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n self.add_copy_spec(\"/etc/sudoers.d/%s_sudoers\" % self.component_name)\n\n\n\nclass RedHatNeutron(Neutron, RedHatPlugin):\n \"\"\"OpenStack Neutron related information for Red Hat distributions\n \"\"\"\n\n package_list_template = ['openstack-%(comp)s', \n 'openstack-%(comp)s-linuxbridge'\n 'openstack-%(comp)s-metaplugin',\n 'openstack-%(comp)s-openvswitch',\n 'openstack-%(comp)s-bigswitch',\n 'openstack-%(comp)s-brocade',\n 'openstack-%(comp)s-cisco',\n 'openstack-%(comp)s-hyperv',\n 'openstack-%(comp)s-midonet',\n 'openstack-%(comp)s-nec'\n 'openstack-%(comp)s-nicira',\n 'openstack-%(comp)s-plumgrid',\n 'openstack-%(comp)s-ryu',\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n\n def setup(self):\n super(RedHatNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n self.add_copy_specs([\"/etc/sudoers.d/%s-rootwrap\" % self.component_name])\n", "path": "sos/plugins/neutron.py"}], "after_files": [{"content": "## Copyright (C) 2013 Red Hat, Inc., Brent Eagles <[email protected]>\n\n### This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n\n## You should have received a copy of the GNU General Public License\n## along with this program; if not, write to the Free Software\n## Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nimport os\nimport re\n\nfrom sos.plugins import Plugin, RedHatPlugin, DebianPlugin, UbuntuPlugin\n\n# The Networking plugin includes most of what is needed from a snapshot\n# of the networking, so we only need to focus on the parts that are specific\n# to OpenStack Networking. The Process plugin should capture the dnsmasq\n# command line. The libvirt plugin grabs the instance's XML definition which\n# has the interface names for an instance. 
So what remains is relevant database\n# info...\n\nclass Neutron(Plugin):\n \"\"\"OpenStack Networking (quantum/neutron) related information\n \"\"\"\n plugin_name = \"neutron\"\n\n option_list = [(\"log\", \"Gathers all Neutron logs\", \"slow\", False),\n (\"quantum\", \"Overrides checks for newer Neutron components\",\n \"fast\", False)]\n\n component_name = \"neutron\"\n\n def setup(self):\n if os.path.exists(\"/etc/neutron/\") and self.get_option(\"quantum\", False):\n self.component_name = self.plugin_name\n else:\n self.component_name = \"quantum\"\n\n self.add_copy_specs([\"/etc/%s/\" % self.component_name,\n \"/var/log/%s/\" % self.component_name])\n\n self.netns_dumps()\n self.get_ovs_dumps()\n\n\n def get_ovs_dumps(self):\n # Check to see if we are using the Open vSwitch plugin. If not we \n # should be able to skip the rest of the dump.\n ovs_conf_check_out = self.call_ext_prog('grep \"^core_plugin.*openvswitch\" ' +\n (\"/etc/%s/*.conf\" + self.component_name))\n\n if not ovs_conf_check_out or len(ovs_conf_check_out[1].splitlines()) == 0:\n return\n\n # The '-s' option enables dumping of packet counters on the\n # ports.\n self.add_cmd_output(\"ovs-dpctl -s show\")\n\n # The '-t 5' adds an upper bound on how long to wait to connect\n # to the Open vSwitch server, avoiding hangs when running sosreport.\n self.add_cmd_output(\"ovs-vsctl -t 5 show\")\n\n def netns_dumps(self):\n # It would've been beautiful if we could get parts of the networking\n # plugin to run in different namespaces. There are a couple of options\n # in the short term: create a local instance and \"borrow\" some of the\n # functionality, or simply copy some of the functionality.\n prefixes = [\"qdhcp\", \"qrouter\"]\n nslist = self.call_ext_prog(\"ip netns\")\n lease_directories = []\n if nslist:\n for nsname in nslist[1].splitlines():\n prefix, netid = nsname.split('-', 1)\n if len(netid) > 0 and prefix in prefixes:\n self.ns_gather_data(nsname)\n lease_directories.append(\"/var/lib/%s/dhcp/%s/\" %\n (self.component_name, netid))\n self.add_copy_specs(lease_directories)\n\n # TODO: Refactor! Copied from Networking plugin.\n def get_interface_name(self,ip_addr_out):\n \"\"\"Return a dictionary for which key are interface name according to the\n output of ifconifg-a stored in ifconfig_file.\n \"\"\"\n out={}\n for line in ip_addr_out[1].splitlines():\n match=re.match('.*link/ether', line)\n if match:\n int=match.string.split(':')[1].lstrip()\n out[int]=True\n return out\n \n def ns_gather_data(self, nsname):\n cmd_prefix = \"ip netns exec %s \" % nsname\n self.add_cmd_output(cmd_prefix + \"iptables-save\")\n self.add_cmd_output(cmd_prefix + \"ifconfig -a\")\n self.add_cmd_output(cmd_prefix + \"route -n\")\n # borrowed from networking plugin\n ip_addr_out=self.call_ext_prog(cmd_prefix + \"ip -o addr\")\n if ip_addr_out:\n for eth in self.get_interface_name(ip_addr_out):\n self.add_cmd_output(cmd_prefix + \"ethtool \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -i \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -k \"+eth)\n self.add_cmd_output(cmd_prefix + \"ethtool -S \"+eth)\n # Most, if not all, IFs in the namespaces are going to be \n # virtual. 
The '-a', '-c' and '-g' options are not likely to be\n # supported so these ops are not copied from the network\n # plugin.\n\n # As all of the bridges are in the \"global namespace\", we do not need\n # to gather info on them.\n\n def gen_pkg_tuple(self, packages):\n names = []\n for p in packages:\n names.append(p % { \"comp\" : self.component_name })\n return tuple(names)\n\nclass DebianNeutron(Neutron, DebianPlugin, UbuntuPlugin):\n \"\"\"OpenStack Neutron related information for Debian based distributions\n \"\"\"\n package_list_template = ['%(comp)s-common',\n '%(comp)s-plugin-cisco',\n '%(comp)s-plugin-linuxbridge-agent',\n '%(comp)s-plugin-nicira',\n '%(comp)s-plugin-openvswitch',\n '%(comp)s-plugin-openvswitch-agent',\n '%(comp)s-plugin-ryu',\n '%(comp)s-plugin-ryu-agent',\n '%(comp)s-server',\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n\n def check_enabled(self):\n return self.is_installed(\"%s-common\" % self.component_name)\n\n def setup(self):\n super(DebianNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n self.add_copy_spec(\"/etc/sudoers.d/%s_sudoers\" % self.component_name)\n\n\n\nclass RedHatNeutron(Neutron, RedHatPlugin):\n \"\"\"OpenStack Neutron related information for Red Hat distributions\n \"\"\"\n\n package_list_template = ['openstack-%(comp)s', \n 'openstack-%(comp)s-linuxbridge'\n 'openstack-%(comp)s-metaplugin',\n 'openstack-%(comp)s-openvswitch',\n 'openstack-%(comp)s-bigswitch',\n 'openstack-%(comp)s-brocade',\n 'openstack-%(comp)s-cisco',\n 'openstack-%(comp)s-hyperv',\n 'openstack-%(comp)s-midonet',\n 'openstack-%(comp)s-nec'\n 'openstack-%(comp)s-nicira',\n 'openstack-%(comp)s-plumgrid',\n 'openstack-%(comp)s-ryu',\n 'python-%(comp)s',\n 'python-%(comp)sclient']\n\n def check_enabled(self):\n return self.is_installed(\"openstack-%s\" % self.component_name)\n\n def setup(self):\n super(RedHatNeutron, self).setup()\n self.packages = self.gen_pkg_tuple(self.package_list_template)\n self.add_copy_specs([\"/etc/sudoers.d/%s-rootwrap\" % self.component_name])\n", "path": "sos/plugins/neutron.py"}]} | 2,705 | 210 |
gh_patches_debug_38040 | rasdani/github-patches | git_diff | getsentry__sentry-16440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User creation should not fail when the account already exists
I'm creating an Ansible role for installing sentry on our network. The software is installed via `pip` -- version 8.22.0, which seems to be the latest as of today. Not using Docker -- just a dedicated VM.
One of the tasks is to create a Sentry-admin:
```
sentry createuser --superuser --no-input --password=FOO [email protected]
```
Unfortunately, the above command only works _the first time_. Trying to run it again fails:
```
django.db.utils.IntegrityError: IntegrityError('duplicate key value violates unique constraint \"auth_user_username_key\"
DETAIL: Key (username)=([email protected]) already exists
```
I understand why the same account cannot be created again. But it should not be an _error_ -- simply update the existing record if necessary. Or, if it must be an error, there should be a specific exit code for this case, which would allow systems like Ansible to move on. Currently, `sentry` exits with the same code 1 for all errors...
At the very least, there needs to be a way to check whether the account already exists... Ansible would then be able to perform the query before attempting to add the user.
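A minimal sketch of such a pre-check (hypothetical, and assuming a configured Sentry/Django environment; the function name is illustrative, not an official Sentry interface):
```python
# Hypothetical sketch -- requires a configured Sentry (Django) environment.
from sentry.models import User

def user_exists(email):
    # `sentry createuser` stores the email as the username as well,
    # so a username lookup covers the duplicate-key error shown above.
    return User.objects.filter(username=email).exists()

if user_exists("[email protected]"):
    print("user already present; skip createuser")
else:
    print("safe to run: sentry createuser --superuser --no-input ...")
```
An Ansible task could run a check like this before calling the CLI, or treat the duplicate-key failure as non-fatal; a dedicated exit code for "user already exists" would make the latter much cleaner.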
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/runner/commands/createuser.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 import click
4 from sentry.runner.decorators import configuration
5
6
7 def _get_field(field_name):
8 from sentry.models import User
9
10 return User._meta.get_field(field_name)
11
12
13 def _get_email():
14 from django.core.exceptions import ValidationError
15
16 rv = click.prompt("Email")
17 field = _get_field("email")
18 try:
19 return field.clean(rv, None)
20 except ValidationError as e:
21 raise click.ClickException("; ".join(e.messages))
22
23
24 def _get_password():
25 from django.core.exceptions import ValidationError
26
27 rv = click.prompt("Password", hide_input=True, confirmation_prompt=True)
28 field = _get_field("password")
29 try:
30 return field.clean(rv, None)
31 except ValidationError as e:
32 raise click.ClickException("; ".join(e.messages))
33
34
35 def _get_superuser():
36 return click.confirm("Should this user be a superuser?", default=False)
37
38
39 @click.command()
40 @click.option("--email")
41 @click.option("--password")
42 @click.option("--superuser/--no-superuser", default=None, is_flag=True)
43 @click.option("--no-password", default=False, is_flag=True)
44 @click.option("--no-input", default=False, is_flag=True)
45 @configuration
46 def createuser(email, password, superuser, no_password, no_input):
47 "Create a new user."
48 if not no_input:
49 if not email:
50 email = _get_email()
51
52 if not (password or no_password):
53 password = _get_password()
54
55 if superuser is None:
56 superuser = _get_superuser()
57
58 if superuser is None:
59 superuser = False
60
61 if not email:
62 raise click.ClickException("Invalid or missing email address.")
63
64 # TODO(mattrobenolt): Accept password over stdin?
65 if not no_password and not password:
66 raise click.ClickException("No password set and --no-password not passed.")
67
68 from sentry import roles
69 from sentry.models import User
70 from django.conf import settings
71
72 user = User(
73 email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True
74 )
75
76 if password:
77 user.set_password(password)
78
79 user.save()
80
81 click.echo("User created: %s" % (email,))
82
83 # TODO(dcramer): kill this when we improve flows
84 if settings.SENTRY_SINGLE_ORGANIZATION:
85 from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
86
87 org = Organization.get_default()
88 if superuser:
89 role = roles.get_top_dog().id
90 else:
91 role = org.default_role
92 member = OrganizationMember.objects.create(organization=org, user=user, role=role)
93
94 # if we've only got a single team let's go ahead and give
95 # access to that team as its likely the desired outcome
96 teams = list(Team.objects.filter(organization=org)[0:2])
97 if len(teams) == 1:
98 OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
99 click.echo("Added to organization: %s" % (org.slug,))
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/runner/commands/createuser.py b/src/sentry/runner/commands/createuser.py
--- a/src/sentry/runner/commands/createuser.py
+++ b/src/sentry/runner/commands/createuser.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import, print_function
import click
+import sys
from sentry.runner.decorators import configuration
@@ -42,8 +43,9 @@
@click.option("--superuser/--no-superuser", default=None, is_flag=True)
@click.option("--no-password", default=False, is_flag=True)
@click.option("--no-input", default=False, is_flag=True)
[email protected]("--force-update", default=False, is_flag=True)
@configuration
-def createuser(email, password, superuser, no_password, no_input):
+def createuser(email, password, superuser, no_password, no_input, force_update):
"Create a new user."
if not no_input:
if not email:
@@ -76,24 +78,31 @@
if password:
user.set_password(password)
- user.save()
-
- click.echo("User created: %s" % (email,))
-
- # TODO(dcramer): kill this when we improve flows
- if settings.SENTRY_SINGLE_ORGANIZATION:
- from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
-
- org = Organization.get_default()
- if superuser:
- role = roles.get_top_dog().id
+ if User.objects.filter(username=email).exists():
+ if force_update:
+ user.save(force_update=force_update)
+ click.echo("User updated: %s" % (email,))
else:
- role = org.default_role
- member = OrganizationMember.objects.create(organization=org, user=user, role=role)
-
- # if we've only got a single team let's go ahead and give
- # access to that team as its likely the desired outcome
- teams = list(Team.objects.filter(organization=org)[0:2])
- if len(teams) == 1:
- OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
- click.echo("Added to organization: %s" % (org.slug,))
+ click.echo("User: %s exists, use --force-update to force" % (email,))
+ sys.exit(3)
+ else:
+ user.save()
+ click.echo("User created: %s" % (email,))
+
+ # TODO(dcramer): kill this when we improve flows
+ if settings.SENTRY_SINGLE_ORGANIZATION:
+ from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
+
+ org = Organization.get_default()
+ if superuser:
+ role = roles.get_top_dog().id
+ else:
+ role = org.default_role
+ member = OrganizationMember.objects.create(organization=org, user=user, role=role)
+
+ # if we've only got a single team let's go ahead and give
+ # access to that team as its likely the desired outcome
+ teams = list(Team.objects.filter(organization=org)[0:2])
+ if len(teams) == 1:
+ OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
+ click.echo("Added to organization: %s" % (org.slug,))
| {"golden_diff": "diff --git a/src/sentry/runner/commands/createuser.py b/src/sentry/runner/commands/createuser.py\n--- a/src/sentry/runner/commands/createuser.py\n+++ b/src/sentry/runner/commands/createuser.py\n@@ -1,6 +1,7 @@\n from __future__ import absolute_import, print_function\n \n import click\n+import sys\n from sentry.runner.decorators import configuration\n \n \n@@ -42,8 +43,9 @@\n @click.option(\"--superuser/--no-superuser\", default=None, is_flag=True)\n @click.option(\"--no-password\", default=False, is_flag=True)\n @click.option(\"--no-input\", default=False, is_flag=True)\[email protected](\"--force-update\", default=False, is_flag=True)\n @configuration\n-def createuser(email, password, superuser, no_password, no_input):\n+def createuser(email, password, superuser, no_password, no_input, force_update):\n \"Create a new user.\"\n if not no_input:\n if not email:\n@@ -76,24 +78,31 @@\n if password:\n user.set_password(password)\n \n- user.save()\n-\n- click.echo(\"User created: %s\" % (email,))\n-\n- # TODO(dcramer): kill this when we improve flows\n- if settings.SENTRY_SINGLE_ORGANIZATION:\n- from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n-\n- org = Organization.get_default()\n- if superuser:\n- role = roles.get_top_dog().id\n+ if User.objects.filter(username=email).exists():\n+ if force_update:\n+ user.save(force_update=force_update)\n+ click.echo(\"User updated: %s\" % (email,))\n else:\n- role = org.default_role\n- member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n-\n- # if we've only got a single team let's go ahead and give\n- # access to that team as its likely the desired outcome\n- teams = list(Team.objects.filter(organization=org)[0:2])\n- if len(teams) == 1:\n- OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n- click.echo(\"Added to organization: %s\" % (org.slug,))\n+ click.echo(\"User: %s exists, use --force-update to force\" % (email,))\n+ sys.exit(3)\n+ else:\n+ user.save()\n+ click.echo(\"User created: %s\" % (email,))\n+\n+ # TODO(dcramer): kill this when we improve flows\n+ if settings.SENTRY_SINGLE_ORGANIZATION:\n+ from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n+\n+ org = Organization.get_default()\n+ if superuser:\n+ role = roles.get_top_dog().id\n+ else:\n+ role = org.default_role\n+ member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n+\n+ # if we've only got a single team let's go ahead and give\n+ # access to that team as its likely the desired outcome\n+ teams = list(Team.objects.filter(organization=org)[0:2])\n+ if len(teams) == 1:\n+ OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n+ click.echo(\"Added to organization: %s\" % (org.slug,))\n", "issue": "User-creation should not fail, when the account already exists\nI'm creating an Ansible role for installing sentry on our network. The software is installed via `pip` -- version 8.22.0, which seems to be the latest as of today. Not using Docker -- just a dedicated VM.\r\n\r\nOne of the tasks is to create a Sentry-admin:\r\n```\r\nsentry createuser --superuser --no-input --password=FOO [email protected]\r\n```\r\nUnfortunately, the above command only works _the first time_. 
Trying to run it again fails:\r\n```\r\ndjango.db.utils.IntegrityError: IntegrityError('duplicate key value violates unique constraint \\\"auth_user_username_key\\\"\r\nDETAIL: Key (username)=([email protected]) already exists\r\n```\r\nI understand, why the same account can not be created again. But it should not be an _error_ -- simply update the existing record if necessary. Or, if it must be an error, there should be a specific exit-code for this case, which would allow systems like Ansible to move on. Currently, `sentry` exits with the same code 1 for all errors...\r\n\r\nAt the very least, there needs to be a way to check, whether the account exists already... Ansible would then be able to perform the query before attempting to add the user.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport click\nfrom sentry.runner.decorators import configuration\n\n\ndef _get_field(field_name):\n from sentry.models import User\n\n return User._meta.get_field(field_name)\n\n\ndef _get_email():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Email\")\n field = _get_field(\"email\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_password():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Password\", hide_input=True, confirmation_prompt=True)\n field = _get_field(\"password\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_superuser():\n return click.confirm(\"Should this user be a superuser?\", default=False)\n\n\[email protected]()\[email protected](\"--email\")\[email protected](\"--password\")\[email protected](\"--superuser/--no-superuser\", default=None, is_flag=True)\[email protected](\"--no-password\", default=False, is_flag=True)\[email protected](\"--no-input\", default=False, is_flag=True)\n@configuration\ndef createuser(email, password, superuser, no_password, no_input):\n \"Create a new user.\"\n if not no_input:\n if not email:\n email = _get_email()\n\n if not (password or no_password):\n password = _get_password()\n\n if superuser is None:\n superuser = _get_superuser()\n\n if superuser is None:\n superuser = False\n\n if not email:\n raise click.ClickException(\"Invalid or missing email address.\")\n\n # TODO(mattrobenolt): Accept password over stdin?\n if not no_password and not password:\n raise click.ClickException(\"No password set and --no-password not passed.\")\n\n from sentry import roles\n from sentry.models import User\n from django.conf import settings\n\n user = User(\n email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True\n )\n\n if password:\n user.set_password(password)\n\n user.save()\n\n click.echo(\"User created: %s\" % (email,))\n\n # TODO(dcramer): kill this when we improve flows\n if settings.SENTRY_SINGLE_ORGANIZATION:\n from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n\n org = Organization.get_default()\n if superuser:\n role = roles.get_top_dog().id\n else:\n role = org.default_role\n member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n\n # if we've only got a single team let's go ahead and give\n # access to that team as its likely the desired outcome\n teams = list(Team.objects.filter(organization=org)[0:2])\n if len(teams) == 1:\n 
OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n click.echo(\"Added to organization: %s\" % (org.slug,))\n", "path": "src/sentry/runner/commands/createuser.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport click\nimport sys\nfrom sentry.runner.decorators import configuration\n\n\ndef _get_field(field_name):\n from sentry.models import User\n\n return User._meta.get_field(field_name)\n\n\ndef _get_email():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Email\")\n field = _get_field(\"email\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_password():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Password\", hide_input=True, confirmation_prompt=True)\n field = _get_field(\"password\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_superuser():\n return click.confirm(\"Should this user be a superuser?\", default=False)\n\n\[email protected]()\[email protected](\"--email\")\[email protected](\"--password\")\[email protected](\"--superuser/--no-superuser\", default=None, is_flag=True)\[email protected](\"--no-password\", default=False, is_flag=True)\[email protected](\"--no-input\", default=False, is_flag=True)\[email protected](\"--force-update\", default=False, is_flag=True)\n@configuration\ndef createuser(email, password, superuser, no_password, no_input, force_update):\n \"Create a new user.\"\n if not no_input:\n if not email:\n email = _get_email()\n\n if not (password or no_password):\n password = _get_password()\n\n if superuser is None:\n superuser = _get_superuser()\n\n if superuser is None:\n superuser = False\n\n if not email:\n raise click.ClickException(\"Invalid or missing email address.\")\n\n # TODO(mattrobenolt): Accept password over stdin?\n if not no_password and not password:\n raise click.ClickException(\"No password set and --no-password not passed.\")\n\n from sentry import roles\n from sentry.models import User\n from django.conf import settings\n\n user = User(\n email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True\n )\n\n if password:\n user.set_password(password)\n\n if User.objects.filter(username=email).exists():\n if force_update:\n user.save(force_update=force_update)\n click.echo(\"User updated: %s\" % (email,))\n else:\n click.echo(\"User: %s exists, use --force-update to force\" % (email,))\n sys.exit(3)\n else:\n user.save()\n click.echo(\"User created: %s\" % (email,))\n\n # TODO(dcramer): kill this when we improve flows\n if settings.SENTRY_SINGLE_ORGANIZATION:\n from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n\n org = Organization.get_default()\n if superuser:\n role = roles.get_top_dog().id\n else:\n role = org.default_role\n member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n\n # if we've only got a single team let's go ahead and give\n # access to that team as its likely the desired outcome\n teams = list(Team.objects.filter(organization=org)[0:2])\n if len(teams) == 1:\n OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n click.echo(\"Added to organization: %s\" % (org.slug,))\n", "path": "src/sentry/runner/commands/createuser.py"}]} | 1,425 | 760 |
gh_patches_debug_67499 | rasdani/github-patches | git_diff | vllm-project__vllm-3176 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Why does vllm==0.3.3 need to access Google?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/utils.py`
Content:
```
1 import enum
2 import os
3 import socket
4 import subprocess
5 import uuid
6 from platform import uname
7 from typing import List, Tuple, Union
8 from packaging.version import parse, Version
9
10 import psutil
11 import torch
12 import asyncio
13 from functools import partial
14 from typing import (
15 Awaitable,
16 Callable,
17 TypeVar,
18 )
19 from collections import OrderedDict
20 from typing import Any, Hashable, Optional
21
22 from vllm.logger import init_logger
23
24 T = TypeVar("T")
25 logger = init_logger(__name__)
26
27 STR_DTYPE_TO_TORCH_DTYPE = {
28 "half": torch.half,
29 "bfloat16": torch.bfloat16,
30 "float": torch.float,
31 "fp8_e5m2": torch.uint8,
32 }
33
34
35 class Device(enum.Enum):
36 GPU = enum.auto()
37 CPU = enum.auto()
38
39
40 class Counter:
41
42 def __init__(self, start: int = 0) -> None:
43 self.counter = start
44
45 def __next__(self) -> int:
46 i = self.counter
47 self.counter += 1
48 return i
49
50 def reset(self) -> None:
51 self.counter = 0
52
53
54 class LRUCache:
55
56 def __init__(self, capacity: int):
57 self.cache = OrderedDict()
58 self.capacity = capacity
59
60 def __contains__(self, key: Hashable) -> bool:
61 return key in self.cache
62
63 def __len__(self) -> int:
64 return len(self.cache)
65
66 def __getitem__(self, key: Hashable) -> Any:
67 return self.get(key)
68
69 def __setitem__(self, key: Hashable, value: Any) -> None:
70 self.put(key, value)
71
72 def __delitem__(self, key: Hashable) -> None:
73 self.pop(key)
74
75 def touch(self, key: Hashable) -> None:
76 self.cache.move_to_end(key)
77
78 def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:
79 if key in self.cache:
80 value = self.cache[key]
81 self.cache.move_to_end(key)
82 else:
83 value = default_value
84 return value
85
86 def put(self, key: Hashable, value: Any) -> None:
87 self.cache[key] = value
88 self.cache.move_to_end(key)
89 self._remove_old_if_needed()
90
91 def _on_remove(self, key: Hashable, value: Any):
92 pass
93
94 def remove_oldest(self):
95 if not self.cache:
96 return
97 key, value = self.cache.popitem(last=False)
98 self._on_remove(key, value)
99
100 def _remove_old_if_needed(self) -> None:
101 while len(self.cache) > self.capacity:
102 self.remove_oldest()
103
104 def pop(self, key: int, default_value: Optional[Any] = None) -> Any:
105 run_on_remove = key in self.cache
106 value = self.cache.pop(key, default_value)
107 if run_on_remove:
108 self._on_remove(key, value)
109 return value
110
111 def clear(self):
112 while len(self.cache) > 0:
113 self.remove_oldest()
114 self.cache.clear()
115
116
117 def is_hip() -> bool:
118 return torch.version.hip is not None
119
120
121 def is_neuron() -> bool:
122 try:
123 import transformers_neuronx
124 except ImportError:
125 transformers_neuronx = None
126 return transformers_neuronx is not None
127
128
129 def get_max_shared_memory_bytes(gpu: int = 0) -> int:
130 """Returns the maximum shared memory per thread block in bytes."""
131 # NOTE: This import statement should be executed lazily since
132 # the Neuron-X backend does not have the `cuda_utils` module.
133 from vllm._C import cuda_utils
134
135 max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(
136 gpu)
137 # value 0 will cause MAX_SEQ_LEN become negative and test_attention.py will fail
138 assert max_shared_mem > 0, "max_shared_mem can not be zero"
139 return int(max_shared_mem)
140
141
142 def get_cpu_memory() -> int:
143 """Returns the total CPU memory of the node in bytes."""
144 return psutil.virtual_memory().total
145
146
147 def random_uuid() -> str:
148 return str(uuid.uuid4().hex)
149
150
151 def in_wsl() -> bool:
152 # Reference: https://github.com/microsoft/WSL/issues/4071
153 return "microsoft" in " ".join(uname()).lower()
154
155
156 def make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:
157 """Take a blocking function, and run it on in an executor thread.
158
159 This function prevents the blocking function from blocking the
160 asyncio event loop.
161 The code in this function needs to be thread safe.
162 """
163
164 def _async_wrapper(*args, **kwargs) -> asyncio.Future:
165 loop = asyncio.get_event_loop()
166 p_func = partial(func, *args, **kwargs)
167 return loop.run_in_executor(executor=None, func=p_func)
168
169 return _async_wrapper
170
171
172 def get_ip() -> str:
173 # try ipv4
174 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
175 try:
176 s.connect(("dns.google", 80)) # Doesn't need to be reachable
177 return s.getsockname()[0]
178 except OSError:
179 # try ipv6
180 s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
181 s.connect(("dns.google", 80))
182 return s.getsockname()[0]
183
184
185 def get_distributed_init_method(ip: str, port: int) -> str:
186 return f"tcp://{ip}:{port}"
187
188
189 def get_open_port() -> int:
190 # try ipv4
191 try:
192 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
193 s.bind(("", 0))
194 return s.getsockname()[1]
195 except OSError:
196 # try ipv6
197 with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
198 s.bind(("", 0))
199 return s.getsockname()[1]
200
201
202 def set_cuda_visible_devices(device_ids: List[int]) -> None:
203 os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, device_ids))
204
205
206 def get_nvcc_cuda_version() -> Optional[Version]:
207 cuda_home = os.environ.get('CUDA_HOME')
208 if not cuda_home:
209 cuda_home = '/usr/local/cuda'
210 if os.path.isfile(cuda_home + '/bin/nvcc'):
211 logger.info(
212 f'CUDA_HOME is not found in the environment. Using {cuda_home} as CUDA_HOME.'
213 )
214 else:
215 logger.warning(
216 f'Not found nvcc in {cuda_home}. Skip cuda version check!')
217 return None
218 nvcc_output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"],
219 universal_newlines=True)
220 output = nvcc_output.split()
221 release_idx = output.index("release") + 1
222 nvcc_cuda_version = parse(output[release_idx].split(",")[0])
223 return nvcc_cuda_version
224
225
226 def _generate_random_fp8_e5m2(
227 tensor: torch.tensor,
228 low: float,
229 high: float,
230 ) -> None:
231 # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,
232 # it may occur Inf or NaN if we directly use torch.randint
233 # to generate random data for fp8 data.
234 # For example, s.11111.00 in fp8e5m2 format represents Inf.
235 # | E4M3 | E5M2
236 #-----|-------------|-------------------
237 # Inf | N/A | s.11111.00
238 # NaN | s.1111.111 | s.11111.{01,10,11}
239 from vllm._C import cache_ops
240 tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)
241 tensor_tmp.uniform_(low, high)
242 cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)
243 del tensor_tmp
244
245
246 def create_kv_caches_with_random(
247 num_blocks: int,
248 block_size: int,
249 num_layers: int,
250 num_heads: int,
251 head_size: int,
252 cache_dtype: Optional[Union[str, torch.dtype]],
253 model_dtype: Optional[Union[str, torch.dtype]] = None,
254 seed: Optional[int] = 0,
255 device: Optional[str] = "cuda",
256 ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
257 torch.random.manual_seed(seed)
258 if torch.cuda.is_available():
259 torch.cuda.manual_seed(seed)
260
261 if isinstance(cache_dtype, str):
262 if cache_dtype == "auto":
263 if isinstance(model_dtype, str):
264 torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]
265 elif isinstance(model_dtype, torch.dtype):
266 torch_dtype = model_dtype
267 else:
268 raise ValueError(f"Invalid model dtype: {model_dtype}")
269 elif cache_dtype in ["half", "bfloat16", "float"]:
270 torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]
271 elif cache_dtype == "fp8_e5m2":
272 torch_dtype = torch.uint8
273 else:
274 raise ValueError(f"Invalid kv cache dtype: {cache_dtype}")
275 elif isinstance(cache_dtype, torch.dtype):
276 torch_dtype = cache_dtype
277 else:
278 raise ValueError(f"Invalid kv cache dtype: {cache_dtype}")
279
280 scale = head_size**-0.5
281 x = 16 // torch.tensor([], dtype=torch_dtype).element_size()
282 key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)
283 key_caches = []
284 for _ in range(num_layers):
285 key_cache = torch.empty(size=key_cache_shape,
286 dtype=torch_dtype,
287 device=device)
288 if cache_dtype == 'fp8_e5m2':
289 _generate_random_fp8_e5m2(key_cache, -scale, scale)
290 elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:
291 key_cache.uniform_(-scale, scale)
292 else:
293 raise ValueError(
294 f"Does not support key cache of type {cache_dtype}")
295 key_caches.append(key_cache)
296
297 value_cache_shape = (num_blocks, num_heads, head_size, block_size)
298 value_caches = []
299 for _ in range(num_layers):
300 value_cache = torch.empty(size=value_cache_shape,
301 dtype=torch_dtype,
302 device=device)
303 if cache_dtype == 'fp8_e5m2':
304 _generate_random_fp8_e5m2(value_cache, -scale, scale)
305 elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:
306 value_cache.uniform_(-scale, scale)
307 else:
308 raise ValueError(
309 f"Does not support value cache of type {cache_dtype}")
310 value_caches.append(value_cache)
311 return key_caches, value_caches
312
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/utils.py b/vllm/utils.py
--- a/vllm/utils.py
+++ b/vllm/utils.py
@@ -173,7 +173,7 @@
# try ipv4
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
- s.connect(("dns.google", 80)) # Doesn't need to be reachable
+ s.connect(("8.8.8.8", 80)) # Doesn't need to be reachable
return s.getsockname()[0]
except OSError:
# try ipv6
| {"golden_diff": "diff --git a/vllm/utils.py b/vllm/utils.py\n--- a/vllm/utils.py\n+++ b/vllm/utils.py\n@@ -173,7 +173,7 @@\n # try ipv4\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n- s.connect((\"dns.google\", 80)) # Doesn't need to be reachable\n+ s.connect((\"8.8.8.8\", 80)) # Doesn't need to be reachable\n return s.getsockname()[0]\n except OSError:\n # try ipv6\n", "issue": "why vllm==0.3.3 need to access google\n\r\n\n", "before_files": [{"content": "import enum\nimport os\nimport socket\nimport subprocess\nimport uuid\nfrom platform import uname\nfrom typing import List, Tuple, Union\nfrom packaging.version import parse, Version\n\nimport psutil\nimport torch\nimport asyncio\nfrom functools import partial\nfrom typing import (\n Awaitable,\n Callable,\n TypeVar,\n)\nfrom collections import OrderedDict\nfrom typing import Any, Hashable, Optional\n\nfrom vllm.logger import init_logger\n\nT = TypeVar(\"T\")\nlogger = init_logger(__name__)\n\nSTR_DTYPE_TO_TORCH_DTYPE = {\n \"half\": torch.half,\n \"bfloat16\": torch.bfloat16,\n \"float\": torch.float,\n \"fp8_e5m2\": torch.uint8,\n}\n\n\nclass Device(enum.Enum):\n GPU = enum.auto()\n CPU = enum.auto()\n\n\nclass Counter:\n\n def __init__(self, start: int = 0) -> None:\n self.counter = start\n\n def __next__(self) -> int:\n i = self.counter\n self.counter += 1\n return i\n\n def reset(self) -> None:\n self.counter = 0\n\n\nclass LRUCache:\n\n def __init__(self, capacity: int):\n self.cache = OrderedDict()\n self.capacity = capacity\n\n def __contains__(self, key: Hashable) -> bool:\n return key in self.cache\n\n def __len__(self) -> int:\n return len(self.cache)\n\n def __getitem__(self, key: Hashable) -> Any:\n return self.get(key)\n\n def __setitem__(self, key: Hashable, value: Any) -> None:\n self.put(key, value)\n\n def __delitem__(self, key: Hashable) -> None:\n self.pop(key)\n\n def touch(self, key: Hashable) -> None:\n self.cache.move_to_end(key)\n\n def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:\n if key in self.cache:\n value = self.cache[key]\n self.cache.move_to_end(key)\n else:\n value = default_value\n return value\n\n def put(self, key: Hashable, value: Any) -> None:\n self.cache[key] = value\n self.cache.move_to_end(key)\n self._remove_old_if_needed()\n\n def _on_remove(self, key: Hashable, value: Any):\n pass\n\n def remove_oldest(self):\n if not self.cache:\n return\n key, value = self.cache.popitem(last=False)\n self._on_remove(key, value)\n\n def _remove_old_if_needed(self) -> None:\n while len(self.cache) > self.capacity:\n self.remove_oldest()\n\n def pop(self, key: int, default_value: Optional[Any] = None) -> Any:\n run_on_remove = key in self.cache\n value = self.cache.pop(key, default_value)\n if run_on_remove:\n self._on_remove(key, value)\n return value\n\n def clear(self):\n while len(self.cache) > 0:\n self.remove_oldest()\n self.cache.clear()\n\n\ndef is_hip() -> bool:\n return torch.version.hip is not None\n\n\ndef is_neuron() -> bool:\n try:\n import transformers_neuronx\n except ImportError:\n transformers_neuronx = None\n return transformers_neuronx is not None\n\n\ndef get_max_shared_memory_bytes(gpu: int = 0) -> int:\n \"\"\"Returns the maximum shared memory per thread block in bytes.\"\"\"\n # NOTE: This import statement should be executed lazily since\n # the Neuron-X backend does not have the `cuda_utils` module.\n from vllm._C import cuda_utils\n\n max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(\n gpu)\n # value 0 will cause 
MAX_SEQ_LEN become negative and test_attention.py will fail\n assert max_shared_mem > 0, \"max_shared_mem can not be zero\"\n return int(max_shared_mem)\n\n\ndef get_cpu_memory() -> int:\n \"\"\"Returns the total CPU memory of the node in bytes.\"\"\"\n return psutil.virtual_memory().total\n\n\ndef random_uuid() -> str:\n return str(uuid.uuid4().hex)\n\n\ndef in_wsl() -> bool:\n # Reference: https://github.com/microsoft/WSL/issues/4071\n return \"microsoft\" in \" \".join(uname()).lower()\n\n\ndef make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:\n \"\"\"Take a blocking function, and run it on in an executor thread.\n\n This function prevents the blocking function from blocking the\n asyncio event loop.\n The code in this function needs to be thread safe.\n \"\"\"\n\n def _async_wrapper(*args, **kwargs) -> asyncio.Future:\n loop = asyncio.get_event_loop()\n p_func = partial(func, *args, **kwargs)\n return loop.run_in_executor(executor=None, func=p_func)\n\n return _async_wrapper\n\n\ndef get_ip() -> str:\n # try ipv4\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n s.connect((\"dns.google\", 80)) # Doesn't need to be reachable\n return s.getsockname()[0]\n except OSError:\n # try ipv6\n s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)\n s.connect((\"dns.google\", 80))\n return s.getsockname()[0]\n\n\ndef get_distributed_init_method(ip: str, port: int) -> str:\n return f\"tcp://{ip}:{port}\"\n\n\ndef get_open_port() -> int:\n # try ipv4\n try:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n except OSError:\n # try ipv6\n with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n\n\ndef set_cuda_visible_devices(device_ids: List[int]) -> None:\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = \",\".join(map(str, device_ids))\n\n\ndef get_nvcc_cuda_version() -> Optional[Version]:\n cuda_home = os.environ.get('CUDA_HOME')\n if not cuda_home:\n cuda_home = '/usr/local/cuda'\n if os.path.isfile(cuda_home + '/bin/nvcc'):\n logger.info(\n f'CUDA_HOME is not found in the environment. Using {cuda_home} as CUDA_HOME.'\n )\n else:\n logger.warning(\n f'Not found nvcc in {cuda_home}. 
Skip cuda version check!')\n return None\n nvcc_output = subprocess.check_output([cuda_home + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef _generate_random_fp8_e5m2(\n tensor: torch.tensor,\n low: float,\n high: float,\n) -> None:\n # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,\n # it may occur Inf or NaN if we directly use torch.randint\n # to generate random data for fp8 data.\n # For example, s.11111.00 in fp8e5m2 format represents Inf.\n # | E4M3 | E5M2\n #-----|-------------|-------------------\n # Inf | N/A | s.11111.00\n # NaN | s.1111.111 | s.11111.{01,10,11}\n from vllm._C import cache_ops\n tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)\n tensor_tmp.uniform_(low, high)\n cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)\n del tensor_tmp\n\n\ndef create_kv_caches_with_random(\n num_blocks: int,\n block_size: int,\n num_layers: int,\n num_heads: int,\n head_size: int,\n cache_dtype: Optional[Union[str, torch.dtype]],\n model_dtype: Optional[Union[str, torch.dtype]] = None,\n seed: Optional[int] = 0,\n device: Optional[str] = \"cuda\",\n) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:\n torch.random.manual_seed(seed)\n if torch.cuda.is_available():\n torch.cuda.manual_seed(seed)\n\n if isinstance(cache_dtype, str):\n if cache_dtype == \"auto\":\n if isinstance(model_dtype, str):\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]\n elif isinstance(model_dtype, torch.dtype):\n torch_dtype = model_dtype\n else:\n raise ValueError(f\"Invalid model dtype: {model_dtype}\")\n elif cache_dtype in [\"half\", \"bfloat16\", \"float\"]:\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]\n elif cache_dtype == \"fp8_e5m2\":\n torch_dtype = torch.uint8\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n elif isinstance(cache_dtype, torch.dtype):\n torch_dtype = cache_dtype\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n\n scale = head_size**-0.5\n x = 16 // torch.tensor([], dtype=torch_dtype).element_size()\n key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)\n key_caches = []\n for _ in range(num_layers):\n key_cache = torch.empty(size=key_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(key_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n key_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support key cache of type {cache_dtype}\")\n key_caches.append(key_cache)\n\n value_cache_shape = (num_blocks, num_heads, head_size, block_size)\n value_caches = []\n for _ in range(num_layers):\n value_cache = torch.empty(size=value_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(value_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n value_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support value cache of type {cache_dtype}\")\n value_caches.append(value_cache)\n return key_caches, value_caches\n", "path": "vllm/utils.py"}], "after_files": [{"content": "import enum\nimport os\nimport socket\nimport subprocess\nimport uuid\nfrom platform import uname\nfrom typing import List, Tuple, Union\nfrom packaging.version import parse, Version\n\nimport psutil\nimport 
torch\nimport asyncio\nfrom functools import partial\nfrom typing import (\n Awaitable,\n Callable,\n TypeVar,\n)\nfrom collections import OrderedDict\nfrom typing import Any, Hashable, Optional\n\nfrom vllm.logger import init_logger\n\nT = TypeVar(\"T\")\nlogger = init_logger(__name__)\n\nSTR_DTYPE_TO_TORCH_DTYPE = {\n \"half\": torch.half,\n \"bfloat16\": torch.bfloat16,\n \"float\": torch.float,\n \"fp8_e5m2\": torch.uint8,\n}\n\n\nclass Device(enum.Enum):\n GPU = enum.auto()\n CPU = enum.auto()\n\n\nclass Counter:\n\n def __init__(self, start: int = 0) -> None:\n self.counter = start\n\n def __next__(self) -> int:\n i = self.counter\n self.counter += 1\n return i\n\n def reset(self) -> None:\n self.counter = 0\n\n\nclass LRUCache:\n\n def __init__(self, capacity: int):\n self.cache = OrderedDict()\n self.capacity = capacity\n\n def __contains__(self, key: Hashable) -> bool:\n return key in self.cache\n\n def __len__(self) -> int:\n return len(self.cache)\n\n def __getitem__(self, key: Hashable) -> Any:\n return self.get(key)\n\n def __setitem__(self, key: Hashable, value: Any) -> None:\n self.put(key, value)\n\n def __delitem__(self, key: Hashable) -> None:\n self.pop(key)\n\n def touch(self, key: Hashable) -> None:\n self.cache.move_to_end(key)\n\n def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:\n if key in self.cache:\n value = self.cache[key]\n self.cache.move_to_end(key)\n else:\n value = default_value\n return value\n\n def put(self, key: Hashable, value: Any) -> None:\n self.cache[key] = value\n self.cache.move_to_end(key)\n self._remove_old_if_needed()\n\n def _on_remove(self, key: Hashable, value: Any):\n pass\n\n def remove_oldest(self):\n if not self.cache:\n return\n key, value = self.cache.popitem(last=False)\n self._on_remove(key, value)\n\n def _remove_old_if_needed(self) -> None:\n while len(self.cache) > self.capacity:\n self.remove_oldest()\n\n def pop(self, key: int, default_value: Optional[Any] = None) -> Any:\n run_on_remove = key in self.cache\n value = self.cache.pop(key, default_value)\n if run_on_remove:\n self._on_remove(key, value)\n return value\n\n def clear(self):\n while len(self.cache) > 0:\n self.remove_oldest()\n self.cache.clear()\n\n\ndef is_hip() -> bool:\n return torch.version.hip is not None\n\n\ndef is_neuron() -> bool:\n try:\n import transformers_neuronx\n except ImportError:\n transformers_neuronx = None\n return transformers_neuronx is not None\n\n\ndef get_max_shared_memory_bytes(gpu: int = 0) -> int:\n \"\"\"Returns the maximum shared memory per thread block in bytes.\"\"\"\n # NOTE: This import statement should be executed lazily since\n # the Neuron-X backend does not have the `cuda_utils` module.\n from vllm._C import cuda_utils\n\n max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(\n gpu)\n # value 0 will cause MAX_SEQ_LEN become negative and test_attention.py will fail\n assert max_shared_mem > 0, \"max_shared_mem can not be zero\"\n return int(max_shared_mem)\n\n\ndef get_cpu_memory() -> int:\n \"\"\"Returns the total CPU memory of the node in bytes.\"\"\"\n return psutil.virtual_memory().total\n\n\ndef random_uuid() -> str:\n return str(uuid.uuid4().hex)\n\n\ndef in_wsl() -> bool:\n # Reference: https://github.com/microsoft/WSL/issues/4071\n return \"microsoft\" in \" \".join(uname()).lower()\n\n\ndef make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:\n \"\"\"Take a blocking function, and run it on in an executor thread.\n\n This function prevents the blocking 
function from blocking the\n asyncio event loop.\n The code in this function needs to be thread safe.\n \"\"\"\n\n def _async_wrapper(*args, **kwargs) -> asyncio.Future:\n loop = asyncio.get_event_loop()\n p_func = partial(func, *args, **kwargs)\n return loop.run_in_executor(executor=None, func=p_func)\n\n return _async_wrapper\n\n\ndef get_ip() -> str:\n # try ipv4\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n s.connect((\"8.8.8.8\", 80)) # Doesn't need to be reachable\n return s.getsockname()[0]\n except OSError:\n # try ipv6\n s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)\n s.connect((\"dns.google\", 80))\n return s.getsockname()[0]\n\n\ndef get_distributed_init_method(ip: str, port: int) -> str:\n return f\"tcp://{ip}:{port}\"\n\n\ndef get_open_port() -> int:\n # try ipv4\n try:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n except OSError:\n # try ipv6\n with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n\n\ndef set_cuda_visible_devices(device_ids: List[int]) -> None:\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = \",\".join(map(str, device_ids))\n\n\ndef get_nvcc_cuda_version() -> Optional[Version]:\n cuda_home = os.environ.get('CUDA_HOME')\n if not cuda_home:\n cuda_home = '/usr/local/cuda'\n if os.path.isfile(cuda_home + '/bin/nvcc'):\n logger.info(\n f'CUDA_HOME is not found in the environment. Using {cuda_home} as CUDA_HOME.'\n )\n else:\n logger.warning(\n f'Not found nvcc in {cuda_home}. Skip cuda version check!')\n return None\n nvcc_output = subprocess.check_output([cuda_home + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef _generate_random_fp8_e5m2(\n tensor: torch.tensor,\n low: float,\n high: float,\n) -> None:\n # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,\n # it may occur Inf or NaN if we directly use torch.randint\n # to generate random data for fp8 data.\n # For example, s.11111.00 in fp8e5m2 format represents Inf.\n # | E4M3 | E5M2\n #-----|-------------|-------------------\n # Inf | N/A | s.11111.00\n # NaN | s.1111.111 | s.11111.{01,10,11}\n from vllm._C import cache_ops\n tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)\n tensor_tmp.uniform_(low, high)\n cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)\n del tensor_tmp\n\n\ndef create_kv_caches_with_random(\n num_blocks: int,\n block_size: int,\n num_layers: int,\n num_heads: int,\n head_size: int,\n cache_dtype: Optional[Union[str, torch.dtype]],\n model_dtype: Optional[Union[str, torch.dtype]] = None,\n seed: Optional[int] = 0,\n device: Optional[str] = \"cuda\",\n) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:\n torch.random.manual_seed(seed)\n if torch.cuda.is_available():\n torch.cuda.manual_seed(seed)\n\n if isinstance(cache_dtype, str):\n if cache_dtype == \"auto\":\n if isinstance(model_dtype, str):\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]\n elif isinstance(model_dtype, torch.dtype):\n torch_dtype = model_dtype\n else:\n raise ValueError(f\"Invalid model dtype: {model_dtype}\")\n elif cache_dtype in [\"half\", \"bfloat16\", \"float\"]:\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]\n elif cache_dtype == \"fp8_e5m2\":\n torch_dtype = torch.uint8\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n elif 
isinstance(cache_dtype, torch.dtype):\n torch_dtype = cache_dtype\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n\n scale = head_size**-0.5\n x = 16 // torch.tensor([], dtype=torch_dtype).element_size()\n key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)\n key_caches = []\n for _ in range(num_layers):\n key_cache = torch.empty(size=key_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(key_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n key_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support key cache of type {cache_dtype}\")\n key_caches.append(key_cache)\n\n value_cache_shape = (num_blocks, num_heads, head_size, block_size)\n value_caches = []\n for _ in range(num_layers):\n value_cache = torch.empty(size=value_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(value_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n value_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support value cache of type {cache_dtype}\")\n value_caches.append(value_cache)\n return key_caches, value_caches\n", "path": "vllm/utils.py"}]} | 3,594 | 134 |
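
The golden diff in the record above resolves the reported Google access by replacing the hostname `"dns.google"` with the literal address `"8.8.8.8"` in the IPv4 probe, so `get_ip()` no longer triggers a DNS lookup when discovering the local address. Below is a minimal standalone sketch of that same probe, for illustration only; the function name is arbitrary, and the UDP "connect" does not actually send traffic, it just makes the kernel pick an outbound interface.

```python
import socket

def get_local_ip() -> str:
    # Sketch of the approach used in the diff above: "connect" a UDP socket
    # to a literal IP so the kernel selects a source address without any
    # DNS resolution; no packets are sent by connect() on SOCK_DGRAM.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # address does not need to be reachable
        return s.getsockname()[0]
    finally:
        s.close()
```
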
gh_patches_debug_6595 | rasdani/github-patches | git_diff | vispy__vispy-712 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump VisPy version?
Should we bump the version to 0.4.0-dev or something? Currently it is 0.3.0, which doesn't seem right.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014, Vispy Development Team.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """
6
7 =====
8 Vispy
9 =====
10
11 Vispy is a **high-performance interactive 2D/3D data visualization
12 library**. Vispy leverages the computational power of modern **Graphics
13 Processing Units (GPUs)** through the **OpenGL** library to display very
14 large datasets.
15
16 For more information, see http://vispy.org.
17
18 """
19
20 from __future__ import division
21
22 __all__ = ['use', 'sys_info', 'set_log_level', 'test']
23
24 # Definition of the version number
25 version_info = 0, 3, 0, '' # major, minor, patch, extra
26
27 # Nice string for the version (mimic how IPython composes its version str)
28 __version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')
29
30 from .util import config, set_log_level, keys, sys_info # noqa
31 from .util.wrappers import use # noqa
32 from .testing import test # noqa
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/__init__.py b/vispy/__init__.py
--- a/vispy/__init__.py
+++ b/vispy/__init__.py
@@ -22,7 +22,7 @@
__all__ = ['use', 'sys_info', 'set_log_level', 'test']
# Definition of the version number
-version_info = 0, 3, 0, '' # major, minor, patch, extra
+version_info = 0, 4, 0, 'dev' # major, minor, patch, extra
# Nice string for the version (mimic how IPython composes its version str)
__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')
| {"golden_diff": "diff --git a/vispy/__init__.py b/vispy/__init__.py\n--- a/vispy/__init__.py\n+++ b/vispy/__init__.py\n@@ -22,7 +22,7 @@\n __all__ = ['use', 'sys_info', 'set_log_level', 'test']\n \n # Definition of the version number\n-version_info = 0, 3, 0, '' # major, minor, patch, extra\n+version_info = 0, 4, 0, 'dev' # major, minor, patch, extra\n \n # Nice string for the version (mimic how IPython composes its version str)\n __version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n", "issue": "Bump VisPy version?\nShould we bump the version to 0.4.0-dev or something? Currently it is 0.3.0, which doesn't seem right.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\n# Definition of the version number\nversion_info = 0, 3, 0, '' # major, minor, patch, extra\n\n# Nice string for the version (mimic how IPython composes its version str)\n__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n", "path": "vispy/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\n# Definition of the version number\nversion_info = 0, 4, 0, 'dev' # major, minor, patch, extra\n\n# Nice string for the version (mimic how IPython composes its version str)\n__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n", "path": "vispy/__init__.py"}]} | 612 | 169 |
gh_patches_debug_23769 | rasdani/github-patches | git_diff | pyca__cryptography-1201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make it harder to allow truncation on a GCM tag
`GCM()` should take a `min_tag_length` parameter, which defaults to `GCM.MAX_TAG_LENGTH`, then we can make it error out on truncated tags by default, while still allowing truncation for use cases that need it.
This is technically a backwards compatibility break, but I argue that it should be allowed as a security hardening measure. Thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/__about__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13 from __future__ import absolute_import, division, print_function
14
15 __all__ = [
16 "__title__", "__summary__", "__uri__", "__version__", "__author__",
17 "__email__", "__license__", "__copyright__",
18 ]
19
20 __title__ = "cryptography"
21 __summary__ = ("cryptography is a package which provides cryptographic recipes"
22 " and primitives to Python developers.")
23 __uri__ = "https://github.com/pyca/cryptography"
24
25 __version__ = "0.5.dev1"
26
27 __author__ = "The cryptography developers"
28 __email__ = "[email protected]"
29
30 __license__ = "Apache License, Version 2.0"
31 __copyright__ = "Copyright 2013-2014 %s" % __author__
32
```
Path: `cryptography/hazmat/primitives/ciphers/modes.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.hazmat.primitives import interfaces
18
19
20 def _check_iv_length(mode, algorithm):
21 if len(mode.initialization_vector) * 8 != algorithm.block_size:
22 raise ValueError("Invalid IV size ({0}) for {1}.".format(
23 len(mode.initialization_vector), mode.name
24 ))
25
26
27 @utils.register_interface(interfaces.Mode)
28 @utils.register_interface(interfaces.ModeWithInitializationVector)
29 class CBC(object):
30 name = "CBC"
31
32 def __init__(self, initialization_vector):
33 self.initialization_vector = initialization_vector
34
35 validate_for_algorithm = _check_iv_length
36
37
38 @utils.register_interface(interfaces.Mode)
39 class ECB(object):
40 name = "ECB"
41
42 def validate_for_algorithm(self, algorithm):
43 pass
44
45
46 @utils.register_interface(interfaces.Mode)
47 @utils.register_interface(interfaces.ModeWithInitializationVector)
48 class OFB(object):
49 name = "OFB"
50
51 def __init__(self, initialization_vector):
52 self.initialization_vector = initialization_vector
53
54 validate_for_algorithm = _check_iv_length
55
56
57 @utils.register_interface(interfaces.Mode)
58 @utils.register_interface(interfaces.ModeWithInitializationVector)
59 class CFB(object):
60 name = "CFB"
61
62 def __init__(self, initialization_vector):
63 self.initialization_vector = initialization_vector
64
65 validate_for_algorithm = _check_iv_length
66
67
68 @utils.register_interface(interfaces.Mode)
69 @utils.register_interface(interfaces.ModeWithInitializationVector)
70 class CFB8(object):
71 name = "CFB8"
72
73 def __init__(self, initialization_vector):
74 self.initialization_vector = initialization_vector
75
76 validate_for_algorithm = _check_iv_length
77
78
79 @utils.register_interface(interfaces.Mode)
80 @utils.register_interface(interfaces.ModeWithNonce)
81 class CTR(object):
82 name = "CTR"
83
84 def __init__(self, nonce):
85 self.nonce = nonce
86
87 def validate_for_algorithm(self, algorithm):
88 if len(self.nonce) * 8 != algorithm.block_size:
89 raise ValueError("Invalid nonce size ({0}) for {1}.".format(
90 len(self.nonce), self.name
91 ))
92
93
94 @utils.register_interface(interfaces.Mode)
95 @utils.register_interface(interfaces.ModeWithInitializationVector)
96 @utils.register_interface(interfaces.ModeWithAuthenticationTag)
97 class GCM(object):
98 name = "GCM"
99
100 def __init__(self, initialization_vector, tag=None):
101 # len(initialization_vector) must in [1, 2 ** 64), but it's impossible
102 # to actually construct a bytes object that large, so we don't check
103 # for it
104 if tag is not None and len(tag) < 4:
105 raise ValueError(
106 "Authentication tag must be 4 bytes or longer."
107 )
108
109 self.initialization_vector = initialization_vector
110 self.tag = tag
111
112 def validate_for_algorithm(self, algorithm):
113 pass
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/__about__.py b/cryptography/__about__.py
--- a/cryptography/__about__.py
+++ b/cryptography/__about__.py
@@ -28,4 +28,4 @@
__email__ = "[email protected]"
__license__ = "Apache License, Version 2.0"
-__copyright__ = "Copyright 2013-2014 %s" % __author__
+__copyright__ = "Copyright 2013-2014 {0}".format(__author__)
diff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py
--- a/cryptography/hazmat/primitives/ciphers/modes.py
+++ b/cryptography/hazmat/primitives/ciphers/modes.py
@@ -97,13 +97,16 @@
class GCM(object):
name = "GCM"
- def __init__(self, initialization_vector, tag=None):
+ def __init__(self, initialization_vector, tag=None, min_tag_length=16):
# len(initialization_vector) must in [1, 2 ** 64), but it's impossible
# to actually construct a bytes object that large, so we don't check
# for it
- if tag is not None and len(tag) < 4:
+ if min_tag_length < 4:
+ raise ValueError("min_tag_length must be >= 4")
+ if tag is not None and len(tag) < min_tag_length:
raise ValueError(
- "Authentication tag must be 4 bytes or longer."
+ "Authentication tag must be {0} bytes or longer.".format(
+ min_tag_length)
)
self.initialization_vector = initialization_vector
| {"golden_diff": "diff --git a/cryptography/__about__.py b/cryptography/__about__.py\n--- a/cryptography/__about__.py\n+++ b/cryptography/__about__.py\n@@ -28,4 +28,4 @@\n __email__ = \"[email protected]\"\n \n __license__ = \"Apache License, Version 2.0\"\n-__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n+__copyright__ = \"Copyright 2013-2014 {0}\".format(__author__)\ndiff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py\n--- a/cryptography/hazmat/primitives/ciphers/modes.py\n+++ b/cryptography/hazmat/primitives/ciphers/modes.py\n@@ -97,13 +97,16 @@\n class GCM(object):\n name = \"GCM\"\n \n- def __init__(self, initialization_vector, tag=None):\n+ def __init__(self, initialization_vector, tag=None, min_tag_length=16):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n- if tag is not None and len(tag) < 4:\n+ if min_tag_length < 4:\n+ raise ValueError(\"min_tag_length must be >= 4\")\n+ if tag is not None and len(tag) < min_tag_length:\n raise ValueError(\n- \"Authentication tag must be 4 bytes or longer.\"\n+ \"Authentication tag must be {0} bytes or longer.\".format(\n+ min_tag_length)\n )\n \n self.initialization_vector = initialization_vector\n", "issue": "Make it harder to allow truncation on a GCM tag\n`GCM()` should take a `min_tag_length` parameter, which defaults to `GCM.MAX_TAG_LENGTH`, then we can make it error out on truncated tags by default, while still allowing truncation for use cases that need it.\n\nThis is technically a backwards compatibility break, but I argue that it should be allowed as a security hardening measure. Thoughts?\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function\n\n__all__ = [\n \"__title__\", \"__summary__\", \"__uri__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n]\n\n__title__ = \"cryptography\"\n__summary__ = (\"cryptography is a package which provides cryptographic recipes\"\n \" and primitives to Python developers.\")\n__uri__ = \"https://github.com/pyca/cryptography\"\n\n__version__ = \"0.5.dev1\"\n\n__author__ = \"The cryptography developers\"\n__email__ = \"[email protected]\"\n\n__license__ = \"Apache License, Version 2.0\"\n__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n", "path": "cryptography/__about__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing 
permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import interfaces\n\n\ndef _check_iv_length(mode, algorithm):\n if len(mode.initialization_vector) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid IV size ({0}) for {1}.\".format(\n len(mode.initialization_vector), mode.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CBC(object):\n name = \"CBC\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\nclass ECB(object):\n name = \"ECB\"\n\n def validate_for_algorithm(self, algorithm):\n pass\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass OFB(object):\n name = \"OFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB(object):\n name = \"CFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB8(object):\n name = \"CFB8\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithNonce)\nclass CTR(object):\n name = \"CTR\"\n\n def __init__(self, nonce):\n self.nonce = nonce\n\n def validate_for_algorithm(self, algorithm):\n if len(self.nonce) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid nonce size ({0}) for {1}.\".format(\n len(self.nonce), self.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\[email protected]_interface(interfaces.ModeWithAuthenticationTag)\nclass GCM(object):\n name = \"GCM\"\n\n def __init__(self, initialization_vector, tag=None):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n if tag is not None and len(tag) < 4:\n raise ValueError(\n \"Authentication tag must be 4 bytes or longer.\"\n )\n\n self.initialization_vector = initialization_vector\n self.tag = tag\n\n def validate_for_algorithm(self, algorithm):\n pass\n", "path": "cryptography/hazmat/primitives/ciphers/modes.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, 
print_function\n\n__all__ = [\n \"__title__\", \"__summary__\", \"__uri__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n]\n\n__title__ = \"cryptography\"\n__summary__ = (\"cryptography is a package which provides cryptographic recipes\"\n \" and primitives to Python developers.\")\n__uri__ = \"https://github.com/pyca/cryptography\"\n\n__version__ = \"0.5.dev1\"\n\n__author__ = \"The cryptography developers\"\n__email__ = \"[email protected]\"\n\n__license__ = \"Apache License, Version 2.0\"\n__copyright__ = \"Copyright 2013-2014 {0}\".format(__author__)\n", "path": "cryptography/__about__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import interfaces\n\n\ndef _check_iv_length(mode, algorithm):\n if len(mode.initialization_vector) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid IV size ({0}) for {1}.\".format(\n len(mode.initialization_vector), mode.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CBC(object):\n name = \"CBC\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\nclass ECB(object):\n name = \"ECB\"\n\n def validate_for_algorithm(self, algorithm):\n pass\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass OFB(object):\n name = \"OFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB(object):\n name = \"CFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB8(object):\n name = \"CFB8\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithNonce)\nclass CTR(object):\n name = \"CTR\"\n\n def __init__(self, nonce):\n self.nonce = nonce\n\n def validate_for_algorithm(self, algorithm):\n if len(self.nonce) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid nonce size ({0}) for {1}.\".format(\n len(self.nonce), self.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\[email protected]_interface(interfaces.ModeWithAuthenticationTag)\nclass GCM(object):\n name = 
\"GCM\"\n\n def __init__(self, initialization_vector, tag=None, min_tag_length=16):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n if min_tag_length < 4:\n raise ValueError(\"min_tag_length must be >= 4\")\n if tag is not None and len(tag) < min_tag_length:\n raise ValueError(\n \"Authentication tag must be {0} bytes or longer.\".format(\n min_tag_length)\n )\n\n self.initialization_vector = initialization_vector\n self.tag = tag\n\n def validate_for_algorithm(self, algorithm):\n pass\n", "path": "cryptography/hazmat/primitives/ciphers/modes.py"}]} | 1,715 | 397 |
gh_patches_debug_14941 | rasdani/github-patches | git_diff | ansible__ansible-42231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
influxdb_query fails with 'NoneType' object is not iterable
<!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
influxdb_query throws an error when trying to create/drop a continuous query in influxdb using influxdb_query module but the continuous query gets created/deleted successfully nevertheless.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path-->
influxdb_query
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
ansible 2.5.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/plamen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.5 (default, May 11 2018, 04:00:52) [GCC 8.1.0]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = skippy
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
Arch Linux
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: InfluxDB | Create InfluxDB DB
influxdb_database:
database_name: "mydb"
- name: InfluxDB | Create InfluxDB Retention Policy
influxdb_retention_policy:
database_name: "mydb"
policy_name: "aggregated"
default: no
replication: 1
duration: "400d"
- name: InfluxDB | Create InfluxDB Continuous Query
influxdb_query:
database_name: "mydb"
query: "CREATE CONTINUOUS QUERY host_router ON mydb BEGIN select sum(activeQueuedInteractions) as activeQueuedInteractions, sum(loggedUsers) as loggedUsers INTO aggregated.host_router FROM router GROUP BY hostname, time(10s, 1m) END"
```
I get the same result when I try to drop the continuous query - the continuous query gets deleted but ansible fails with the same error
```yaml
- name: InfluxDB | Drop InfluxDB Continuous Queries
influxdb_query:
database_name: "{{ cluster_name }}"
query: "DROP CONTINUOUS QUERY host_router ON {{ cluster_name }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The continuous query should be created successfully and Ansible should not fail.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
It looks like influxdb_query manages to create/drop the continuous query but it throws the following error:
<!--- Paste verbatim command output between quotes below -->
```
ERROR! Unexpected Exception, this is probably a bug: 'NoneType' object is not iterable
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 118, in <module>
exit_code = cli.run()
File "/usr/lib/python3.6/site-packages/ansible/cli/playbook.py", line 122, in run
results = pbex.run()
File "/usr/lib/python3.6/site-packages/ansible/executor/playbook_executor.py", line 159, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3.6/site-packages/ansible/executor/task_queue_manager.py", line 289, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3.6/site-packages/ansible/plugins/strategy/linear.py", line 321, in run
results += self._wait_on_pending_results(iterator)
File "/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py", line 674, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py", line 117, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
File "/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py", line 447, in _process_pending_results
if task_result.is_failed() or task_result.is_unreachable():
File "/usr/lib/python3.6/site-packages/ansible/executor/task_result.py", line 59, in is_failed
'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:
TypeError: 'NoneType' object is not iterable
```
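The mechanism is visible in the last frame: statements like `CREATE CONTINUOUS QUERY` or `DROP CONTINUOUS QUERY` produce no result rows, so `read_by_query()` falls through its `if rs:` branch and returns `None`; the module then calls `exit_json(changed=True, results=None)`, and the list comprehension in `TaskResult.is_failed()` iterates over that `None`. A minimal sketch of the same situation in plain Python (values invented for illustration, not Ansible's actual code):

```python
# Hypothetical module output for a statement that returns no points:
module_result = {"changed": True, "results": None}

# TaskResult.is_failed() does essentially this with the task's result dict
# (condensed from the last frame of the traceback above):
failed = (
    "results" in module_result
    and True in [True for x in module_result["results"] if "failed_when_result" in x]
)
# Raises: TypeError: 'NoneType' object is not iterable
```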
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/database/influxdb/influxdb_query.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 # Copyright (c) 2017, René Moser <[email protected]>
4 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
5
6 from __future__ import absolute_import, division, print_function
7 __metaclass__ = type
8
9
10 ANSIBLE_METADATA = {'metadata_version': '1.1',
11 'status': ['preview'],
12 'supported_by': 'community'}
13
14
15 DOCUMENTATION = '''
16 ---
17 module: influxdb_query
18 short_description: Query data points from InfluxDB.
19 description:
20 - Query data points from InfluxDB.
21 version_added: 2.5
22 author: "René Moser (@resmo)"
23 requirements:
24 - "python >= 2.6"
25 - "influxdb >= 0.9"
26 options:
27 query:
28 description:
29 - Query to be executed.
30 required: true
31 database_name:
32 description:
33 - Name of the database.
34 required: true
35 extends_documentation_fragment: influxdb
36 '''
37
38 EXAMPLES = r'''
39 - name: Query connections
40 influxdb_query:
41 hostname: "{{ influxdb_ip_address }}"
42 database_name: "{{ influxdb_database_name }}"
43 query: "select mean(value) from connections"
44 register: connection
45
46 - name: Query connections with tags filters
47 influxdb_query:
48 hostname: "{{ influxdb_ip_address }}"
49 database_name: "{{ influxdb_database_name }}"
50 query: "select mean(value) from connections where region='zue01' and host='server01'"
51 register: connection
52
53 - name: Print results from the query
54 debug:
55 var: connection.results
56 '''
57
58 RETURN = '''
59 results:
60 description: Result from the query
61 returned: success
62 type: list
63 sample:
64 - mean: 1245.5333333333333
65 time: "1970-01-01T00:00:00Z"
66 '''
67
68 from ansible.module_utils.basic import AnsibleModule
69 from ansible.module_utils._text import to_native
70 from ansible.module_utils.influxdb import InfluxDb
71
72
73 class AnsibleInfluxDBRead(InfluxDb):
74
75 def read_by_query(self, query):
76 client = self.connect_to_influxdb()
77 try:
78 rs = client.query(query)
79 if rs:
80 return list(rs.get_points())
81 except Exception as e:
82 self.module.fail_json(msg=to_native(e))
83
84
85 def main():
86 argument_spec = InfluxDb.influxdb_argument_spec()
87 argument_spec.update(
88 query=dict(type='str', required=True),
89 database_name=dict(required=True, type='str'),
90 )
91 module = AnsibleModule(
92 argument_spec=argument_spec,
93 supports_check_mode=True
94 )
95
96 influx = AnsibleInfluxDBRead(module)
97 query = module.params.get('query')
98 results = influx.read_by_query(query)
99 module.exit_json(changed=True, results=results)
100
101
102 if __name__ == '__main__':
103 main()
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/modules/database/influxdb/influxdb_query.py b/lib/ansible/modules/database/influxdb/influxdb_query.py
--- a/lib/ansible/modules/database/influxdb/influxdb_query.py
+++ b/lib/ansible/modules/database/influxdb/influxdb_query.py
@@ -52,11 +52,11 @@
- name: Print results from the query
debug:
- var: connection.results
+ var: connection.query_results
'''
RETURN = '''
-results:
+query_results:
description: Result from the query
returned: success
type: list
@@ -96,7 +96,7 @@
influx = AnsibleInfluxDBRead(module)
query = module.params.get('query')
results = influx.read_by_query(query)
- module.exit_json(changed=True, results=results)
+ module.exit_json(changed=True, query_results=results)
if __name__ == '__main__':
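In short, the patch renames the module's return key from `results` to `query_results` (updating the example and `RETURN` documentation to match), so the value no longer collides with the `results` key that Ansible's `TaskResult.is_failed()` treats specially. The registered variable after the change would look roughly like this (shape only; sample values taken from the module docs):

```python
# Illustrative shape of the registered variable after the rename:
connection = {
    "changed": True,
    "query_results": [
        {"mean": 1245.5333333333333, "time": "1970-01-01T00:00:00Z"},
    ],
}
points = connection["query_results"]  # formerly connection["results"]
```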
| {"golden_diff": "diff --git a/lib/ansible/modules/database/influxdb/influxdb_query.py b/lib/ansible/modules/database/influxdb/influxdb_query.py\n--- a/lib/ansible/modules/database/influxdb/influxdb_query.py\n+++ b/lib/ansible/modules/database/influxdb/influxdb_query.py\n@@ -52,11 +52,11 @@\n \n - name: Print results from the query\n debug:\n- var: connection.results\n+ var: connection.query_results\n '''\n \n RETURN = '''\n-results:\n+query_results:\n description: Result from the query\n returned: success\n type: list\n@@ -96,7 +96,7 @@\n influx = AnsibleInfluxDBRead(module)\n query = module.params.get('query')\n results = influx.read_by_query(query)\n- module.exit_json(changed=True, results=results)\n+ module.exit_json(changed=True, query_results=results)\n \n \n if __name__ == '__main__':\n", "issue": "influxdb_query fails with 'NoneType' object is not iterable\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. -->\r\n\r\n##### SUMMARY\r\ninfluxdb_query throws an error when trying to create/drop a continuous query in influxdb using influxdb_query module but the continuous query gets created/deleted successfully nevertheless.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.\r\nDo not include extra details here, e.g. \"vyos_command\" not \"the network module vyos_command\" or the full path-->\r\ninfluxdb_query\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.5.5\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/plamen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.6/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.6.5 (default, May 11 2018, 04:00:52) [GCC 8.1.0]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\nDEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = skippy\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\nArch Linux\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. 
-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: InfluxDB | Create InfluxDB DB\r\n influxdb_database:\r\n database_name: \"mydb\"\r\n\r\n- name: InfluxDB | Create InfluxDB Retention Policy\r\n influxdb_retention_policy:\r\n database_name: \"mydb\"\r\n policy_name: \"aggregated\"\r\n default: no\r\n replication: 1\r\n duration: \"400d\"\r\n\r\n- name: InfluxDB | Create InfluxDB Continuous Query\r\n influxdb_query:\r\n database_name: \"mydb\"\r\n query: \"CREATE CONTINUOUS QUERY host_router ON mydb BEGIN select sum(activeQueuedInteractions) as activeQueuedInteractions, sum(loggedUsers) as loggedUsers INTO aggregated.host_router FROM router GROUP BY hostname, time(10s, 1m) END\"\r\n```\r\nI get the same result when I try to drop the continuous query - the continuous query gets deleted but ansible fails with the same error\r\n```yaml\r\n- name: InfluxDB | Drop InfluxDB Continuous Queries\r\n influxdb_query:\r\n database_name: \"{{ cluster_name }}\"\r\n query: \"DROP CONTINUOUS QUERY host_router ON {{ cluster_name }}\"\r\n```\r\n\r\n\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nThe continuous query should be created successfully and Ansible should not fail.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\nIt looks like influxdb_query manages to create/drop the continuous query but it throws the following error:\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nERROR! Unexpected Exception, this is probably a bug: 'NoneType' object is not iterable\r\nthe full traceback was:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/bin/ansible-playbook\", line 118, in <module>\r\n exit_code = cli.run()\r\n File \"/usr/lib/python3.6/site-packages/ansible/cli/playbook.py\", line 122, in run\r\n results = pbex.run()\r\n File \"/usr/lib/python3.6/site-packages/ansible/executor/playbook_executor.py\", line 159, in run\r\n result = self._tqm.run(play=play)\r\n File \"/usr/lib/python3.6/site-packages/ansible/executor/task_queue_manager.py\", line 289, in run\r\n play_return = strategy.run(iterator, play_context)\r\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/linear.py\", line 321, in run\r\n results += self._wait_on_pending_results(iterator)\r\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 674, in _wait_on_pending_results\r\n results = self._process_pending_results(iterator)\r\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 117, in inner\r\n results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)\r\n File \"/usr/lib/python3.6/site-packages/ansible/plugins/strategy/__init__.py\", line 447, in _process_pending_results\r\n if task_result.is_failed() or task_result.is_unreachable():\r\n File \"/usr/lib/python3.6/site-packages/ansible/executor/task_result.py\", line 59, in is_failed\r\n 'results' in self._result and True in [True for x in self._result['results'] if 'failed_when_result' in x]:\r\nTypeError: 'NoneType' object is not iterable\r\n\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n# Copyright (c) 2017, Ren\u00e9 Moser <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = 
{'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: influxdb_query\nshort_description: Query data points from InfluxDB.\ndescription:\n - Query data points from InfluxDB.\nversion_added: 2.5\nauthor: \"Ren\u00e9 Moser (@resmo)\"\nrequirements:\n - \"python >= 2.6\"\n - \"influxdb >= 0.9\"\noptions:\n query:\n description:\n - Query to be executed.\n required: true\n database_name:\n description:\n - Name of the database.\n required: true\nextends_documentation_fragment: influxdb\n'''\n\nEXAMPLES = r'''\n- name: Query connections\n influxdb_query:\n hostname: \"{{ influxdb_ip_address }}\"\n database_name: \"{{ influxdb_database_name }}\"\n query: \"select mean(value) from connections\"\n register: connection\n\n- name: Query connections with tags filters\n influxdb_query:\n hostname: \"{{ influxdb_ip_address }}\"\n database_name: \"{{ influxdb_database_name }}\"\n query: \"select mean(value) from connections where region='zue01' and host='server01'\"\n register: connection\n\n- name: Print results from the query\n debug:\n var: connection.results\n'''\n\nRETURN = '''\nresults:\n description: Result from the query\n returned: success\n type: list\n sample:\n - mean: 1245.5333333333333\n time: \"1970-01-01T00:00:00Z\"\n'''\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils._text import to_native\nfrom ansible.module_utils.influxdb import InfluxDb\n\n\nclass AnsibleInfluxDBRead(InfluxDb):\n\n def read_by_query(self, query):\n client = self.connect_to_influxdb()\n try:\n rs = client.query(query)\n if rs:\n return list(rs.get_points())\n except Exception as e:\n self.module.fail_json(msg=to_native(e))\n\n\ndef main():\n argument_spec = InfluxDb.influxdb_argument_spec()\n argument_spec.update(\n query=dict(type='str', required=True),\n database_name=dict(required=True, type='str'),\n )\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n influx = AnsibleInfluxDBRead(module)\n query = module.params.get('query')\n results = influx.read_by_query(query)\n module.exit_json(changed=True, results=results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/database/influxdb/influxdb_query.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n# Copyright (c) 2017, Ren\u00e9 Moser <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: influxdb_query\nshort_description: Query data points from InfluxDB.\ndescription:\n - Query data points from InfluxDB.\nversion_added: 2.5\nauthor: \"Ren\u00e9 Moser (@resmo)\"\nrequirements:\n - \"python >= 2.6\"\n - \"influxdb >= 0.9\"\noptions:\n query:\n description:\n - Query to be executed.\n required: true\n database_name:\n description:\n - Name of the database.\n required: true\nextends_documentation_fragment: influxdb\n'''\n\nEXAMPLES = r'''\n- name: Query connections\n influxdb_query:\n hostname: \"{{ influxdb_ip_address }}\"\n database_name: \"{{ influxdb_database_name }}\"\n query: \"select mean(value) from connections\"\n register: connection\n\n- name: Query connections with tags filters\n influxdb_query:\n hostname: \"{{ influxdb_ip_address }}\"\n database_name: \"{{ 
influxdb_database_name }}\"\n query: \"select mean(value) from connections where region='zue01' and host='server01'\"\n register: connection\n\n- name: Print results from the query\n debug:\n var: connection.query_results\n'''\n\nRETURN = '''\nquery_results:\n description: Result from the query\n returned: success\n type: list\n sample:\n - mean: 1245.5333333333333\n time: \"1970-01-01T00:00:00Z\"\n'''\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils._text import to_native\nfrom ansible.module_utils.influxdb import InfluxDb\n\n\nclass AnsibleInfluxDBRead(InfluxDb):\n\n def read_by_query(self, query):\n client = self.connect_to_influxdb()\n try:\n rs = client.query(query)\n if rs:\n return list(rs.get_points())\n except Exception as e:\n self.module.fail_json(msg=to_native(e))\n\n\ndef main():\n argument_spec = InfluxDb.influxdb_argument_spec()\n argument_spec.update(\n query=dict(type='str', required=True),\n database_name=dict(required=True, type='str'),\n )\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n influx = AnsibleInfluxDBRead(module)\n query = module.params.get('query')\n results = influx.read_by_query(query)\n module.exit_json(changed=True, query_results=results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/database/influxdb/influxdb_query.py"}]} | 2,491 | 212 |
gh_patches_debug_17152 | rasdani/github-patches | git_diff | learningequality__kolibri-11257 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Learn-only device - The 'Full name' value entered in the setup wizard is missing
## Observed behavior
If I create a new learner user through the 'LOD>Create a new user account for an existing facility' user flow, then the 'Full name' value is missing from both the learner's profile page and the user details in Facility > Users.

## Expected behavior
The 'Full name' value should be stored and displayed correctly as it is a required field.
## Steps to reproduce the issue
1. Install the latest [Beta 4 build](https://github.com/learningequality/kolibri/releases/tag/v0.16.0-beta4).
2. On one VM create a Full facility
3. On another VM go through the 'LOD>Create a new user account for an existing facility' user flow
## More information
https://github.com/learningequality/kolibri/assets/79847249/0a5e978b-ee99-481b-8ce3-d5c4d166eef2
## Usage Details
Kolibri 0.16beta4
Windows 10, Ubuntu - Chrome, Firefox
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/setup_wizard/api.py`
Content:
```
1 import requests
2 from django.urls import reverse
3 from rest_framework import decorators
4 from rest_framework.exceptions import NotFound
5 from rest_framework.exceptions import PermissionDenied
6 from rest_framework.exceptions import ValidationError
7 from rest_framework.permissions import BasePermission
8 from rest_framework.response import Response
9 from rest_framework.viewsets import ViewSet
10
11 from kolibri.core.auth.constants import user_kinds
12 from kolibri.core.auth.models import Facility
13 from kolibri.core.auth.models import FacilityUser
14 from kolibri.core.auth.utils.users import get_remote_users_info
15 from kolibri.core.device.models import DevicePermissions
16
17
18 # Basic class that makes these endpoints unusable if device is provisioned
19 class HasPermissionDuringSetup(BasePermission):
20 def has_permission(self, request, view):
21 from kolibri.core.device.utils import device_provisioned
22
23 return not device_provisioned()
24
25
26 class HasPermissionDuringLODSetup(BasePermission):
27 def has_permission(self, request, view):
28 from kolibri.core.device.utils import get_device_setting
29
30 return get_device_setting("subset_of_users_device")
31
32
33 class SetupWizardResource(ViewSet):
34 """
35 Generic endpoints for use during various setup wizard onboarding flows
36 """
37
38 permission_classes = (HasPermissionDuringSetup,)
39
40 @decorators.action(methods=["post"], detail=False)
41 def createuseronremote(self, request):
42 facility_id = request.data.get("facility_id", None)
43 username = request.data.get("username", None)
44 password = request.data.get("password", None)
45 baseurl = request.data.get("baseurl", None)
46
47 api_url = reverse("kolibri:core:publicsignup-list")
48
49 url = "{}{}".format(baseurl, api_url)
50
51 payload = {
52 "facility_id": facility_id,
53 "username": username,
54 "password": password,
55 }
56
57 r = requests.post(url, data=payload)
58 return Response({"status": r.status_code, "data": r.content})
59
60
61 class FacilityImportViewSet(ViewSet):
62 """
63 A group of endpoints that are used by the SetupWizard to import a facility
64 and create a superuser
65 """
66
67 permission_classes = (HasPermissionDuringSetup,)
68
69 @decorators.action(methods=["get"], detail=False)
70 def facilityadmins(self, request):
71 # The filter is very loose, since we are assuming that the only
72 # users are from the new facility
73 queryset = FacilityUser.objects.filter(roles__kind__contains="admin")
74 response_data = [
75 {"full_name": user.full_name, "username": user.username, "id": user.id}
76 for user in queryset
77 ]
78 return Response(response_data)
79
80 @decorators.action(methods=["post"], detail=False)
81 def grantsuperuserpermissions(self, request):
82 """
83 Given a user ID and credentials, create a superuser DevicePermissions record
84 """
85 user_id = request.data.get("user_id", "")
86 password = request.data.get("password", "")
87
88 # Get the Facility User object
89 try:
90 facilityuser = FacilityUser.objects.get(id=user_id)
91 except (Exception, FacilityUser.DoesNotExist):
92 raise NotFound()
93
94 # Test for password and admin role
95 if (
96 not facilityuser.check_password(password)
97 or user_kinds.ADMIN not in facilityuser.session_data["kind"]
98 ):
99 raise PermissionDenied()
100
101 # If it succeeds, create a DevicePermissions model for the user
102 DevicePermissions.objects.update_or_create(
103 user=facilityuser,
104 defaults={"is_superuser": True, "can_manage_content": True},
105 )
106
107 # Finally: return a simple 200 so UI can continue on
108 return Response({"user_id": user_id})
109
110 @decorators.action(methods=["post"], detail=False)
111 def createsuperuser(self, request):
112 """
113 Given a username, full name and password, create a superuser attached
114 to the facility that was imported (or create a facility with given facility_name)
115 """
116 facility_name = request.data.get("facility_name", None)
117
118 # Get the imported facility (assuming its the only one at this point)
119 if Facility.objects.count() == 0:
120 the_facility = Facility.objects.create(name=facility_name)
121 else:
122 the_facility = Facility.objects.get()
123 if facility_name:
124 the_facility.name = facility_name
125 the_facility.save()
126
127 try:
128 superuser = FacilityUser.objects.create_superuser(
129 request.data.get("username"),
130 request.data.get("password"),
131 facility=the_facility,
132 full_name=request.data.get("full_name"),
133 )
134 return Response({"username": superuser.username})
135
136 except ValidationError:
137 raise ValidationError(detail="duplicate", code="duplicate_username")
138
139 @decorators.action(methods=["post"], detail=False)
140 def listfacilitylearners(self, request):
141 """
142 If the request is done by an admin user it will return a list of the users of the
143 facility
144
145 :param baseurl: First part of the url of the server that's going to be requested
146 :param facility_id: Id of the facility to authenticate and get the list of users
147 :param username: Username of the user that's going to authenticate
148 :param password: Password of the user that's going to authenticate
149 :return: List of the learners of the facility.
150 """
151 facility_id = request.data.get("facility_id")
152 baseurl = request.data.get("baseurl")
153 password = request.data.get("password")
154 username = request.data.get("username")
155 facility_info = get_remote_users_info(baseurl, facility_id, username, password)
156 user_info = facility_info["user"]
157 roles = user_info["roles"]
158 admin_roles = (user_kinds.ADMIN, user_kinds.SUPERUSER)
159 if not any(role in roles for role in admin_roles):
160 raise PermissionDenied()
161 students = [u for u in facility_info["users"] if not u["roles"]]
162 return Response({"students": students, "admin": facility_info["user"]})
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/plugins/setup_wizard/api.py b/kolibri/plugins/setup_wizard/api.py
--- a/kolibri/plugins/setup_wizard/api.py
+++ b/kolibri/plugins/setup_wizard/api.py
@@ -42,6 +42,7 @@
facility_id = request.data.get("facility_id", None)
username = request.data.get("username", None)
password = request.data.get("password", None)
+ full_name = request.data.get("full_name", "")
baseurl = request.data.get("baseurl", None)
api_url = reverse("kolibri:core:publicsignup-list")
@@ -52,6 +53,7 @@
"facility_id": facility_id,
"username": username,
"password": password,
+ "full_name": full_name,
}
r = requests.post(url, data=payload)
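The fix is a straightforward pass-through: `createuseronremote` now reads `full_name` from the incoming request (defaulting to an empty string) and forwards it in the payload posted to the remote `publicsignup` endpoint, so the account created on the facility server keeps the name entered in the wizard. A rough sketch of the payload with placeholder values:

```python
# Before the patch, full_name never reached the facility server:
payload = {"facility_id": "f4c1l1ty", "username": "learner1", "password": "s3cret"}

# After the patch (assuming the wizard frontend already sends full_name in the request body):
payload = {
    "facility_id": "f4c1l1ty",
    "username": "learner1",
    "password": "s3cret",
    "full_name": "Ada Lovelace",
}
```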
| {"golden_diff": "diff --git a/kolibri/plugins/setup_wizard/api.py b/kolibri/plugins/setup_wizard/api.py\n--- a/kolibri/plugins/setup_wizard/api.py\n+++ b/kolibri/plugins/setup_wizard/api.py\n@@ -42,6 +42,7 @@\n facility_id = request.data.get(\"facility_id\", None)\n username = request.data.get(\"username\", None)\n password = request.data.get(\"password\", None)\n+ full_name = request.data.get(\"full_name\", \"\")\n baseurl = request.data.get(\"baseurl\", None)\n \n api_url = reverse(\"kolibri:core:publicsignup-list\")\n@@ -52,6 +53,7 @@\n \"facility_id\": facility_id,\n \"username\": username,\n \"password\": password,\n+ \"full_name\": full_name,\n }\n \n r = requests.post(url, data=payload)\n", "issue": "Learn-only device - The 'Full name' value entered in the setup wizard is missing\n## Observed behavior\r\nIf I create a new learner user through the 'LOD>Create a new user account for an existing facility' user flow then the 'Full name' value is missing from both the learner's profile page and the user details in Facility > Users\r\n\r\n\r\n\r\n## Expected behavior\r\nThe 'Full name' value should be stored and displayed correctly as it is a required field.\r\n\r\n## Steps to reproduce the issue\r\n1. Install the latest [Beta 4 build](https://github.com/learningequality/kolibri/releases/tag/v0.16.0-beta4).\r\n2. On one VM create a Full facility\r\n3. On another VM go through the 'LOD>Create a new user account for an existing facility' user flow\r\n\r\n## More information\r\n\r\nhttps://github.com/learningequality/kolibri/assets/79847249/0a5e978b-ee99-481b-8ce3-d5c4d166eef2\r\n\r\n## Usage Details\r\nKolibri 0.16beta4\r\nWindows 10, Ubuntu - Chrome, Firefox\n", "before_files": [{"content": "import requests\nfrom django.urls import reverse\nfrom rest_framework import decorators\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.permissions import BasePermission\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ViewSet\n\nfrom kolibri.core.auth.constants import user_kinds\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.utils.users import get_remote_users_info\nfrom kolibri.core.device.models import DevicePermissions\n\n\n# Basic class that makes these endpoints unusable if device is provisioned\nclass HasPermissionDuringSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import device_provisioned\n\n return not device_provisioned()\n\n\nclass HasPermissionDuringLODSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import get_device_setting\n\n return get_device_setting(\"subset_of_users_device\")\n\n\nclass SetupWizardResource(ViewSet):\n \"\"\"\n Generic endpoints for use during various setup wizard onboarding flows\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createuseronremote(self, request):\n facility_id = request.data.get(\"facility_id\", None)\n username = request.data.get(\"username\", None)\n password = request.data.get(\"password\", None)\n baseurl = request.data.get(\"baseurl\", None)\n\n api_url = reverse(\"kolibri:core:publicsignup-list\")\n\n url = \"{}{}\".format(baseurl, api_url)\n\n payload = {\n \"facility_id\": facility_id,\n \"username\": username,\n \"password\": 
password,\n }\n\n r = requests.post(url, data=payload)\n return Response({\"status\": r.status_code, \"data\": r.content})\n\n\nclass FacilityImportViewSet(ViewSet):\n \"\"\"\n A group of endpoints that are used by the SetupWizard to import a facility\n and create a superuser\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"get\"], detail=False)\n def facilityadmins(self, request):\n # The filter is very loose, since we are assuming that the only\n # users are from the new facility\n queryset = FacilityUser.objects.filter(roles__kind__contains=\"admin\")\n response_data = [\n {\"full_name\": user.full_name, \"username\": user.username, \"id\": user.id}\n for user in queryset\n ]\n return Response(response_data)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def grantsuperuserpermissions(self, request):\n \"\"\"\n Given a user ID and credentials, create a superuser DevicePermissions record\n \"\"\"\n user_id = request.data.get(\"user_id\", \"\")\n password = request.data.get(\"password\", \"\")\n\n # Get the Facility User object\n try:\n facilityuser = FacilityUser.objects.get(id=user_id)\n except (Exception, FacilityUser.DoesNotExist):\n raise NotFound()\n\n # Test for password and admin role\n if (\n not facilityuser.check_password(password)\n or user_kinds.ADMIN not in facilityuser.session_data[\"kind\"]\n ):\n raise PermissionDenied()\n\n # If it succeeds, create a DevicePermissions model for the user\n DevicePermissions.objects.update_or_create(\n user=facilityuser,\n defaults={\"is_superuser\": True, \"can_manage_content\": True},\n )\n\n # Finally: return a simple 200 so UI can continue on\n return Response({\"user_id\": user_id})\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createsuperuser(self, request):\n \"\"\"\n Given a username, full name and password, create a superuser attached\n to the facility that was imported (or create a facility with given facility_name)\n \"\"\"\n facility_name = request.data.get(\"facility_name\", None)\n\n # Get the imported facility (assuming its the only one at this point)\n if Facility.objects.count() == 0:\n the_facility = Facility.objects.create(name=facility_name)\n else:\n the_facility = Facility.objects.get()\n if facility_name:\n the_facility.name = facility_name\n the_facility.save()\n\n try:\n superuser = FacilityUser.objects.create_superuser(\n request.data.get(\"username\"),\n request.data.get(\"password\"),\n facility=the_facility,\n full_name=request.data.get(\"full_name\"),\n )\n return Response({\"username\": superuser.username})\n\n except ValidationError:\n raise ValidationError(detail=\"duplicate\", code=\"duplicate_username\")\n\n @decorators.action(methods=[\"post\"], detail=False)\n def listfacilitylearners(self, request):\n \"\"\"\n If the request is done by an admin user it will return a list of the users of the\n facility\n\n :param baseurl: First part of the url of the server that's going to be requested\n :param facility_id: Id of the facility to authenticate and get the list of users\n :param username: Username of the user that's going to authenticate\n :param password: Password of the user that's going to authenticate\n :return: List of the learners of the facility.\n \"\"\"\n facility_id = request.data.get(\"facility_id\")\n baseurl = request.data.get(\"baseurl\")\n password = request.data.get(\"password\")\n username = request.data.get(\"username\")\n facility_info = get_remote_users_info(baseurl, facility_id, username, password)\n user_info = 
facility_info[\"user\"]\n roles = user_info[\"roles\"]\n admin_roles = (user_kinds.ADMIN, user_kinds.SUPERUSER)\n if not any(role in roles for role in admin_roles):\n raise PermissionDenied()\n students = [u for u in facility_info[\"users\"] if not u[\"roles\"]]\n return Response({\"students\": students, \"admin\": facility_info[\"user\"]})\n", "path": "kolibri/plugins/setup_wizard/api.py"}], "after_files": [{"content": "import requests\nfrom django.urls import reverse\nfrom rest_framework import decorators\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.permissions import BasePermission\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ViewSet\n\nfrom kolibri.core.auth.constants import user_kinds\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.utils.users import get_remote_users_info\nfrom kolibri.core.device.models import DevicePermissions\n\n\n# Basic class that makes these endpoints unusable if device is provisioned\nclass HasPermissionDuringSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import device_provisioned\n\n return not device_provisioned()\n\n\nclass HasPermissionDuringLODSetup(BasePermission):\n def has_permission(self, request, view):\n from kolibri.core.device.utils import get_device_setting\n\n return get_device_setting(\"subset_of_users_device\")\n\n\nclass SetupWizardResource(ViewSet):\n \"\"\"\n Generic endpoints for use during various setup wizard onboarding flows\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createuseronremote(self, request):\n facility_id = request.data.get(\"facility_id\", None)\n username = request.data.get(\"username\", None)\n password = request.data.get(\"password\", None)\n full_name = request.data.get(\"full_name\", \"\")\n baseurl = request.data.get(\"baseurl\", None)\n\n api_url = reverse(\"kolibri:core:publicsignup-list\")\n\n url = \"{}{}\".format(baseurl, api_url)\n\n payload = {\n \"facility_id\": facility_id,\n \"username\": username,\n \"password\": password,\n \"full_name\": full_name,\n }\n\n r = requests.post(url, data=payload)\n return Response({\"status\": r.status_code, \"data\": r.content})\n\n\nclass FacilityImportViewSet(ViewSet):\n \"\"\"\n A group of endpoints that are used by the SetupWizard to import a facility\n and create a superuser\n \"\"\"\n\n permission_classes = (HasPermissionDuringSetup,)\n\n @decorators.action(methods=[\"get\"], detail=False)\n def facilityadmins(self, request):\n # The filter is very loose, since we are assuming that the only\n # users are from the new facility\n queryset = FacilityUser.objects.filter(roles__kind__contains=\"admin\")\n response_data = [\n {\"full_name\": user.full_name, \"username\": user.username, \"id\": user.id}\n for user in queryset\n ]\n return Response(response_data)\n\n @decorators.action(methods=[\"post\"], detail=False)\n def grantsuperuserpermissions(self, request):\n \"\"\"\n Given a user ID and credentials, create a superuser DevicePermissions record\n \"\"\"\n user_id = request.data.get(\"user_id\", \"\")\n password = request.data.get(\"password\", \"\")\n\n # Get the Facility User object\n try:\n facilityuser = FacilityUser.objects.get(id=user_id)\n except (Exception, FacilityUser.DoesNotExist):\n 
raise NotFound()\n\n # Test for password and admin role\n if (\n not facilityuser.check_password(password)\n or user_kinds.ADMIN not in facilityuser.session_data[\"kind\"]\n ):\n raise PermissionDenied()\n\n # If it succeeds, create a DevicePermissions model for the user\n DevicePermissions.objects.update_or_create(\n user=facilityuser,\n defaults={\"is_superuser\": True, \"can_manage_content\": True},\n )\n\n # Finally: return a simple 200 so UI can continue on\n return Response({\"user_id\": user_id})\n\n @decorators.action(methods=[\"post\"], detail=False)\n def createsuperuser(self, request):\n \"\"\"\n Given a username, full name and password, create a superuser attached\n to the facility that was imported (or create a facility with given facility_name)\n \"\"\"\n facility_name = request.data.get(\"facility_name\", None)\n\n # Get the imported facility (assuming its the only one at this point)\n if Facility.objects.count() == 0:\n the_facility = Facility.objects.create(name=facility_name)\n else:\n the_facility = Facility.objects.get()\n if facility_name:\n the_facility.name = facility_name\n the_facility.save()\n\n try:\n superuser = FacilityUser.objects.create_superuser(\n request.data.get(\"username\"),\n request.data.get(\"password\"),\n facility=the_facility,\n full_name=request.data.get(\"full_name\"),\n )\n return Response({\"username\": superuser.username})\n\n except ValidationError:\n raise ValidationError(detail=\"duplicate\", code=\"duplicate_username\")\n\n @decorators.action(methods=[\"post\"], detail=False)\n def listfacilitylearners(self, request):\n \"\"\"\n If the request is done by an admin user it will return a list of the users of the\n facility\n\n :param baseurl: First part of the url of the server that's going to be requested\n :param facility_id: Id of the facility to authenticate and get the list of users\n :param username: Username of the user that's going to authenticate\n :param password: Password of the user that's going to authenticate\n :return: List of the learners of the facility.\n \"\"\"\n facility_id = request.data.get(\"facility_id\")\n baseurl = request.data.get(\"baseurl\")\n password = request.data.get(\"password\")\n username = request.data.get(\"username\")\n facility_info = get_remote_users_info(baseurl, facility_id, username, password)\n user_info = facility_info[\"user\"]\n roles = user_info[\"roles\"]\n admin_roles = (user_kinds.ADMIN, user_kinds.SUPERUSER)\n if not any(role in roles for role in admin_roles):\n raise PermissionDenied()\n students = [u for u in facility_info[\"users\"] if not u[\"roles\"]]\n return Response({\"students\": students, \"admin\": facility_info[\"user\"]})\n", "path": "kolibri/plugins/setup_wizard/api.py"}]} | 2,253 | 192 |
gh_patches_debug_25684 | rasdani/github-patches | git_diff | freedomofpress__securedrop-2336 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extra level in zip folders for single sources
## Description
There is an extra unnecessary folder for zipfiles downloaded from the journalist interface for single sources.
## Steps to Reproduce
0. Submit some files to SecureDrop
1. Load the home page of the journalist interface
2. Select one source using the checkbox
3. Select "Download"
4. Unzip the resulting file
## Expected Behavior
File structure would ideally be:
`source-name/number_date`
e.g.:
`ecstatic-potato/1_03-20-20`
## Actual Behavior
Instead there is an additional layer:
`ecstatic-potato/ecstatic-potato/1_03-20-20`
## Comment
This additional folder is confusing to users and should be removed.
Note: if you download unread or download all for multiple sources - the outer folder will not be named after the source and will be e.g. `all`:
`all/ecstatic-potato/1_03-20-20`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/store.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os
3 import re
4 import config
5 import zipfile
6 import crypto_util
7 import tempfile
8 import gzip
9 from werkzeug import secure_filename
10
11 from secure_tempfile import SecureTemporaryFile
12
13 import logging
14 log = logging.getLogger(__name__)
15
16 VALIDATE_FILENAME = re.compile(
17 "^(?P<index>\d+)\-[a-z0-9-_]*"
18 "(?P<file_type>msg|doc\.(gz|zip)|reply)\.gpg$").match
19
20
21 class PathException(Exception):
22
23 """An exception raised by `util.verify` when it encounters a bad path. A path
24 can be bad when it is not absolute or not normalized.
25 """
26 pass
27
28
29 def verify(p):
30 """Assert that the path is absolute, normalized, inside `config.STORE_DIR`, and
31 matches the filename format.
32 """
33 if not os.path.isabs(config.STORE_DIR):
34 raise PathException("config.STORE_DIR(%s) is not absolute" % (
35 config.STORE_DIR, ))
36
37 # os.path.abspath makes the path absolute and normalizes
38 # '/foo/../bar' to '/bar', etc. We have to check that the path is
39 # normalized before checking that it starts with the
40 # `config.STORE_DIR` or else a malicious actor could append a
41 # bunch of '../../..' to access files outside of the store.
42 if not p == os.path.abspath(p):
43 raise PathException("The path is not absolute and/or normalized")
44
45 # Check that the path p is in config.STORE_DIR
46 if os.path.relpath(p, config.STORE_DIR).startswith('..'):
47 raise PathException("Invalid directory %s" % (p, ))
48
49 if os.path.isfile(p):
50 filename = os.path.basename(p)
51 ext = os.path.splitext(filename)[-1]
52 if filename == '_FLAG':
53 return True
54 if ext != '.gpg':
55 # if there's an extension, verify it's a GPG
56 raise PathException("Invalid file extension %s" % (ext, ))
57 if not VALIDATE_FILENAME(filename):
58 raise PathException("Invalid filename %s" % (filename, ))
59
60
61 def path(*s):
62 """Get the normalized, absolute file path, within `config.STORE_DIR`."""
63 joined = os.path.join(os.path.abspath(config.STORE_DIR), *s)
64 absolute = os.path.abspath(joined)
65 verify(absolute)
66 return absolute
67
68
69 def get_bulk_archive(selected_submissions, zip_directory=''):
70 """Generate a zip file from the selected submissions"""
71 zip_file = tempfile.NamedTemporaryFile(prefix='tmp_securedrop_bulk_dl_',
72 dir=config.TEMP_DIR,
73 delete=False)
74 sources = set([i.source.journalist_designation
75 for i in selected_submissions])
76 # The below nested for-loops are there to create a more usable
77 # folder structure per #383
78 with zipfile.ZipFile(zip_file, 'w') as zip:
79 for source in sources:
80 submissions = [s for s in selected_submissions
81 if s.source.journalist_designation == source]
82 for submission in submissions:
83 filename = path(submission.source.filesystem_id,
84 submission.filename)
85 verify(filename)
86 document_number = submission.filename.split('-')[0]
87 zip.write(filename, arcname=os.path.join(
88 zip_directory,
89 source,
90 "%s_%s" % (document_number,
91 submission.source.last_updated.date()),
92 os.path.basename(filename)
93 ))
94 return zip_file
95
96
97 def save_file_submission(filesystem_id, count, journalist_filename, filename,
98 stream):
99 sanitized_filename = secure_filename(filename)
100
101 # We store file submissions in a .gz file for two reasons:
102 #
103 # 1. Downloading large files over Tor is very slow. If we can
104 # compress the file, we can speed up future downloads.
105 #
106 # 2. We want to record the original filename because it might be
107 # useful, either for context about the content of the submission
108 # or for figuring out which application should be used to open
109 # it. However, we'd like to encrypt that info and have the
110 # decrypted file automatically have the name of the original
111 # file. Given various usability constraints in GPG and Tails, this
112 # is the most user-friendly way we have found to do this.
113
114 encrypted_file_name = "{0}-{1}-doc.gz.gpg".format(
115 count,
116 journalist_filename)
117 encrypted_file_path = path(filesystem_id, encrypted_file_name)
118 with SecureTemporaryFile("/tmp") as stf:
119 with gzip.GzipFile(filename=sanitized_filename,
120 mode='wb', fileobj=stf) as gzf:
121 # Buffer the stream into the gzip file to avoid excessive
122 # memory consumption
123 while True:
124 buf = stream.read(1024 * 8)
125 if not buf:
126 break
127 gzf.write(buf)
128
129 crypto_util.encrypt(stf, config.JOURNALIST_KEY, encrypted_file_path)
130
131 return encrypted_file_name
132
133
134 def save_message_submission(filesystem_id, count, journalist_filename,
135 message):
136 filename = "{0}-{1}-msg.gpg".format(count, journalist_filename)
137 msg_loc = path(filesystem_id, filename)
138 crypto_util.encrypt(message, config.JOURNALIST_KEY, msg_loc)
139 return filename
140
141
142 def rename_submission(filesystem_id, orig_filename, journalist_filename):
143 check_submission_name = VALIDATE_FILENAME(orig_filename)
144 if check_submission_name:
145 parsed_filename = check_submission_name.groupdict()
146 if parsed_filename.get('file_type'):
147 new_filename = "{}-{}-{}.gpg".format(
148 parsed_filename['index'], journalist_filename,
149 parsed_filename['file_type'])
150 try:
151 os.rename(path(filesystem_id, orig_filename),
152 path(filesystem_id, new_filename))
153 except OSError:
154 pass
155 else:
156 return new_filename # Only return new filename if successful
157 return orig_filename
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/store.py b/securedrop/store.py
--- a/securedrop/store.py
+++ b/securedrop/store.py
@@ -78,6 +78,7 @@
# folder structure per #383
with zipfile.ZipFile(zip_file, 'w') as zip:
for source in sources:
+ fname = ""
submissions = [s for s in selected_submissions
if s.source.journalist_designation == source]
for submission in submissions:
@@ -85,9 +86,12 @@
submission.filename)
verify(filename)
document_number = submission.filename.split('-')[0]
+ if zip_directory == submission.source.journalist_filename:
+ fname = zip_directory
+ else:
+ fname = os.path.join(zip_directory, source)
zip.write(filename, arcname=os.path.join(
- zip_directory,
- source,
+ fname,
"%s_%s" % (document_number,
submission.source.last_updated.date()),
os.path.basename(filename)
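The patched `get_bulk_archive` picks the archive prefix per submission: for a single-source download the caller evidently passes that source's `journalist_filename` as `zip_directory`, the new comparison matches, and the extra per-source subfolder is skipped, while multi-source downloads (a `zip_directory` such as `all`) keep the old `all/<source>/...` layout. A standalone sketch of the same arcname logic, with invented argument names and values:

```python
import os

def arcname_for(zip_directory, source_designation, journalist_filename,
                document_number, date_str, filename):
    # Mirrors the patched branch: drop the per-source folder when the archive
    # itself already stands for that single source.
    if zip_directory == journalist_filename:
        prefix = zip_directory
    else:
        prefix = os.path.join(zip_directory, source_designation)
    return os.path.join(prefix,
                        "%s_%s" % (document_number, date_str),
                        os.path.basename(filename))

# Single-source download: "ecstatic-potato/1_2017-09-26/1-ecstatic-potato-msg.gpg"
print(arcname_for("ecstatic-potato", "ecstatic potato", "ecstatic-potato",
                  "1", "2017-09-26", "/store/abc/1-ecstatic-potato-msg.gpg"))
# Multi-source download: "all/ecstatic potato/1_2017-09-26/1-ecstatic-potato-msg.gpg"
print(arcname_for("all", "ecstatic potato", "ecstatic-potato",
                  "1", "2017-09-26", "/store/abc/1-ecstatic-potato-msg.gpg"))
```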
| {"golden_diff": "diff --git a/securedrop/store.py b/securedrop/store.py\n--- a/securedrop/store.py\n+++ b/securedrop/store.py\n@@ -78,6 +78,7 @@\n # folder structure per #383\n with zipfile.ZipFile(zip_file, 'w') as zip:\n for source in sources:\n+ fname = \"\"\n submissions = [s for s in selected_submissions\n if s.source.journalist_designation == source]\n for submission in submissions:\n@@ -85,9 +86,12 @@\n submission.filename)\n verify(filename)\n document_number = submission.filename.split('-')[0]\n+ if zip_directory == submission.source.journalist_filename:\n+ fname = zip_directory\n+ else:\n+ fname = os.path.join(zip_directory, source)\n zip.write(filename, arcname=os.path.join(\n- zip_directory,\n- source,\n+ fname,\n \"%s_%s\" % (document_number,\n submission.source.last_updated.date()),\n os.path.basename(filename)\n", "issue": "Extra level in zip folders for single sources\n## Description\r\n\r\nThere is an extra unnecessary folder for zipfiles downloaded from the journalist interface for single sources. \r\n\r\n## Steps to Reproduce\r\n\r\n0. Submit some files to SecureDrop\r\n1. Load the home page of the journalist interface\r\n2. Select one source using the checkbox\r\n3. Select \"Download\"\r\n4. Unzip the resulting file\r\n\r\n## Expected Behavior\r\n\r\nFile structure would ideally be: \r\n\r\n`source-name/number_date`\r\n\r\ne.g.:\r\n\r\n`ecstatic-potato/1_03-20-20`\r\n\r\n## Actual Behavior\r\n\r\nInstead there is an additional layer:\r\n\r\n`ecstatic-potato/ecstatic-potato/1_03-20-20`\r\n\r\n## Comment\r\n\r\nThis additional folder is confusing to users and should be removed.\r\n\r\nNote: if you download unread or download all for multiple sources - the outer folder will not be named after the source and will be e.g. `all`:\r\n\r\n`all/ecstatic-potato/1_03-20-20`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport re\nimport config\nimport zipfile\nimport crypto_util\nimport tempfile\nimport gzip\nfrom werkzeug import secure_filename\n\nfrom secure_tempfile import SecureTemporaryFile\n\nimport logging\nlog = logging.getLogger(__name__)\n\nVALIDATE_FILENAME = re.compile(\n \"^(?P<index>\\d+)\\-[a-z0-9-_]*\"\n \"(?P<file_type>msg|doc\\.(gz|zip)|reply)\\.gpg$\").match\n\n\nclass PathException(Exception):\n\n \"\"\"An exception raised by `util.verify` when it encounters a bad path. A path\n can be bad when it is not absolute or not normalized.\n \"\"\"\n pass\n\n\ndef verify(p):\n \"\"\"Assert that the path is absolute, normalized, inside `config.STORE_DIR`, and\n matches the filename format.\n \"\"\"\n if not os.path.isabs(config.STORE_DIR):\n raise PathException(\"config.STORE_DIR(%s) is not absolute\" % (\n config.STORE_DIR, ))\n\n # os.path.abspath makes the path absolute and normalizes\n # '/foo/../bar' to '/bar', etc. We have to check that the path is\n # normalized before checking that it starts with the\n # `config.STORE_DIR` or else a malicious actor could append a\n # bunch of '../../..' 
to access files outside of the store.\n if not p == os.path.abspath(p):\n raise PathException(\"The path is not absolute and/or normalized\")\n\n # Check that the path p is in config.STORE_DIR\n if os.path.relpath(p, config.STORE_DIR).startswith('..'):\n raise PathException(\"Invalid directory %s\" % (p, ))\n\n if os.path.isfile(p):\n filename = os.path.basename(p)\n ext = os.path.splitext(filename)[-1]\n if filename == '_FLAG':\n return True\n if ext != '.gpg':\n # if there's an extension, verify it's a GPG\n raise PathException(\"Invalid file extension %s\" % (ext, ))\n if not VALIDATE_FILENAME(filename):\n raise PathException(\"Invalid filename %s\" % (filename, ))\n\n\ndef path(*s):\n \"\"\"Get the normalized, absolute file path, within `config.STORE_DIR`.\"\"\"\n joined = os.path.join(os.path.abspath(config.STORE_DIR), *s)\n absolute = os.path.abspath(joined)\n verify(absolute)\n return absolute\n\n\ndef get_bulk_archive(selected_submissions, zip_directory=''):\n \"\"\"Generate a zip file from the selected submissions\"\"\"\n zip_file = tempfile.NamedTemporaryFile(prefix='tmp_securedrop_bulk_dl_',\n dir=config.TEMP_DIR,\n delete=False)\n sources = set([i.source.journalist_designation\n for i in selected_submissions])\n # The below nested for-loops are there to create a more usable\n # folder structure per #383\n with zipfile.ZipFile(zip_file, 'w') as zip:\n for source in sources:\n submissions = [s for s in selected_submissions\n if s.source.journalist_designation == source]\n for submission in submissions:\n filename = path(submission.source.filesystem_id,\n submission.filename)\n verify(filename)\n document_number = submission.filename.split('-')[0]\n zip.write(filename, arcname=os.path.join(\n zip_directory,\n source,\n \"%s_%s\" % (document_number,\n submission.source.last_updated.date()),\n os.path.basename(filename)\n ))\n return zip_file\n\n\ndef save_file_submission(filesystem_id, count, journalist_filename, filename,\n stream):\n sanitized_filename = secure_filename(filename)\n\n # We store file submissions in a .gz file for two reasons:\n #\n # 1. Downloading large files over Tor is very slow. If we can\n # compress the file, we can speed up future downloads.\n #\n # 2. We want to record the original filename because it might be\n # useful, either for context about the content of the submission\n # or for figuring out which application should be used to open\n # it. However, we'd like to encrypt that info and have the\n # decrypted file automatically have the name of the original\n # file. 
Given various usability constraints in GPG and Tails, this\n # is the most user-friendly way we have found to do this.\n\n encrypted_file_name = \"{0}-{1}-doc.gz.gpg\".format(\n count,\n journalist_filename)\n encrypted_file_path = path(filesystem_id, encrypted_file_name)\n with SecureTemporaryFile(\"/tmp\") as stf:\n with gzip.GzipFile(filename=sanitized_filename,\n mode='wb', fileobj=stf) as gzf:\n # Buffer the stream into the gzip file to avoid excessive\n # memory consumption\n while True:\n buf = stream.read(1024 * 8)\n if not buf:\n break\n gzf.write(buf)\n\n crypto_util.encrypt(stf, config.JOURNALIST_KEY, encrypted_file_path)\n\n return encrypted_file_name\n\n\ndef save_message_submission(filesystem_id, count, journalist_filename,\n message):\n filename = \"{0}-{1}-msg.gpg\".format(count, journalist_filename)\n msg_loc = path(filesystem_id, filename)\n crypto_util.encrypt(message, config.JOURNALIST_KEY, msg_loc)\n return filename\n\n\ndef rename_submission(filesystem_id, orig_filename, journalist_filename):\n check_submission_name = VALIDATE_FILENAME(orig_filename)\n if check_submission_name:\n parsed_filename = check_submission_name.groupdict()\n if parsed_filename.get('file_type'):\n new_filename = \"{}-{}-{}.gpg\".format(\n parsed_filename['index'], journalist_filename,\n parsed_filename['file_type'])\n try:\n os.rename(path(filesystem_id, orig_filename),\n path(filesystem_id, new_filename))\n except OSError:\n pass\n else:\n return new_filename # Only return new filename if successful\n return orig_filename\n", "path": "securedrop/store.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nimport re\nimport config\nimport zipfile\nimport crypto_util\nimport tempfile\nimport subprocess\nimport gzip\nfrom werkzeug import secure_filename\n\nfrom secure_tempfile import SecureTemporaryFile\n\nimport logging\nlog = logging.getLogger(__name__)\n\nVALIDATE_FILENAME = re.compile(\n \"^(?P<index>\\d+)\\-[a-z0-9-_]*\"\n \"(?P<file_type>msg|doc\\.(gz|zip)|reply)\\.gpg$\").match\n\n\nclass PathException(Exception):\n\n \"\"\"An exception raised by `util.verify` when it encounters a bad path. A path\n can be bad when it is not absolute or not normalized.\n \"\"\"\n pass\n\n\ndef verify(p):\n \"\"\"Assert that the path is absolute, normalized, inside `config.STORE_DIR`, and\n matches the filename format.\n \"\"\"\n if not os.path.isabs(config.STORE_DIR):\n raise PathException(\"config.STORE_DIR(%s) is not absolute\" % (\n config.STORE_DIR, ))\n\n # os.path.abspath makes the path absolute and normalizes\n # '/foo/../bar' to '/bar', etc. We have to check that the path is\n # normalized before checking that it starts with the\n # `config.STORE_DIR` or else a malicious actor could append a\n # bunch of '../../..' 
to access files outside of the store.\n if not p == os.path.abspath(p):\n raise PathException(\"The path is not absolute and/or normalized\")\n\n # Check that the path p is in config.STORE_DIR\n if os.path.relpath(p, config.STORE_DIR).startswith('..'):\n raise PathException(\"Invalid directory %s\" % (p, ))\n\n if os.path.isfile(p):\n filename = os.path.basename(p)\n ext = os.path.splitext(filename)[-1]\n if filename == '_FLAG':\n return True\n if ext != '.gpg':\n # if there's an extension, verify it's a GPG\n raise PathException(\"Invalid file extension %s\" % (ext, ))\n if not VALIDATE_FILENAME(filename):\n raise PathException(\"Invalid filename %s\" % (filename, ))\n\n\ndef path(*s):\n \"\"\"Get the normalized, absolute file path, within `config.STORE_DIR`.\"\"\"\n joined = os.path.join(os.path.abspath(config.STORE_DIR), *s)\n absolute = os.path.abspath(joined)\n verify(absolute)\n return absolute\n\n\ndef get_bulk_archive(selected_submissions, zip_directory=''):\n \"\"\"Generate a zip file from the selected submissions\"\"\"\n zip_file = tempfile.NamedTemporaryFile(prefix='tmp_securedrop_bulk_dl_',\n dir=config.TEMP_DIR,\n delete=False)\n sources = set([i.source.journalist_designation\n for i in selected_submissions])\n # The below nested for-loops are there to create a more usable\n # folder structure per #383\n with zipfile.ZipFile(zip_file, 'w') as zip:\n for source in sources:\n fname = \"\"\n submissions = [s for s in selected_submissions\n if s.source.journalist_designation == source]\n for submission in submissions:\n filename = path(submission.source.filesystem_id,\n submission.filename)\n verify(filename)\n document_number = submission.filename.split('-')[0]\n if zip_directory == submission.source.journalist_filename:\n fname = zip_directory\n else:\n fname = os.path.join(zip_directory, source)\n zip.write(filename, arcname=os.path.join(\n fname,\n \"%s_%s\" % (document_number,\n submission.source.last_updated.date()),\n os.path.basename(filename)\n ))\n return zip_file\n\n\ndef save_file_submission(filesystem_id, count, journalist_filename, filename,\n stream):\n sanitized_filename = secure_filename(filename)\n\n # We store file submissions in a .gz file for two reasons:\n #\n # 1. Downloading large files over Tor is very slow. If we can\n # compress the file, we can speed up future downloads.\n #\n # 2. We want to record the original filename because it might be\n # useful, either for context about the content of the submission\n # or for figuring out which application should be used to open\n # it. However, we'd like to encrypt that info and have the\n # decrypted file automatically have the name of the original\n # file. 
Given various usability constraints in GPG and Tails, this\n # is the most user-friendly way we have found to do this.\n\n encrypted_file_name = \"{0}-{1}-doc.gz.gpg\".format(\n count,\n journalist_filename)\n encrypted_file_path = path(filesystem_id, encrypted_file_name)\n with SecureTemporaryFile(\"/tmp\") as stf:\n with gzip.GzipFile(filename=sanitized_filename,\n mode='wb', fileobj=stf) as gzf:\n # Buffer the stream into the gzip file to avoid excessive\n # memory consumption\n while True:\n buf = stream.read(1024 * 8)\n if not buf:\n break\n gzf.write(buf)\n\n crypto_util.encrypt(stf, config.JOURNALIST_KEY, encrypted_file_path)\n\n return encrypted_file_name\n\n\ndef save_message_submission(filesystem_id, count, journalist_filename,\n message):\n filename = \"{0}-{1}-msg.gpg\".format(count, journalist_filename)\n msg_loc = path(filesystem_id, filename)\n crypto_util.encrypt(message, config.JOURNALIST_KEY, msg_loc)\n return filename\n\n\ndef rename_submission(filesystem_id, orig_filename, journalist_filename):\n check_submission_name = VALIDATE_FILENAME(orig_filename)\n if check_submission_name:\n parsed_filename = check_submission_name.groupdict()\n if parsed_filename.get('file_type'):\n new_filename = \"{}-{}-{}.gpg\".format(\n parsed_filename['index'], journalist_filename,\n parsed_filename['file_type'])\n try:\n os.rename(path(filesystem_id, orig_filename),\n path(filesystem_id, new_filename))\n except OSError:\n pass\n else:\n return new_filename # Only return new filename if successful\n return orig_filename\n\n\ndef secure_unlink(fn, recursive=False):\n verify(fn)\n command = ['srm']\n if recursive:\n command.append('-r')\n command.append(fn)\n subprocess.check_call(command)\n return \"success\"\n\n\ndef delete_source_directory(filesystem_id):\n secure_unlink(path(filesystem_id), recursive=True)\n return \"success\"\n", "path": "securedrop/store.py"}]} | 2,150 | 225 |
gh_patches_debug_3799 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1514 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show price on event before attending
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/payment/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import uuid
4
5 from django.conf import settings
6 from django.contrib.contenttypes.fields import GenericForeignKey
7 from django.contrib.contenttypes.models import ContentType
8 from django.db import models
9 from django.utils import timezone
10 from django.utils.translation import ugettext as _
11 from rest_framework.exceptions import NotAcceptable
12
13 from apps.events.models import AttendanceEvent, Attendee
14 from apps.marks.models import Suspension
15
16 User = settings.AUTH_USER_MODEL
17
18
19 class Payment(models.Model):
20
21 TYPE_CHOICES = (
22 (1, _('Umiddelbar')),
23 (2, _('Frist')),
24 (3, _('Utsettelse')),
25 )
26
27 # creates tuples used as key choices.
28 STRIPE_KEY_CHOICES = zip(settings.STRIPE_PUBLIC_KEYS.keys(), settings.STRIPE_PUBLIC_KEYS.keys())
29
30 content_type = models.ForeignKey(ContentType)
31 object_id = models.PositiveIntegerField()
32 content_object = GenericForeignKey()
33 stripe_key = models.CharField(
34 _('stripe key'),
35 max_length=10,
36 choices=STRIPE_KEY_CHOICES,
37 default="arrkom"
38 )
39
40 payment_type = models.SmallIntegerField(_('type'), choices=TYPE_CHOICES)
41
42 # Optional fields depending on payment type
43 deadline = models.DateTimeField(_("frist"), blank=True, null=True)
44 active = models.BooleanField(default=True)
45 delay = models.SmallIntegerField(_('utsettelse'), blank=True, null=True, default=2)
46
47 # For logging and history
48 added_date = models.DateTimeField(_("opprettet dato"), auto_now=True)
49 changed_date = models.DateTimeField(auto_now=True, editable=False)
50 last_changed_by = models.ForeignKey(User, editable=False, null=True) # Blank and null is temperarly
51
52 def paid_users(self):
53 return [payment_relation.user for payment_relation in self.paymentrelation_set.filter(refunded=False)]
54
55 def payment_delays(self):
56 return self.paymentdelay_set.filter(active=True)
57
58 def payment_delay_users(self):
59 return [payment_delay.user for payment_delay in self.payment_delays()]
60
61 def create_payment_delay(self, user, deadline):
62 payment_delays = self.paymentdelay_set.filter(payment=self, user=user)
63
64 if payment_delays:
65 for payment_delay in payment_delays:
66 payment_delay.valid_to = deadline
67 payment_delay.save()
68 else:
69 PaymentDelay.objects.create(payment=self, user=user, valid_to=deadline)
70
71 def description(self):
72 if self._is_type(AttendanceEvent):
73 return self.content_object.event.title
74
75 def responsible_mail(self):
76 if self._is_type(AttendanceEvent):
77 event_type = self.content_object.event.event_type
78 if event_type == 1 or event_type == 4: # Sosialt & Utflukt
79 return settings.EMAIL_ARRKOM
80 elif event_type == 2: # Bedpres
81 return settings.EMAIL_BEDKOM
82 elif event_type == 3: # Kurs
83 return settings.EMAIL_FAGKOM
84 elif event_type == 5: # Ekskursjon
85 return settings.EMAIL_EKSKOM
86 else:
87 return settings.DEFAULT_FROM_EMAIL
88 else:
89 return settings.DEFAULT_FROM_EMAIL
90
91 def handle_payment(self, user):
92 if self._is_type(AttendanceEvent):
93 attendee = Attendee.objects.filter(event=self.content_object, user=user)
94
95 # Delete payment delay objects for the user if there are any
96 delays = PaymentDelay.objects.filter(payment=self, user=user)
97 for delay in delays:
98 delay.delete()
99
100 # If the user is suspended because of a lack of payment the suspension is deactivated.
101 suspensions = Suspension.objects.filter(payment_id=self.id, user=user)
102 for suspension in suspensions:
103 suspension.active = False
104 suspension.save()
105
106 if attendee:
107 attendee[0].paid = True
108 attendee[0].save()
109 else:
110 Attendee.objects.create(event=self.content_object, user=user, paid=True)
111
112 def handle_refund(self, host, payment_relation):
113 payment_relation.refunded = True
114 payment_relation.save()
115
116 if self._is_type(AttendanceEvent):
117 self.content_object.notify_waiting_list(
118 host=host, unattended_user=payment_relation.user)
119 Attendee.objects.get(event=self.content_object,
120 user=payment_relation.user).delete()
121
122 def check_refund(self, payment_relation):
123 if self._is_type(AttendanceEvent):
124 attendance_event = self.content_object
125 if attendance_event.unattend_deadline < timezone.now():
126 return False, _("Fristen for og melde seg av har utgått")
127 if len(Attendee.objects.filter(event=attendance_event, user=payment_relation.user)) == 0:
128 return False, _("Du er ikke påmeldt dette arrangementet.")
129 if attendance_event.event.event_start < timezone.now():
130 return False, _("Dette arrangementet har allerede startet.")
131
132 return True, ''
133
134 return False, 'Refund checks not implemented'
135
136 def prices(self):
137 return self.paymentprice_set.all()
138
139 def _is_type(self, model_type):
140 return ContentType.objects.get_for_model(model_type) == self.content_type
141
142 def __str__(self):
143 return self.description()
144
145 class Meta(object):
146 unique_together = ('content_type', 'object_id')
147
148 verbose_name = _("betaling")
149 verbose_name_plural = _("betalinger")
150
151
152 class PaymentPrice(models.Model):
153 payment = models.ForeignKey(Payment)
154 price = models.IntegerField(_("pris"))
155 description = models.CharField(max_length=128, null=True, blank=True)
156
157 def __str__(self):
158 return str(self.price)
159
160 class Meta(object):
161 verbose_name = _("pris")
162 verbose_name_plural = _("priser")
163
164
165 class PaymentRelation(models.Model):
166 payment = models.ForeignKey(Payment)
167 payment_price = models.ForeignKey(PaymentPrice)
168 user = models.ForeignKey(User)
169 datetime = models.DateTimeField(auto_now=True)
170 refunded = models.BooleanField(default=False)
171
172 unique_id = models.CharField(max_length=128, null=True, blank=True)
173 stripe_id = models.CharField(max_length=128)
174
175 def save(self, *args, **kwargs):
176 if not self.unique_id:
177 self.unique_id = str(uuid.uuid4())
178 super(PaymentRelation, self).save(*args, **kwargs)
179
180 def __str__(self):
181 return self.payment.description() + " - " + str(self.user)
182
183 class Meta(object):
184 verbose_name = _("betalingsrelasjon")
185 verbose_name_plural = _("betalingsrelasjoner")
186
187
188 class PaymentDelay(models.Model):
189 payment = models.ForeignKey(Payment)
190 user = models.ForeignKey(User)
191 valid_to = models.DateTimeField()
192
193 active = models.BooleanField(default=True)
194
195 def __str__(self):
196 return self.payment.description() + " - " + str(self.user)
197
198 class Meta(object):
199 unique_together = ('payment', 'user')
200
201 verbose_name = _('betalingsutsettelse')
202 verbose_name_plural = _('betalingsutsettelser')
203
204
205 class PaymentTransaction(models.Model):
206 user = models.ForeignKey(User)
207 amount = models.IntegerField(null=True, blank=True)
208 used_stripe = models.BooleanField(default=False)
209
210 datetime = models.DateTimeField(auto_now=True)
211
212 def __str__(self):
213 return str(self.user) + " - " + str(self.amount) + "(" + str(self.datetime) + ")"
214
215 def save(self, *args, **kwargs):
216 if not self.pk:
217 self.user.saldo = self.user.saldo + self.amount
218
219 if self.user.saldo < 0:
220 raise NotAcceptable("Insufficient funds")
221
222 self.user.save()
223 super(PaymentTransaction, self).save(*args, **kwargs)
224
225 class Meta:
226 ordering = ['-datetime']
227 verbose_name = _('transaksjon')
228 verbose_name_plural = _('transaksjoner')
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/payment/models.py b/apps/payment/models.py
--- a/apps/payment/models.py
+++ b/apps/payment/models.py
@@ -136,6 +136,10 @@
def prices(self):
return self.paymentprice_set.all()
+ def price(self):
+ # TODO implement group based pricing
+ return self.paymentprice_set.all()[0]
+
def _is_type(self, model_type):
return ContentType.objects.get_for_model(model_type) == self.content_type
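A usage sketch for the helper added above (hypothetical view code, not part of the patch): with `price()` in place, the cost can be shown on the event page before the user attends. Like the patch itself, it assumes the `Payment` has at least one `PaymentPrice` row:

```python
# Hypothetical illustration only: format the cost of an event's payment for display.
def price_label(payment):
    first_price = payment.price()        # first PaymentPrice, via the new helper above
    return "%d kr" % first_price.price   # PaymentPrice.price is an IntegerField
```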
| {"golden_diff": "diff --git a/apps/payment/models.py b/apps/payment/models.py\n--- a/apps/payment/models.py\n+++ b/apps/payment/models.py\n@@ -136,6 +136,10 @@\n def prices(self):\n return self.paymentprice_set.all()\n \n+ def price(self):\n+ # TODO implement group based pricing\n+ return self.paymentprice_set.all()[0]\n+\n def _is_type(self, model_type):\n return ContentType.objects.get_for_model(model_type) == self.content_type\n", "issue": "Show price on event before attending\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\nfrom rest_framework.exceptions import NotAcceptable\n\nfrom apps.events.models import AttendanceEvent, Attendee\nfrom apps.marks.models import Suspension\n\nUser = settings.AUTH_USER_MODEL\n\n\nclass Payment(models.Model):\n\n TYPE_CHOICES = (\n (1, _('Umiddelbar')),\n (2, _('Frist')),\n (3, _('Utsettelse')),\n )\n\n # creates tuples used as key choices.\n STRIPE_KEY_CHOICES = zip(settings.STRIPE_PUBLIC_KEYS.keys(), settings.STRIPE_PUBLIC_KEYS.keys())\n\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n content_object = GenericForeignKey()\n stripe_key = models.CharField(\n _('stripe key'),\n max_length=10,\n choices=STRIPE_KEY_CHOICES,\n default=\"arrkom\"\n )\n\n payment_type = models.SmallIntegerField(_('type'), choices=TYPE_CHOICES)\n\n # Optional fields depending on payment type\n deadline = models.DateTimeField(_(\"frist\"), blank=True, null=True)\n active = models.BooleanField(default=True)\n delay = models.SmallIntegerField(_('utsettelse'), blank=True, null=True, default=2)\n\n # For logging and history\n added_date = models.DateTimeField(_(\"opprettet dato\"), auto_now=True)\n changed_date = models.DateTimeField(auto_now=True, editable=False)\n last_changed_by = models.ForeignKey(User, editable=False, null=True) # Blank and null is temperarly\n\n def paid_users(self):\n return [payment_relation.user for payment_relation in self.paymentrelation_set.filter(refunded=False)]\n\n def payment_delays(self):\n return self.paymentdelay_set.filter(active=True)\n\n def payment_delay_users(self):\n return [payment_delay.user for payment_delay in self.payment_delays()]\n\n def create_payment_delay(self, user, deadline):\n payment_delays = self.paymentdelay_set.filter(payment=self, user=user)\n\n if payment_delays:\n for payment_delay in payment_delays:\n payment_delay.valid_to = deadline\n payment_delay.save()\n else:\n PaymentDelay.objects.create(payment=self, user=user, valid_to=deadline)\n\n def description(self):\n if self._is_type(AttendanceEvent):\n return self.content_object.event.title\n\n def responsible_mail(self):\n if self._is_type(AttendanceEvent):\n event_type = self.content_object.event.event_type\n if event_type == 1 or event_type == 4: # Sosialt & Utflukt\n return settings.EMAIL_ARRKOM\n elif event_type == 2: # Bedpres\n return settings.EMAIL_BEDKOM\n elif event_type == 3: # Kurs\n return settings.EMAIL_FAGKOM\n elif event_type == 5: # Ekskursjon\n return settings.EMAIL_EKSKOM\n else:\n return settings.DEFAULT_FROM_EMAIL\n else:\n return settings.DEFAULT_FROM_EMAIL\n\n def handle_payment(self, user):\n if self._is_type(AttendanceEvent):\n attendee = Attendee.objects.filter(event=self.content_object, 
user=user)\n\n # Delete payment delay objects for the user if there are any\n delays = PaymentDelay.objects.filter(payment=self, user=user)\n for delay in delays:\n delay.delete()\n\n # If the user is suspended because of a lack of payment the suspension is deactivated.\n suspensions = Suspension.objects.filter(payment_id=self.id, user=user)\n for suspension in suspensions:\n suspension.active = False\n suspension.save()\n\n if attendee:\n attendee[0].paid = True\n attendee[0].save()\n else:\n Attendee.objects.create(event=self.content_object, user=user, paid=True)\n\n def handle_refund(self, host, payment_relation):\n payment_relation.refunded = True\n payment_relation.save()\n\n if self._is_type(AttendanceEvent):\n self.content_object.notify_waiting_list(\n host=host, unattended_user=payment_relation.user)\n Attendee.objects.get(event=self.content_object,\n user=payment_relation.user).delete()\n\n def check_refund(self, payment_relation):\n if self._is_type(AttendanceEvent):\n attendance_event = self.content_object\n if attendance_event.unattend_deadline < timezone.now():\n return False, _(\"Fristen for og melde seg av har utg\u00e5tt\")\n if len(Attendee.objects.filter(event=attendance_event, user=payment_relation.user)) == 0:\n return False, _(\"Du er ikke p\u00e5meldt dette arrangementet.\")\n if attendance_event.event.event_start < timezone.now():\n return False, _(\"Dette arrangementet har allerede startet.\")\n\n return True, ''\n\n return False, 'Refund checks not implemented'\n\n def prices(self):\n return self.paymentprice_set.all()\n\n def _is_type(self, model_type):\n return ContentType.objects.get_for_model(model_type) == self.content_type\n\n def __str__(self):\n return self.description()\n\n class Meta(object):\n unique_together = ('content_type', 'object_id')\n\n verbose_name = _(\"betaling\")\n verbose_name_plural = _(\"betalinger\")\n\n\nclass PaymentPrice(models.Model):\n payment = models.ForeignKey(Payment)\n price = models.IntegerField(_(\"pris\"))\n description = models.CharField(max_length=128, null=True, blank=True)\n\n def __str__(self):\n return str(self.price)\n\n class Meta(object):\n verbose_name = _(\"pris\")\n verbose_name_plural = _(\"priser\")\n\n\nclass PaymentRelation(models.Model):\n payment = models.ForeignKey(Payment)\n payment_price = models.ForeignKey(PaymentPrice)\n user = models.ForeignKey(User)\n datetime = models.DateTimeField(auto_now=True)\n refunded = models.BooleanField(default=False)\n\n unique_id = models.CharField(max_length=128, null=True, blank=True)\n stripe_id = models.CharField(max_length=128)\n\n def save(self, *args, **kwargs):\n if not self.unique_id:\n self.unique_id = str(uuid.uuid4())\n super(PaymentRelation, self).save(*args, **kwargs)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n verbose_name = _(\"betalingsrelasjon\")\n verbose_name_plural = _(\"betalingsrelasjoner\")\n\n\nclass PaymentDelay(models.Model):\n payment = models.ForeignKey(Payment)\n user = models.ForeignKey(User)\n valid_to = models.DateTimeField()\n\n active = models.BooleanField(default=True)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n unique_together = ('payment', 'user')\n\n verbose_name = _('betalingsutsettelse')\n verbose_name_plural = _('betalingsutsettelser')\n\n\nclass PaymentTransaction(models.Model):\n user = models.ForeignKey(User)\n amount = models.IntegerField(null=True, blank=True)\n used_stripe = 
models.BooleanField(default=False)\n\n datetime = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return str(self.user) + \" - \" + str(self.amount) + \"(\" + str(self.datetime) + \")\"\n\n def save(self, *args, **kwargs):\n if not self.pk:\n self.user.saldo = self.user.saldo + self.amount\n\n if self.user.saldo < 0:\n raise NotAcceptable(\"Insufficient funds\")\n\n self.user.save()\n super(PaymentTransaction, self).save(*args, **kwargs)\n\n class Meta:\n ordering = ['-datetime']\n verbose_name = _('transaksjon')\n verbose_name_plural = _('transaksjoner')\n", "path": "apps/payment/models.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\nfrom rest_framework.exceptions import NotAcceptable\n\nfrom apps.events.models import AttendanceEvent, Attendee\nfrom apps.marks.models import Suspension\n\nUser = settings.AUTH_USER_MODEL\n\n\nclass Payment(models.Model):\n\n TYPE_CHOICES = (\n (1, _('Umiddelbar')),\n (2, _('Frist')),\n (3, _('Utsettelse')),\n )\n\n # creates tuples used as key choices.\n STRIPE_KEY_CHOICES = zip(settings.STRIPE_PUBLIC_KEYS.keys(), settings.STRIPE_PUBLIC_KEYS.keys())\n\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n content_object = GenericForeignKey()\n stripe_key = models.CharField(\n _('stripe key'),\n max_length=10,\n choices=STRIPE_KEY_CHOICES,\n default=\"arrkom\"\n )\n\n payment_type = models.SmallIntegerField(_('type'), choices=TYPE_CHOICES)\n\n # Optional fields depending on payment type\n deadline = models.DateTimeField(_(\"frist\"), blank=True, null=True)\n active = models.BooleanField(default=True)\n delay = models.SmallIntegerField(_('utsettelse'), blank=True, null=True, default=2)\n\n # For logging and history\n added_date = models.DateTimeField(_(\"opprettet dato\"), auto_now=True)\n changed_date = models.DateTimeField(auto_now=True, editable=False)\n last_changed_by = models.ForeignKey(User, editable=False, null=True) # Blank and null is temperarly\n\n def paid_users(self):\n return [payment_relation.user for payment_relation in self.paymentrelation_set.filter(refunded=False)]\n\n def payment_delays(self):\n return self.paymentdelay_set.filter(active=True)\n\n def payment_delay_users(self):\n return [payment_delay.user for payment_delay in self.payment_delays()]\n\n def create_payment_delay(self, user, deadline):\n payment_delays = self.paymentdelay_set.filter(payment=self, user=user)\n\n if payment_delays:\n for payment_delay in payment_delays:\n payment_delay.valid_to = deadline\n payment_delay.save()\n else:\n PaymentDelay.objects.create(payment=self, user=user, valid_to=deadline)\n\n def description(self):\n if self._is_type(AttendanceEvent):\n return self.content_object.event.title\n\n def responsible_mail(self):\n if self._is_type(AttendanceEvent):\n event_type = self.content_object.event.event_type\n if event_type == 1 or event_type == 4: # Sosialt & Utflukt\n return settings.EMAIL_ARRKOM\n elif event_type == 2: # Bedpres\n return settings.EMAIL_BEDKOM\n elif event_type == 3: # Kurs\n return settings.EMAIL_FAGKOM\n elif event_type == 5: # Ekskursjon\n return settings.EMAIL_EKSKOM\n else:\n return settings.DEFAULT_FROM_EMAIL\n else:\n return settings.DEFAULT_FROM_EMAIL\n\n def 
handle_payment(self, user):\n if self._is_type(AttendanceEvent):\n attendee = Attendee.objects.filter(event=self.content_object, user=user)\n\n # Delete payment delay objects for the user if there are any\n delays = PaymentDelay.objects.filter(payment=self, user=user)\n for delay in delays:\n delay.delete()\n\n # If the user is suspended because of a lack of payment the suspension is deactivated.\n suspensions = Suspension.objects.filter(payment_id=self.id, user=user)\n for suspension in suspensions:\n suspension.active = False\n suspension.save()\n\n if attendee:\n attendee[0].paid = True\n attendee[0].save()\n else:\n Attendee.objects.create(event=self.content_object, user=user, paid=True)\n\n def handle_refund(self, host, payment_relation):\n payment_relation.refunded = True\n payment_relation.save()\n\n if self._is_type(AttendanceEvent):\n self.content_object.notify_waiting_list(\n host=host, unattended_user=payment_relation.user)\n Attendee.objects.get(event=self.content_object,\n user=payment_relation.user).delete()\n\n def check_refund(self, payment_relation):\n if self._is_type(AttendanceEvent):\n attendance_event = self.content_object\n if attendance_event.unattend_deadline < timezone.now():\n return False, _(\"Fristen for og melde seg av har utg\u00e5tt\")\n if len(Attendee.objects.filter(event=attendance_event, user=payment_relation.user)) == 0:\n return False, _(\"Du er ikke p\u00e5meldt dette arrangementet.\")\n if attendance_event.event.event_start < timezone.now():\n return False, _(\"Dette arrangementet har allerede startet.\")\n\n return True, ''\n\n return False, 'Refund checks not implemented'\n\n def prices(self):\n return self.paymentprice_set.all()\n\n def price(self):\n # TODO implement group based pricing\n return self.paymentprice_set.all()[0]\n\n def _is_type(self, model_type):\n return ContentType.objects.get_for_model(model_type) == self.content_type\n\n def __str__(self):\n return self.description()\n\n class Meta(object):\n unique_together = ('content_type', 'object_id')\n\n verbose_name = _(\"betaling\")\n verbose_name_plural = _(\"betalinger\")\n\n\nclass PaymentPrice(models.Model):\n payment = models.ForeignKey(Payment)\n price = models.IntegerField(_(\"pris\"))\n description = models.CharField(max_length=128, null=True, blank=True)\n\n def __str__(self):\n return str(self.price)\n\n class Meta(object):\n verbose_name = _(\"pris\")\n verbose_name_plural = _(\"priser\")\n\n\nclass PaymentRelation(models.Model):\n payment = models.ForeignKey(Payment)\n payment_price = models.ForeignKey(PaymentPrice)\n user = models.ForeignKey(User)\n datetime = models.DateTimeField(auto_now=True)\n refunded = models.BooleanField(default=False)\n\n unique_id = models.CharField(max_length=128, null=True, blank=True)\n stripe_id = models.CharField(max_length=128)\n\n def save(self, *args, **kwargs):\n if not self.unique_id:\n self.unique_id = str(uuid.uuid4())\n super(PaymentRelation, self).save(*args, **kwargs)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n verbose_name = _(\"betalingsrelasjon\")\n verbose_name_plural = _(\"betalingsrelasjoner\")\n\n\nclass PaymentDelay(models.Model):\n payment = models.ForeignKey(Payment)\n user = models.ForeignKey(User)\n valid_to = models.DateTimeField()\n\n active = models.BooleanField(default=True)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n unique_together = ('payment', 'user')\n\n verbose_name = 
_('betalingsutsettelse')\n verbose_name_plural = _('betalingsutsettelser')\n\n\nclass PaymentTransaction(models.Model):\n user = models.ForeignKey(User)\n amount = models.IntegerField(null=True, blank=True)\n used_stripe = models.BooleanField(default=False)\n\n datetime = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return str(self.user) + \" - \" + str(self.amount) + \"(\" + str(self.datetime) + \")\"\n\n def save(self, *args, **kwargs):\n if not self.pk:\n self.user.saldo = self.user.saldo + self.amount\n\n if self.user.saldo < 0:\n raise NotAcceptable(\"Insufficient funds\")\n\n self.user.save()\n super(PaymentTransaction, self).save(*args, **kwargs)\n\n class Meta:\n ordering = ['-datetime']\n verbose_name = _('transaksjon')\n verbose_name_plural = _('transaksjoner')\n", "path": "apps/payment/models.py"}]} | 2,552 | 109 |
gh_patches_debug_29942 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception calling application: dictionary changed size during iteration
The exception occasionally happens when the worker reports gradients to ps.

--- END ISSUE ---
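To make the reported failure mode concrete: this is Python's standard `RuntimeError` for resizing a dict while it is being iterated. In ElasticDL the insertion and the iteration can run on different server threads, which is why the error is only occasional and why the accepted fix below guards both with a lock. A minimal, self-contained reproduction (plain Python, independent of ElasticDL):

```python
# Minimal illustration of the error in the issue title (not ElasticDL code):
# adding a key to a dict that is currently being iterated raises
# "RuntimeError: dictionary changed size during iteration".
d = {0: "a", 1: "b"}
try:
    for key in d:           # iterate over the dict ...
        d[key + 10] = "x"   # ... while inserting new keys into the same dict
except RuntimeError as exc:
    print(exc)              # -> dictionary changed size during iteration
```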
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/ps/embedding_table.py`
Content:
```
1 import numpy as np
2 import tensorflow as tf
3
4 from elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo
5 from elasticdl.python.common.dtypes import dtype_numpy_to_tensor
6
7
8 class EmbeddingTable(object):
9 """
10 EmbeddingTable is used to store embedding parameters of an embedding
11 layer. The name of an embedding table is actually the embedding layer
12 name. It uses a dictionary to store embedding vectors, the key is the
13 item id, the value is a 1-D numpy.ndarray.
14
15 Embedding vectors are lazily initialized in parameter server.
16 EmbeddingTable also has dim and initializer fields. Inside the get
17 interface of EmbeddingTable, if the id is not in the embedding_vectors
18 dictionary, the corresponding value will be initialized.
19 """
20
21 def __init__(self, name, dim=None, initializer=None, is_slot=False):
22 """
23 Args:
24 name: The embedding table name.
25 dim: The dimension of embeddings in this embedding table.
26 initializer: The initializer to initialize new embeddings. If this
27 embedding table is for slots, `initializer` is a float and this
28 table will initialize with constant initializer. Otherwise
29 `initializer` is the name of Keras initializer.
30 is_slot: A bool. True for storing slot variable, otherwise false.
31 """
32 self.name = name
33 self.dim = dim
34 self.initializer_value = initializer
35 # set dtype to float32
36 self.dtype = np.dtype("float32")
37 if is_slot:
38 self.initializer = tf.keras.initializers.Constant(
39 float(self.initializer_value)
40 )
41 else:
42 self.initializer = tf.keras.initializers.get(
43 self.initializer_value
44 )
45 self.is_slot = is_slot
46 self.embedding_vectors = {}
47
48 def get(self, indices):
49 if len(indices) == 0:
50 return None
51 values = []
52 for i in indices:
53 value = self.embedding_vectors.get(i, None)
54 if value is None:
55 value = self.initializer(shape=(self.dim,)).numpy()
56 self.embedding_vectors[i] = value
57 values.append(value)
58 return np.stack(values)
59
60 def set(self, indices, values):
61 # TODO(qijun) need to add a RWLock in Sync-SGD
62 for index, i in enumerate(indices):
63 embedding_vector = values[index]
64 self.embedding_vectors[i] = embedding_vector
65
66 def clear(self):
67 self.embedding_vectors.clear()
68
69 def to_indexed_slices(self):
70 indices = []
71 embedding_vectors = []
72 for id, embedding_vector in self.embedding_vectors.items():
73 indices.append(id)
74 embedding_vectors.append(embedding_vector)
75 return tf.IndexedSlices(
76 values=np.array(embedding_vectors), indices=np.array(indices)
77 )
78
79 def to_embedding_table_info_pb(self):
80 """Convert the embedding table information to a protobuf"""
81 embedding_pb = EmbeddingTableInfo()
82 embedding_pb.name = self.name
83 embedding_pb.dim = self.dim
84 embedding_pb.initializer = str(self.initializer_value)
85 embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)
86 return embedding_pb
87
88 def get_table_size(self):
89 """Get the element count of an embedding table"""
90 if len(self.embedding_vectors) > 0:
91 element_size = list(self.embedding_vectors.values())[0].itemsize
92 size = self.dim * len(self.embedding_vectors) * element_size
93 return size
94 return 0
95
96 def debug_info(self):
97 return (
98 "Embedding param name: %s\n shape: [%d, %d]\n size: %d bytes\n"
99 % (
100 self.name,
101 len(self.embedding_vectors),
102 self.dim,
103 self.get_table_size(),
104 )
105 )
106
107
108 # TODO(bug): create_embedding_table does not create EmbeddingTable correctly
109 # if it is a slot table.
110 def create_embedding_table(embedding_table_info_pb):
111 name = embedding_table_info_pb.name
112 dim = embedding_table_info_pb.dim
113 initializer = embedding_table_info_pb.initializer
114 return EmbeddingTable(name, dim, initializer)
115
116
117 def get_slot_table_name(embedding_name, slot_name):
118 return embedding_name + "-" + slot_name
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/ps/embedding_table.py b/elasticdl/python/ps/embedding_table.py
--- a/elasticdl/python/ps/embedding_table.py
+++ b/elasticdl/python/ps/embedding_table.py
@@ -1,3 +1,5 @@
+import threading
+
import numpy as np
import tensorflow as tf
@@ -44,6 +46,7 @@
)
self.is_slot = is_slot
self.embedding_vectors = {}
+ self._lock = threading.Lock()
def get(self, indices):
if len(indices) == 0:
@@ -52,8 +55,9 @@
for i in indices:
value = self.embedding_vectors.get(i, None)
if value is None:
- value = self.initializer(shape=(self.dim,)).numpy()
- self.embedding_vectors[i] = value
+ with self._lock:
+ value = self.initializer(shape=(self.dim,)).numpy()
+ self.embedding_vectors[i] = value
values.append(value)
return np.stack(values)
@@ -69,9 +73,10 @@
def to_indexed_slices(self):
indices = []
embedding_vectors = []
- for id, embedding_vector in self.embedding_vectors.items():
- indices.append(id)
- embedding_vectors.append(embedding_vector)
+ with self._lock:
+ for id, embedding_vector in self.embedding_vectors.items():
+ indices.append(id)
+ embedding_vectors.append(embedding_vector)
return tf.IndexedSlices(
values=np.array(embedding_vectors), indices=np.array(indices)
)
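In short, the patch makes the lazy initialisation in `get()` and the full iteration in `to_indexed_slices()` share one `threading.Lock`, so a concurrent insert can no longer resize `embedding_vectors` while another thread is iterating over it (`set()` keeps its pre-existing TODO about a RWLock). The underlying pattern, reduced to a generic sketch:

```python
import threading

_lock = threading.Lock()
_table = {}

def lazy_init(key, factory):
    # Writers insert under the lock.
    with _lock:
        if key not in _table:
            _table[key] = factory()
        return _table[key]

def snapshot():
    # Readers hold the same lock for the whole iteration,
    # so the dict cannot change size mid-iteration.
    with _lock:
        return list(_table.items())
```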
| {"golden_diff": "diff --git a/elasticdl/python/ps/embedding_table.py b/elasticdl/python/ps/embedding_table.py\n--- a/elasticdl/python/ps/embedding_table.py\n+++ b/elasticdl/python/ps/embedding_table.py\n@@ -1,3 +1,5 @@\n+import threading\n+\n import numpy as np\n import tensorflow as tf\n \n@@ -44,6 +46,7 @@\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n+ self._lock = threading.Lock()\n \n def get(self, indices):\n if len(indices) == 0:\n@@ -52,8 +55,9 @@\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n- value = self.initializer(shape=(self.dim,)).numpy()\n- self.embedding_vectors[i] = value\n+ with self._lock:\n+ value = self.initializer(shape=(self.dim,)).numpy()\n+ self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n \n@@ -69,9 +73,10 @@\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n- for id, embedding_vector in self.embedding_vectors.items():\n- indices.append(id)\n- embedding_vectors.append(embedding_vector)\n+ with self._lock:\n+ for id, embedding_vector in self.embedding_vectors.items():\n+ indices.append(id)\n+ embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n", "issue": "Exception calling application: dictionary changed size during iteration\nThe exception occasionally happens when the worker reports gradients to ps.\r\n\r\n\n", "before_files": [{"content": "import numpy as np\nimport tensorflow as tf\n\nfrom elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\n\n\nclass EmbeddingTable(object):\n \"\"\"\n EmbeddingTable is used to store embedding parameters of an embedding\n layer. The name of an embedding table is actually the embedding layer\n name. It uses a dictionary to store embedding vectors, the key is the\n item id, the value is a 1-D numpy.ndarray.\n\n Embedding vectors are lazily initialized in parameter server.\n EmbeddingTable also has dim and initializer fields. Inside the get\n interface of EmbeddingTable, if the id is not in the embedding_vectors\n dictionary, the corresponding value will be initialized.\n \"\"\"\n\n def __init__(self, name, dim=None, initializer=None, is_slot=False):\n \"\"\"\n Args:\n name: The embedding table name.\n dim: The dimension of embeddings in this embedding table.\n initializer: The initializer to initialize new embeddings. If this\n embedding table is for slots, `initializer` is a float and this\n table will initialize with constant initializer. Otherwise\n `initializer` is the name of Keras initializer.\n is_slot: A bool. 
True for storing slot variable, otherwise false.\n \"\"\"\n self.name = name\n self.dim = dim\n self.initializer_value = initializer\n # set dtype to float32\n self.dtype = np.dtype(\"float32\")\n if is_slot:\n self.initializer = tf.keras.initializers.Constant(\n float(self.initializer_value)\n )\n else:\n self.initializer = tf.keras.initializers.get(\n self.initializer_value\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n\n def get(self, indices):\n if len(indices) == 0:\n return None\n values = []\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n value = self.initializer(shape=(self.dim,)).numpy()\n self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n\n def set(self, indices, values):\n # TODO(qijun) need to add a RWLock in Sync-SGD\n for index, i in enumerate(indices):\n embedding_vector = values[index]\n self.embedding_vectors[i] = embedding_vector\n\n def clear(self):\n self.embedding_vectors.clear()\n\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n for id, embedding_vector in self.embedding_vectors.items():\n indices.append(id)\n embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n\n def to_embedding_table_info_pb(self):\n \"\"\"Convert the embedding table information to a protobuf\"\"\"\n embedding_pb = EmbeddingTableInfo()\n embedding_pb.name = self.name\n embedding_pb.dim = self.dim\n embedding_pb.initializer = str(self.initializer_value)\n embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)\n return embedding_pb\n\n def get_table_size(self):\n \"\"\"Get the element count of an embedding table\"\"\"\n if len(self.embedding_vectors) > 0:\n element_size = list(self.embedding_vectors.values())[0].itemsize\n size = self.dim * len(self.embedding_vectors) * element_size\n return size\n return 0\n\n def debug_info(self):\n return (\n \"Embedding param name: %s\\n shape: [%d, %d]\\n size: %d bytes\\n\"\n % (\n self.name,\n len(self.embedding_vectors),\n self.dim,\n self.get_table_size(),\n )\n )\n\n\n# TODO(bug): create_embedding_table does not create EmbeddingTable correctly\n# if it is a slot table.\ndef create_embedding_table(embedding_table_info_pb):\n name = embedding_table_info_pb.name\n dim = embedding_table_info_pb.dim\n initializer = embedding_table_info_pb.initializer\n return EmbeddingTable(name, dim, initializer)\n\n\ndef get_slot_table_name(embedding_name, slot_name):\n return embedding_name + \"-\" + slot_name\n", "path": "elasticdl/python/ps/embedding_table.py"}], "after_files": [{"content": "import threading\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\n\n\nclass EmbeddingTable(object):\n \"\"\"\n EmbeddingTable is used to store embedding parameters of an embedding\n layer. The name of an embedding table is actually the embedding layer\n name. It uses a dictionary to store embedding vectors, the key is the\n item id, the value is a 1-D numpy.ndarray.\n\n Embedding vectors are lazily initialized in parameter server.\n EmbeddingTable also has dim and initializer fields. 
Inside the get\n interface of EmbeddingTable, if the id is not in the embedding_vectors\n dictionary, the corresponding value will be initialized.\n \"\"\"\n\n def __init__(self, name, dim=None, initializer=None, is_slot=False):\n \"\"\"\n Args:\n name: The embedding table name.\n dim: The dimension of embeddings in this embedding table.\n initializer: The initializer to initialize new embeddings. If this\n embedding table is for slots, `initializer` is a float and this\n table will initialize with constant initializer. Otherwise\n `initializer` is the name of Keras initializer.\n is_slot: A bool. True for storing slot variable, otherwise false.\n \"\"\"\n self.name = name\n self.dim = dim\n self.initializer_value = initializer\n # set dtype to float32\n self.dtype = np.dtype(\"float32\")\n if is_slot:\n self.initializer = tf.keras.initializers.Constant(\n float(self.initializer_value)\n )\n else:\n self.initializer = tf.keras.initializers.get(\n self.initializer_value\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n self._lock = threading.Lock()\n\n def get(self, indices):\n if len(indices) == 0:\n return None\n values = []\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n with self._lock:\n value = self.initializer(shape=(self.dim,)).numpy()\n self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n\n def set(self, indices, values):\n # TODO(qijun) need to add a RWLock in Sync-SGD\n for index, i in enumerate(indices):\n embedding_vector = values[index]\n self.embedding_vectors[i] = embedding_vector\n\n def clear(self):\n self.embedding_vectors.clear()\n\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n with self._lock:\n for id, embedding_vector in self.embedding_vectors.items():\n indices.append(id)\n embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n\n def to_embedding_table_info_pb(self):\n \"\"\"Convert the embedding table information to a protobuf\"\"\"\n embedding_pb = EmbeddingTableInfo()\n embedding_pb.name = self.name\n embedding_pb.dim = self.dim\n embedding_pb.initializer = str(self.initializer_value)\n embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)\n return embedding_pb\n\n def get_table_size(self):\n \"\"\"Get the element count of an embedding table\"\"\"\n if len(self.embedding_vectors) > 0:\n element_size = list(self.embedding_vectors.values())[0].itemsize\n size = self.dim * len(self.embedding_vectors) * element_size\n return size\n return 0\n\n def debug_info(self):\n return (\n \"Embedding param name: %s\\n shape: [%d, %d]\\n size: %d bytes\\n\"\n % (\n self.name,\n len(self.embedding_vectors),\n self.dim,\n self.get_table_size(),\n )\n )\n\n\n# TODO(bug): create_embedding_table does not create EmbeddingTable correctly\n# if it is a slot table.\ndef create_embedding_table(embedding_table_info_pb):\n name = embedding_table_info_pb.name\n dim = embedding_table_info_pb.dim\n initializer = embedding_table_info_pb.initializer\n return EmbeddingTable(name, dim, initializer)\n\n\ndef get_slot_table_name(embedding_name, slot_name):\n return embedding_name + \"-\" + slot_name\n", "path": "elasticdl/python/ps/embedding_table.py"}]} | 1,502 | 351 |
gh_patches_debug_11089 | rasdani/github-patches | git_diff | encode__starlette-1218 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider making `starlette.templating.Jinja2Templates.get_env` "protected"
Hi,
Thank you very much for your project! We are using it intensively through FastAPI.
While I was exploring the code base and looking how to use `Jinja2Templates`, I noticed the method `get_env`. At first I thought it was a getter for the environment, but eventually realized it is only called in the constructor of the class.
It would make much more sense to me if the method was called `_get_env`. This would signal that it should not be used outside of the class (or at least that the caller needs to know what she is doing).
For example, if I want to add filters or globals to the environment, I should write:
```
templates = Jinja2Templates(...)
templates.env.globals[...] = ...
```
instead of:
```
templates = Jinja2Templates(...)
templates.get_env().globals[...] = ...
```
As of current implementation, the second snippet has no effect.
Thank you very much for considering this request!
--- END ISSUE ---
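For context on why "the second snippet has no effect": `get_env(directory)` builds and returns a brand-new `jinja2.Environment` on every call, while rendering always goes through the `self.env` created in `__init__`, so the mutation lands on an object that is immediately discarded. A short sketch of the distinction, written against the API as it stands before the rename proposed here (directory name is illustrative; requires `jinja2`):

```python
from starlette.templating import Jinja2Templates

templates = Jinja2Templates(directory="templates")  # directory name is illustrative

# Ineffective: get_env() constructs and returns a *new* Environment each call,
# so the added global lands on an object that is thrown away immediately.
templates.get_env("templates").globals["to_upper"] = str.upper
assert "to_upper" not in templates.env.globals

# Effective: mutate the environment the instance actually renders with.
templates.env.globals["to_upper"] = str.upper
assert "to_upper" in templates.env.globals
```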
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/templating.py`
Content:
```
1 import typing
2
3 from starlette.background import BackgroundTask
4 from starlette.responses import Response
5 from starlette.types import Receive, Scope, Send
6
7 try:
8 import jinja2
9
10 # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1
11 if hasattr(jinja2, "pass_context"):
12 pass_context = jinja2.pass_context
13 else: # pragma: nocover
14 pass_context = jinja2.contextfunction
15 except ImportError: # pragma: nocover
16 jinja2 = None # type: ignore
17
18
19 class _TemplateResponse(Response):
20 media_type = "text/html"
21
22 def __init__(
23 self,
24 template: typing.Any,
25 context: dict,
26 status_code: int = 200,
27 headers: dict = None,
28 media_type: str = None,
29 background: BackgroundTask = None,
30 ):
31 self.template = template
32 self.context = context
33 content = template.render(context)
34 super().__init__(content, status_code, headers, media_type, background)
35
36 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
37 request = self.context.get("request", {})
38 extensions = request.get("extensions", {})
39 if "http.response.template" in extensions:
40 await send(
41 {
42 "type": "http.response.template",
43 "template": self.template,
44 "context": self.context,
45 }
46 )
47 await super().__call__(scope, receive, send)
48
49
50 class Jinja2Templates:
51 """
52 templates = Jinja2Templates("templates")
53
54 return templates.TemplateResponse("index.html", {"request": request})
55 """
56
57 def __init__(self, directory: str) -> None:
58 assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
59 self.env = self.get_env(directory)
60
61 def get_env(self, directory: str) -> "jinja2.Environment":
62 @pass_context
63 def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
64 request = context["request"]
65 return request.url_for(name, **path_params)
66
67 loader = jinja2.FileSystemLoader(directory)
68 env = jinja2.Environment(loader=loader, autoescape=True)
69 env.globals["url_for"] = url_for
70 return env
71
72 def get_template(self, name: str) -> "jinja2.Template":
73 return self.env.get_template(name)
74
75 def TemplateResponse(
76 self,
77 name: str,
78 context: dict,
79 status_code: int = 200,
80 headers: dict = None,
81 media_type: str = None,
82 background: BackgroundTask = None,
83 ) -> _TemplateResponse:
84 if "request" not in context:
85 raise ValueError('context must include a "request" key')
86 template = self.get_template(name)
87 return _TemplateResponse(
88 template,
89 context,
90 status_code=status_code,
91 headers=headers,
92 media_type=media_type,
93 background=background,
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/templating.py b/starlette/templating.py
--- a/starlette/templating.py
+++ b/starlette/templating.py
@@ -56,9 +56,9 @@
def __init__(self, directory: str) -> None:
assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
- self.env = self.get_env(directory)
+ self.env = self._create_env(directory)
- def get_env(self, directory: str) -> "jinja2.Environment":
+ def _create_env(self, directory: str) -> "jinja2.Environment":
@pass_context
def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
request = context["request"]
| {"golden_diff": "diff --git a/starlette/templating.py b/starlette/templating.py\n--- a/starlette/templating.py\n+++ b/starlette/templating.py\n@@ -56,9 +56,9 @@\n \n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n- self.env = self.get_env(directory)\n+ self.env = self._create_env(directory)\n \n- def get_env(self, directory: str) -> \"jinja2.Environment\":\n+ def _create_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n", "issue": "Consider making `starlette.templating.Jinja2Templates.get_env` \"protected\"\nHi,\r\nThank you very much for your project! We are using it intensively through FastAPI.\r\n\r\nWhile I was exploring the code base and looking how to use `Jinja2Templates`, I noticed the method `get_env`. At first I thought it was a getter for the environment, but eventually realized it is only called in the constructor of the class.\r\n\r\nIt would make much more sense to me if the method was called `_get_env`. This would signal that it should not be used outside of the class (or at least that the caller needs to know what she is doing).\r\n\r\nFor example, if I want to add filters or globals to the environment, I should write:\r\n\r\n```\r\ntemplates = Jinja2Templates(...)\r\ntemplates.env.globals[...] = ...\r\n```\r\n\r\ninstead of:\r\n\r\n```\r\ntemplates = Jinja2Templates(...)\r\ntemplates.get_env().globals[...] = ...\r\n```\r\n\r\nAs of current implementation, the second snippet has no effect.\r\n\r\nThank you very much for considering this request!\n", "before_files": [{"content": "import typing\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self.get_env(directory)\n\n def get_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return 
request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}], "after_files": [{"content": "import typing\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory)\n\n def _create_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}]} | 1,338 | 179 |
gh_patches_debug_15302 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document our plan to standardise on storing currency in cents
- [x] Add note on field classes that StripeQuantumCurrencyAmountField should be used for new fields
- [x] Update changelog about 3.0 migration to cents
_Originally posted by @therefromhere in https://github.com/dj-stripe/dj-stripe/issues/955#issuecomment-527251976_
--- END ISSUE ---
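For reference, the difference between the two field classes named in the checklist is purely one of unit: `StripeQuantumCurrencyAmountField` stores the integer amount in the smallest currency unit (cents), exactly as Stripe's API sends it, while the legacy `StripeDecimalCurrencyAmountField` divides by 100 into major units. A small sketch of that conversion (the amount is illustrative):

```python
from decimal import Decimal

# What Stripe sends and what StripeQuantumCurrencyAmountField stores: integer cents.
stripe_amount = 1999

# What the legacy StripeDecimalCurrencyAmountField stores after stripe_to_db():
# the same amount converted into major units.
legacy_amount = Decimal(stripe_amount) / Decimal("100")

assert legacy_amount == Decimal("19.99")
```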
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/fields.py`
Content:
```
1 """
2 dj-stripe Custom Field Definitions
3 """
4 import decimal
5
6 from django.core.validators import MaxValueValidator, MinValueValidator
7 from django.db import models
8
9 from .settings import USE_NATIVE_JSONFIELD
10 from .utils import convert_tstamp
11
12 if USE_NATIVE_JSONFIELD:
13 from django.contrib.postgres.fields import JSONField as BaseJSONField
14 else:
15 from jsonfield import JSONField as BaseJSONField
16
17
18 class PaymentMethodForeignKey(models.ForeignKey):
19 def __init__(self, **kwargs):
20 kwargs.setdefault("to", "DjstripePaymentMethod")
21 super().__init__(**kwargs)
22
23
24 class StripePercentField(models.DecimalField):
25 """A field used to define a percent according to djstripe logic."""
26
27 def __init__(self, *args, **kwargs):
28 """Assign default args to this field."""
29 defaults = {
30 "decimal_places": 2,
31 "max_digits": 5,
32 "validators": [MinValueValidator(1), MaxValueValidator(100)],
33 }
34 defaults.update(kwargs)
35 super().__init__(*args, **defaults)
36
37
38 class StripeCurrencyCodeField(models.CharField):
39 """
40 A field used to store a three-letter currency code (eg. usd, eur, ...)
41 """
42
43 def __init__(self, *args, **kwargs):
44 defaults = {"max_length": 3, "help_text": "Three-letter ISO currency code"}
45 defaults.update(kwargs)
46 super().__init__(*args, **defaults)
47
48
49 class StripeQuantumCurrencyAmountField(models.IntegerField):
50 pass
51
52
53 class StripeDecimalCurrencyAmountField(models.DecimalField):
54 """
55 A field used to define currency according to djstripe logic.
56
57 Stripe is always in cents. djstripe stores everything in dollars.
58 """
59
60 def __init__(self, *args, **kwargs):
61 """Assign default args to this field."""
62 defaults = {"decimal_places": 2, "max_digits": 8}
63 defaults.update(kwargs)
64 super().__init__(*args, **defaults)
65
66 def stripe_to_db(self, data):
67 """Convert the raw value to decimal representation."""
68 val = data.get(self.name)
69
70 # Note: 0 is a possible return value, which is 'falseish'
71 if val is not None:
72 return val / decimal.Decimal("100")
73
74
75 class StripeEnumField(models.CharField):
76 def __init__(self, enum, *args, **kwargs):
77 self.enum = enum
78 choices = enum.choices
79 defaults = {"choices": choices, "max_length": max(len(k) for k, v in choices)}
80 defaults.update(kwargs)
81 super().__init__(*args, **defaults)
82
83 def deconstruct(self):
84 name, path, args, kwargs = super().deconstruct()
85 kwargs["enum"] = self.enum
86 del kwargs["choices"]
87 return name, path, args, kwargs
88
89
90 class StripeIdField(models.CharField):
91 """A field with enough space to hold any stripe ID."""
92
93 def __init__(self, *args, **kwargs):
94 """
95 Assign default args to this field.
96
97 As per: https://stripe.com/docs/upgrades
98 You can safely assume object IDs we generate will never exceed 255
99 characters, but you should be able to handle IDs of up to that
100 length.
101 """
102 defaults = {"max_length": 255, "blank": False, "null": False}
103 defaults.update(kwargs)
104 super().__init__(*args, **defaults)
105
106
107 class StripeDateTimeField(models.DateTimeField):
108 """A field used to define a DateTimeField value according to djstripe logic."""
109
110 def stripe_to_db(self, data):
111 """Convert the raw timestamp value to a DateTime representation."""
112 val = data.get(self.name)
113
114 # Note: 0 is a possible return value, which is 'falseish'
115 if val is not None:
116 return convert_tstamp(val)
117
118
119 class JSONField(BaseJSONField):
120 """A field used to define a JSONField value according to djstripe logic."""
121
122 pass
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/djstripe/fields.py b/djstripe/fields.py
--- a/djstripe/fields.py
+++ b/djstripe/fields.py
@@ -47,14 +47,22 @@
class StripeQuantumCurrencyAmountField(models.IntegerField):
+ """
+ A field used to store currency amounts in cents (etc) as per stripe.
+ """
+
pass
class StripeDecimalCurrencyAmountField(models.DecimalField):
"""
- A field used to define currency according to djstripe logic.
+ A legacy field to store currency amounts in dollars (etc).
+
+ Stripe is always in cents. Historically djstripe stored everything in dollars.
- Stripe is always in cents. djstripe stores everything in dollars.
+ Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.
+ We're planning on migrating existing fields in dj-stripe 3.0,
+ see https://github.com/dj-stripe/dj-stripe/issues/955
"""
def __init__(self, *args, **kwargs):
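
For readers skimming the patch, the practical difference between the two field classes is only a factor of 100. A minimal sketch of the relationship (illustrative only -- the helper functions below are invented for this note and are not part of dj-stripe):

```python
from decimal import Decimal

def cents_to_decimal(amount_cents: int) -> Decimal:
    # Mirrors StripeDecimalCurrencyAmountField.stripe_to_db() shown above.
    return amount_cents / Decimal("100")   # e.g. 1999 -> Decimal('19.99')

def decimal_to_cents(amount: Decimal) -> int:
    # What a future migration to StripeQuantumCurrencyAmountField would store.
    return int(amount * 100)               # e.g. Decimal('19.99') -> 1999
```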
| {"golden_diff": "diff --git a/djstripe/fields.py b/djstripe/fields.py\n--- a/djstripe/fields.py\n+++ b/djstripe/fields.py\n@@ -47,14 +47,22 @@\n \n \n class StripeQuantumCurrencyAmountField(models.IntegerField):\n+ \"\"\"\n+ A field used to store currency amounts in cents (etc) as per stripe.\n+ \"\"\"\n+\n pass\n \n \n class StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n- A field used to define currency according to djstripe logic.\n+ A legacy field to store currency amounts in dollars (etc).\n+\n+ Stripe is always in cents. Historically djstripe stored everything in dollars.\n \n- Stripe is always in cents. djstripe stores everything in dollars.\n+ Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.\n+ We're planning on migrating existing fields in dj-stripe 3.0,\n+ see https://github.com/dj-stripe/dj-stripe/issues/955\n \"\"\"\n \n def __init__(self, *args, **kwargs):\n", "issue": "Document our plan to standardise on storing currency in cents\n- [x] Add note on field classes that StripeQuantumCurrencyAmountField should be used for new fields\r\n- [x] Update changelog about 3.0 migration to cents\r\n\r\n_Originally posted by @therefromhere in https://github.com/dj-stripe/dj-stripe/issues/955#issuecomment-527251976_\n", "before_files": [{"content": "\"\"\"\ndj-stripe Custom Field Definitions\n\"\"\"\nimport decimal\n\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.db import models\n\nfrom .settings import USE_NATIVE_JSONFIELD\nfrom .utils import convert_tstamp\n\nif USE_NATIVE_JSONFIELD:\n from django.contrib.postgres.fields import JSONField as BaseJSONField\nelse:\n from jsonfield import JSONField as BaseJSONField\n\n\nclass PaymentMethodForeignKey(models.ForeignKey):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"to\", \"DjstripePaymentMethod\")\n super().__init__(**kwargs)\n\n\nclass StripePercentField(models.DecimalField):\n \"\"\"A field used to define a percent according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n \"decimal_places\": 2,\n \"max_digits\": 5,\n \"validators\": [MinValueValidator(1), MaxValueValidator(100)],\n }\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeCurrencyCodeField(models.CharField):\n \"\"\"\n A field used to store a three-letter currency code (eg. usd, eur, ...)\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n defaults = {\"max_length\": 3, \"help_text\": \"Three-letter ISO currency code\"}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeQuantumCurrencyAmountField(models.IntegerField):\n pass\n\n\nclass StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n A field used to define currency according to djstripe logic.\n\n Stripe is always in cents. 
djstripe stores everything in dollars.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\"decimal_places\": 2, \"max_digits\": 8}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw value to decimal representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return val / decimal.Decimal(\"100\")\n\n\nclass StripeEnumField(models.CharField):\n def __init__(self, enum, *args, **kwargs):\n self.enum = enum\n choices = enum.choices\n defaults = {\"choices\": choices, \"max_length\": max(len(k) for k, v in choices)}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super().deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n\n\nclass StripeIdField(models.CharField):\n \"\"\"A field with enough space to hold any stripe ID.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign default args to this field.\n\n As per: https://stripe.com/docs/upgrades\n You can safely assume object IDs we generate will never exceed 255\n characters, but you should be able to handle IDs of up to that\n length.\n \"\"\"\n defaults = {\"max_length\": 255, \"blank\": False, \"null\": False}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeDateTimeField(models.DateTimeField):\n \"\"\"A field used to define a DateTimeField value according to djstripe logic.\"\"\"\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw timestamp value to a DateTime representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return convert_tstamp(val)\n\n\nclass JSONField(BaseJSONField):\n \"\"\"A field used to define a JSONField value according to djstripe logic.\"\"\"\n\n pass\n", "path": "djstripe/fields.py"}], "after_files": [{"content": "\"\"\"\ndj-stripe Custom Field Definitions\n\"\"\"\nimport decimal\n\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.db import models\n\nfrom .settings import USE_NATIVE_JSONFIELD\nfrom .utils import convert_tstamp\n\nif USE_NATIVE_JSONFIELD:\n from django.contrib.postgres.fields import JSONField as BaseJSONField\nelse:\n from jsonfield import JSONField as BaseJSONField\n\n\nclass PaymentMethodForeignKey(models.ForeignKey):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"to\", \"DjstripePaymentMethod\")\n super().__init__(**kwargs)\n\n\nclass StripePercentField(models.DecimalField):\n \"\"\"A field used to define a percent according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n \"decimal_places\": 2,\n \"max_digits\": 5,\n \"validators\": [MinValueValidator(1), MaxValueValidator(100)],\n }\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeCurrencyCodeField(models.CharField):\n \"\"\"\n A field used to store a three-letter currency code (eg. 
usd, eur, ...)\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n defaults = {\"max_length\": 3, \"help_text\": \"Three-letter ISO currency code\"}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeQuantumCurrencyAmountField(models.IntegerField):\n \"\"\"\n A field used to store currency amounts in cents (etc) as per stripe.\n \"\"\"\n\n pass\n\n\nclass StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n A legacy field to store currency amounts in dollars (etc).\n\n Stripe is always in cents. Historically djstripe stored everything in dollars.\n\n Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.\n We're planning on migrating existing fields in dj-stripe 3.0,\n see https://github.com/dj-stripe/dj-stripe/issues/955\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\"decimal_places\": 2, \"max_digits\": 8}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw value to decimal representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return val / decimal.Decimal(\"100\")\n\n\nclass StripeEnumField(models.CharField):\n def __init__(self, enum, *args, **kwargs):\n self.enum = enum\n choices = enum.choices\n defaults = {\"choices\": choices, \"max_length\": max(len(k) for k, v in choices)}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super().deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n\n\nclass StripeIdField(models.CharField):\n \"\"\"A field with enough space to hold any stripe ID.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign default args to this field.\n\n As per: https://stripe.com/docs/upgrades\n You can safely assume object IDs we generate will never exceed 255\n characters, but you should be able to handle IDs of up to that\n length.\n \"\"\"\n defaults = {\"max_length\": 255, \"blank\": False, \"null\": False}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeDateTimeField(models.DateTimeField):\n \"\"\"A field used to define a DateTimeField value according to djstripe logic.\"\"\"\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw timestamp value to a DateTime representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return convert_tstamp(val)\n\n\nclass JSONField(BaseJSONField):\n \"\"\"A field used to define a JSONField value according to djstripe logic.\"\"\"\n\n pass\n", "path": "djstripe/fields.py"}]} | 1,476 | 239 |
gh_patches_debug_31231 | rasdani/github-patches | git_diff | avocado-framework__avocado-4144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
avocado list --resolver doesn't work with hint files
Sample setup with QEMU's source code, plus `avocado.hint`:
```
[kinds]
tap = ./tests/test-*
[tap]
uri = $testpath
args =
kwargs =
```
Then, running:
```
$ avocado list --resolver ./tests/test-iov
exec-test ./tests/test-iov
```
Where `tap` would be expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `avocado/core/suite.py`
Content:
```
1 from enum import Enum
2 from uuid import uuid4
3
4 from .dispatcher import RunnerDispatcher
5 from .exceptions import (JobTestSuiteReferenceResolutionError,
6 OptionValidationError)
7 from .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,
8 loader)
9 from .resolver import resolve
10 from .settings import settings
11 from .tags import filter_test_tags
12 from .test import DryRunTest, Test
13 from .utils import resolutions_to_tasks
14 from .varianter import Varianter
15
16
17 class TestSuiteError(Exception):
18 pass
19
20
21 class TestSuiteStatus(Enum):
22 RESOLUTION_NOT_STARTED = object()
23 TESTS_NOT_FOUND = object()
24 TESTS_FOUND = object()
25 UNKNOWN = object()
26
27
28 class TestSuite:
29 def __init__(self, name, config=None, tests=None, job_config=None,
30 resolutions=None):
31 self.name = name
32 self.tests = tests
33 self.resolutions = resolutions
34
35 # Create a complete config dict with all registered options + custom
36 # config
37 self.config = settings.as_dict()
38 if job_config:
39 self.config.update(job_config)
40 if config:
41 self.config.update(config)
42
43 self._variants = None
44 self._references = None
45 self._runner = None
46 self._test_parameters = None
47
48 if (config.get('run.dry_run.enabled') and
49 self.config.get('run.test_runner') == 'runner'):
50 self._convert_to_dry_run()
51
52 def __len__(self):
53 """This is a convenient method to run `len()` over this object.
54
55 With this you can run: len(a_suite) and will return the same as
56 `len(a_suite.tests)`.
57 """
58 return self.size
59
60 def _convert_to_dry_run(self):
61 for i in range(self.size):
62 self.tests[i] = [DryRunTest, self.tests[i][1]]
63
64 @classmethod
65 def _from_config_with_loader(cls, config, name=None):
66 references = config.get('run.references')
67 ignore_missing = config.get('run.ignore_missing_references')
68 verbose = config.get('core.verbose')
69 subcommand = config.get('subcommand')
70
71 # To-be-removed: For some reason, avocado list will display more tests
72 # if in verbose mode. IMO, this is a little inconsistent with the 'run'
73 # command. This hack was needed to make one specific test happy.
74 tests_mode = DiscoverMode.DEFAULT
75 if subcommand == 'list':
76 if verbose:
77 tests_mode = DiscoverMode.ALL
78 else:
79 tests_mode = DiscoverMode.AVAILABLE
80
81 try:
82 loader.load_plugins(config)
83 tests = loader.discover(references,
84 force=ignore_missing,
85 which_tests=tests_mode)
86 if config.get("filter.by_tags.tags"):
87 tests = filter_test_tags(
88 tests,
89 config.get("filter.by_tags.tags"),
90 config.get("filter.by_tags.include_empty"),
91 config.get('filter.by_tags.include_empty_key'))
92 except (LoaderUnhandledReferenceError, LoaderError) as details:
93 raise TestSuiteError(details)
94
95 return cls(name=name or str(uuid4()),
96 config=config,
97 tests=tests)
98
99 @classmethod
100 def _from_config_with_resolver(cls, config, name=None):
101 ignore_missing = config.get('run.ignore_missing_references')
102 references = config.get('run.references')
103 try:
104 resolutions = resolve(references, ignore_missing=ignore_missing)
105 except JobTestSuiteReferenceResolutionError as details:
106 raise TestSuiteError(details)
107 tasks = resolutions_to_tasks(resolutions, config)
108
109 return cls(name=name or str(uuid4()),
110 config=config,
111 tests=tasks,
112 resolutions=resolutions)
113
114 def _get_stats_from_nrunner(self):
115 stats = {}
116 for test in self.tests:
117 stats = self._increment_dict_key_counter(stats, test.runnable.kind)
118 return stats
119
120 def _get_stats_from_runner(self):
121 stats = {}
122 mapping = loader.get_type_label_mapping()
123
124 for cls, _ in self.tests:
125 if isinstance(cls, str):
126 cls = Test
127 stats = self._increment_dict_key_counter(stats, mapping[cls])
128 return stats
129
130 def _get_tags_stats_from_nrunner(self):
131 stats = {}
132 for test in self.tests:
133 if test.runnable is None:
134 continue
135 tags = test.runnable.tags or {}
136 for tag in tags:
137 stats = self._increment_dict_key_counter(stats, tag)
138 return stats
139
140 def _get_tags_stats_from_runner(self):
141 stats = {}
142 for test in self.tests:
143 params = test[1]
144 for tag in params.get('tags', {}):
145 stats = self._increment_dict_key_counter(stats, tag)
146 return stats
147
148 @staticmethod
149 def _increment_dict_key_counter(dict_object, key):
150 try:
151 dict_object[key.lower()] += 1
152 except KeyError:
153 dict_object[key.lower()] = 1
154 return dict_object
155
156 @property
157 def references(self):
158 if self._references is None:
159 self._references = self.config.get('run.references')
160 return self._references
161
162 @property
163 def runner(self):
164 if self._runner is None:
165 runner_name = self.config.get('run.test_runner') or 'runner'
166 try:
167 runner_extension = RunnerDispatcher()[runner_name]
168 self._runner = runner_extension.obj
169 except KeyError:
170 raise TestSuiteError("Runner not implemented.")
171 return self._runner
172
173 @property
174 def size(self):
175 """The overall length/size of this test suite."""
176 if self.tests is None:
177 return 0
178 return len(self.tests)
179
180 @property
181 def stats(self):
182 """Return a statistics dict with the current tests."""
183 runner_name = self.config.get('run.test_runner') or 'runner'
184 if runner_name == 'runner':
185 return self._get_stats_from_runner()
186 elif runner_name == 'nrunner':
187 return self._get_stats_from_nrunner()
188 return {}
189
190 @property
191 def status(self):
192 if self.tests is None:
193 return TestSuiteStatus.RESOLUTION_NOT_STARTED
194 elif self.size == 0:
195 return TestSuiteStatus.TESTS_NOT_FOUND
196 elif self.size > 0:
197 return TestSuiteStatus.TESTS_FOUND
198 else:
199 return TestSuiteStatus.UNKNOWN
200
201 @property
202 def tags_stats(self):
203 """Return a statistics dict with the current tests tags."""
204 runner_name = self.config.get('run.test_runner') or 'runner'
205 if runner_name == 'runner':
206 return self._get_tags_stats_from_runner()
207 elif runner_name == 'nrunner':
208 return self._get_tags_stats_from_nrunner()
209 return {}
210
211 @property
212 def test_parameters(self):
213 """Placeholder for test parameters.
214
215 This is related to --test-parameters command line option or
216 (run.test_parameters).
217 """
218 if self._test_parameters is None:
219 self._test_parameters = {name: value for name, value
220 in self.config.get('run.test_parameters',
221 [])}
222 return self._test_parameters
223
224 @property
225 def variants(self):
226 if self._variants is None:
227 variants = Varianter()
228 if not variants.is_parsed():
229 try:
230 variants.parse(self.config)
231 except (IOError, ValueError) as details:
232 raise OptionValidationError("Unable to parse "
233 "variant: %s" % details)
234 self._variants = variants
235 return self._variants
236
237 def run(self, job):
238 """Run this test suite with the job context in mind.
239
240 :param job: A :class:`avocado.core.job.Job` instance.
241 :rtype: set
242 """
243 return self.runner.run_suite(job, self)
244
245 @classmethod
246 def from_config(cls, config, name=None, job_config=None):
247 """Helper method to create a TestSuite from config dicts.
248
249 This is different from the TestSuite() initialization because here we
250 are assuming that you need some help to build the test suite. Avocado
251 will try to resolve tests based on the configuration information insead
252 of assuming pre populated tests.
253
254 If you need to create a custom TestSuite, please use the TestSuite()
255 constructor instead of this method.
256
257 :param config: A config dict to be used on the desired test suite.
258 :type config: dict
259 :param name: The name of the test suite. This is optional and default
260 is a random uuid.
261 :type name: str
262 :param job_config: The job config dict (a global config). Use this to
263 avoid huge configs per test suite. This is also
264 optional.
265 :type job_config: dict
266 """
267 if job_config:
268 config.update(job_config)
269 runner = config.get('run.test_runner') or 'runner'
270 if runner == 'nrunner':
271 suite = cls._from_config_with_resolver(config, name)
272 else:
273 suite = cls._from_config_with_loader(config, name)
274
275 if not config.get('run.ignore_missing_references'):
276 if not suite.tests:
277 msg = ("Test Suite could not be create. No test references "
278 "provided nor any other arguments resolved into tests")
279 raise TestSuiteError(msg)
280
281 return suite
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/avocado/core/suite.py b/avocado/core/suite.py
--- a/avocado/core/suite.py
+++ b/avocado/core/suite.py
@@ -1,3 +1,18 @@
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+#
+# See LICENSE for more details.
+#
+# Copyright: Red Hat Inc. 2020
+# Author: Beraldo Leal <[email protected]>
+
+import os
from enum import Enum
from uuid import uuid4
@@ -6,6 +21,7 @@
OptionValidationError)
from .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,
loader)
+from .parser import HintParser
from .resolver import resolve
from .settings import settings
from .tags import filter_test_tags
@@ -101,7 +117,13 @@
ignore_missing = config.get('run.ignore_missing_references')
references = config.get('run.references')
try:
- resolutions = resolve(references, ignore_missing=ignore_missing)
+ hint = None
+ hint_filepath = '.avocado.hint'
+ if os.path.exists(hint_filepath):
+ hint = HintParser(hint_filepath)
+ resolutions = resolve(references,
+ hint=hint,
+ ignore_missing=ignore_missing)
except JobTestSuiteReferenceResolutionError as details:
raise TestSuiteError(details)
tasks = resolutions_to_tasks(resolutions, config)
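
A short usage sketch of the patched code path, using the reference from the issue (simplified and assumption-laden -- names follow the patch above, error handling omitted):

```python
import os

from avocado.core.parser import HintParser
from avocado.core.resolver import resolve

hint_filepath = '.avocado.hint'   # the patch looks for this file in the current directory
hint = HintParser(hint_filepath) if os.path.exists(hint_filepath) else None
resolutions = resolve(['./tests/test-iov'], hint=hint, ignore_missing=False)
# With the hint file from the issue in place, the resolution kind is expected
# to be reported as 'tap' rather than 'exec-test'.
```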
| {"golden_diff": "diff --git a/avocado/core/suite.py b/avocado/core/suite.py\n--- a/avocado/core/suite.py\n+++ b/avocado/core/suite.py\n@@ -1,3 +1,18 @@\n+# This program is free software; you can redistribute it and/or modify\n+# it under the terms of the GNU General Public License as published by\n+# the Free Software Foundation; either version 2 of the License, or\n+# (at your option) any later version.\n+#\n+# This program is distributed in the hope that it will be useful,\n+# but WITHOUT ANY WARRANTY; without even the implied warranty of\n+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n+#\n+# See LICENSE for more details.\n+#\n+# Copyright: Red Hat Inc. 2020\n+# Author: Beraldo Leal <[email protected]>\n+\n+import os\n from enum import Enum\n from uuid import uuid4\n \n@@ -6,6 +21,7 @@\n OptionValidationError)\n from .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,\n loader)\n+from .parser import HintParser\n from .resolver import resolve\n from .settings import settings\n from .tags import filter_test_tags\n@@ -101,7 +117,13 @@\n ignore_missing = config.get('run.ignore_missing_references')\n references = config.get('run.references')\n try:\n- resolutions = resolve(references, ignore_missing=ignore_missing)\n+ hint = None\n+ hint_filepath = '.avocado.hint'\n+ if os.path.exists(hint_filepath):\n+ hint = HintParser(hint_filepath)\n+ resolutions = resolve(references,\n+ hint=hint,\n+ ignore_missing=ignore_missing)\n except JobTestSuiteReferenceResolutionError as details:\n raise TestSuiteError(details)\n tasks = resolutions_to_tasks(resolutions, config)\n", "issue": "avocado list --resolver doesn't work with hint files\nSample setup with QEMU's source code, plus `avocado.hint`:\r\n\r\n``` \r\n[kinds]\r\ntap = ./tests/test-*\r\n\r\n[tap]\r\nuri = $testpath\r\nargs = \r\nkwargs =\r\n```\r\n\r\nThen, running:\r\n\r\n```\r\n$ avocado list --resolver ./tests/test-iov \r\nexec-test ./tests/test-iov\r\n```\r\n\r\nWhere `tap` would be expected.\r\n\n", "before_files": [{"content": "from enum import Enum\nfrom uuid import uuid4\n\nfrom .dispatcher import RunnerDispatcher\nfrom .exceptions import (JobTestSuiteReferenceResolutionError,\n OptionValidationError)\nfrom .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,\n loader)\nfrom .resolver import resolve\nfrom .settings import settings\nfrom .tags import filter_test_tags\nfrom .test import DryRunTest, Test\nfrom .utils import resolutions_to_tasks\nfrom .varianter import Varianter\n\n\nclass TestSuiteError(Exception):\n pass\n\n\nclass TestSuiteStatus(Enum):\n RESOLUTION_NOT_STARTED = object()\n TESTS_NOT_FOUND = object()\n TESTS_FOUND = object()\n UNKNOWN = object()\n\n\nclass TestSuite:\n def __init__(self, name, config=None, tests=None, job_config=None,\n resolutions=None):\n self.name = name\n self.tests = tests\n self.resolutions = resolutions\n\n # Create a complete config dict with all registered options + custom\n # config\n self.config = settings.as_dict()\n if job_config:\n self.config.update(job_config)\n if config:\n self.config.update(config)\n\n self._variants = None\n self._references = None\n self._runner = None\n self._test_parameters = None\n\n if (config.get('run.dry_run.enabled') and\n self.config.get('run.test_runner') == 'runner'):\n self._convert_to_dry_run()\n\n def __len__(self):\n \"\"\"This is a convenient method to run `len()` over this object.\n\n With this you can run: len(a_suite) and will return the same as\n `len(a_suite.tests)`.\n \"\"\"\n return 
self.size\n\n def _convert_to_dry_run(self):\n for i in range(self.size):\n self.tests[i] = [DryRunTest, self.tests[i][1]]\n\n @classmethod\n def _from_config_with_loader(cls, config, name=None):\n references = config.get('run.references')\n ignore_missing = config.get('run.ignore_missing_references')\n verbose = config.get('core.verbose')\n subcommand = config.get('subcommand')\n\n # To-be-removed: For some reason, avocado list will display more tests\n # if in verbose mode. IMO, this is a little inconsistent with the 'run'\n # command. This hack was needed to make one specific test happy.\n tests_mode = DiscoverMode.DEFAULT\n if subcommand == 'list':\n if verbose:\n tests_mode = DiscoverMode.ALL\n else:\n tests_mode = DiscoverMode.AVAILABLE\n\n try:\n loader.load_plugins(config)\n tests = loader.discover(references,\n force=ignore_missing,\n which_tests=tests_mode)\n if config.get(\"filter.by_tags.tags\"):\n tests = filter_test_tags(\n tests,\n config.get(\"filter.by_tags.tags\"),\n config.get(\"filter.by_tags.include_empty\"),\n config.get('filter.by_tags.include_empty_key'))\n except (LoaderUnhandledReferenceError, LoaderError) as details:\n raise TestSuiteError(details)\n\n return cls(name=name or str(uuid4()),\n config=config,\n tests=tests)\n\n @classmethod\n def _from_config_with_resolver(cls, config, name=None):\n ignore_missing = config.get('run.ignore_missing_references')\n references = config.get('run.references')\n try:\n resolutions = resolve(references, ignore_missing=ignore_missing)\n except JobTestSuiteReferenceResolutionError as details:\n raise TestSuiteError(details)\n tasks = resolutions_to_tasks(resolutions, config)\n\n return cls(name=name or str(uuid4()),\n config=config,\n tests=tasks,\n resolutions=resolutions)\n\n def _get_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n stats = self._increment_dict_key_counter(stats, test.runnable.kind)\n return stats\n\n def _get_stats_from_runner(self):\n stats = {}\n mapping = loader.get_type_label_mapping()\n\n for cls, _ in self.tests:\n if isinstance(cls, str):\n cls = Test\n stats = self._increment_dict_key_counter(stats, mapping[cls])\n return stats\n\n def _get_tags_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n if test.runnable is None:\n continue\n tags = test.runnable.tags or {}\n for tag in tags:\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n def _get_tags_stats_from_runner(self):\n stats = {}\n for test in self.tests:\n params = test[1]\n for tag in params.get('tags', {}):\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n @staticmethod\n def _increment_dict_key_counter(dict_object, key):\n try:\n dict_object[key.lower()] += 1\n except KeyError:\n dict_object[key.lower()] = 1\n return dict_object\n\n @property\n def references(self):\n if self._references is None:\n self._references = self.config.get('run.references')\n return self._references\n\n @property\n def runner(self):\n if self._runner is None:\n runner_name = self.config.get('run.test_runner') or 'runner'\n try:\n runner_extension = RunnerDispatcher()[runner_name]\n self._runner = runner_extension.obj\n except KeyError:\n raise TestSuiteError(\"Runner not implemented.\")\n return self._runner\n\n @property\n def size(self):\n \"\"\"The overall length/size of this test suite.\"\"\"\n if self.tests is None:\n return 0\n return len(self.tests)\n\n @property\n def stats(self):\n \"\"\"Return a statistics dict with the current tests.\"\"\"\n runner_name = 
self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_stats_from_nrunner()\n return {}\n\n @property\n def status(self):\n if self.tests is None:\n return TestSuiteStatus.RESOLUTION_NOT_STARTED\n elif self.size == 0:\n return TestSuiteStatus.TESTS_NOT_FOUND\n elif self.size > 0:\n return TestSuiteStatus.TESTS_FOUND\n else:\n return TestSuiteStatus.UNKNOWN\n\n @property\n def tags_stats(self):\n \"\"\"Return a statistics dict with the current tests tags.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_tags_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_tags_stats_from_nrunner()\n return {}\n\n @property\n def test_parameters(self):\n \"\"\"Placeholder for test parameters.\n\n This is related to --test-parameters command line option or\n (run.test_parameters).\n \"\"\"\n if self._test_parameters is None:\n self._test_parameters = {name: value for name, value\n in self.config.get('run.test_parameters',\n [])}\n return self._test_parameters\n\n @property\n def variants(self):\n if self._variants is None:\n variants = Varianter()\n if not variants.is_parsed():\n try:\n variants.parse(self.config)\n except (IOError, ValueError) as details:\n raise OptionValidationError(\"Unable to parse \"\n \"variant: %s\" % details)\n self._variants = variants\n return self._variants\n\n def run(self, job):\n \"\"\"Run this test suite with the job context in mind.\n\n :param job: A :class:`avocado.core.job.Job` instance.\n :rtype: set\n \"\"\"\n return self.runner.run_suite(job, self)\n\n @classmethod\n def from_config(cls, config, name=None, job_config=None):\n \"\"\"Helper method to create a TestSuite from config dicts.\n\n This is different from the TestSuite() initialization because here we\n are assuming that you need some help to build the test suite. Avocado\n will try to resolve tests based on the configuration information insead\n of assuming pre populated tests.\n\n If you need to create a custom TestSuite, please use the TestSuite()\n constructor instead of this method.\n\n :param config: A config dict to be used on the desired test suite.\n :type config: dict\n :param name: The name of the test suite. This is optional and default\n is a random uuid.\n :type name: str\n :param job_config: The job config dict (a global config). Use this to\n avoid huge configs per test suite. This is also\n optional.\n :type job_config: dict\n \"\"\"\n if job_config:\n config.update(job_config)\n runner = config.get('run.test_runner') or 'runner'\n if runner == 'nrunner':\n suite = cls._from_config_with_resolver(config, name)\n else:\n suite = cls._from_config_with_loader(config, name)\n\n if not config.get('run.ignore_missing_references'):\n if not suite.tests:\n msg = (\"Test Suite could not be create. 
No test references \"\n \"provided nor any other arguments resolved into tests\")\n raise TestSuiteError(msg)\n\n return suite\n", "path": "avocado/core/suite.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2020\n# Author: Beraldo Leal <[email protected]>\n\nimport os\nfrom enum import Enum\nfrom uuid import uuid4\n\nfrom .dispatcher import RunnerDispatcher\nfrom .exceptions import (JobTestSuiteReferenceResolutionError,\n OptionValidationError)\nfrom .loader import (DiscoverMode, LoaderError, LoaderUnhandledReferenceError,\n loader)\nfrom .parser import HintParser\nfrom .resolver import resolve\nfrom .settings import settings\nfrom .tags import filter_test_tags\nfrom .test import DryRunTest, Test\nfrom .utils import resolutions_to_tasks\nfrom .varianter import Varianter\n\n\nclass TestSuiteError(Exception):\n pass\n\n\nclass TestSuiteStatus(Enum):\n RESOLUTION_NOT_STARTED = object()\n TESTS_NOT_FOUND = object()\n TESTS_FOUND = object()\n UNKNOWN = object()\n\n\nclass TestSuite:\n def __init__(self, name, config=None, tests=None, job_config=None,\n resolutions=None):\n self.name = name\n self.tests = tests\n self.resolutions = resolutions\n\n # Create a complete config dict with all registered options + custom\n # config\n self.config = settings.as_dict()\n if job_config:\n self.config.update(job_config)\n if config:\n self.config.update(config)\n\n self._variants = None\n self._references = None\n self._runner = None\n self._test_parameters = None\n\n if (config.get('run.dry_run.enabled') and\n self.config.get('run.test_runner') == 'runner'):\n self._convert_to_dry_run()\n\n def __len__(self):\n \"\"\"This is a convenient method to run `len()` over this object.\n\n With this you can run: len(a_suite) and will return the same as\n `len(a_suite.tests)`.\n \"\"\"\n return self.size\n\n def _convert_to_dry_run(self):\n for i in range(self.size):\n self.tests[i] = [DryRunTest, self.tests[i][1]]\n\n @classmethod\n def _from_config_with_loader(cls, config, name=None):\n references = config.get('run.references')\n ignore_missing = config.get('run.ignore_missing_references')\n verbose = config.get('core.verbose')\n subcommand = config.get('subcommand')\n\n # To-be-removed: For some reason, avocado list will display more tests\n # if in verbose mode. IMO, this is a little inconsistent with the 'run'\n # command. 
This hack was needed to make one specific test happy.\n tests_mode = DiscoverMode.DEFAULT\n if subcommand == 'list':\n if verbose:\n tests_mode = DiscoverMode.ALL\n else:\n tests_mode = DiscoverMode.AVAILABLE\n\n try:\n loader.load_plugins(config)\n tests = loader.discover(references,\n force=ignore_missing,\n which_tests=tests_mode)\n if config.get(\"filter.by_tags.tags\"):\n tests = filter_test_tags(\n tests,\n config.get(\"filter.by_tags.tags\"),\n config.get(\"filter.by_tags.include_empty\"),\n config.get('filter.by_tags.include_empty_key'))\n except (LoaderUnhandledReferenceError, LoaderError) as details:\n raise TestSuiteError(details)\n\n return cls(name=name or str(uuid4()),\n config=config,\n tests=tests)\n\n @classmethod\n def _from_config_with_resolver(cls, config, name=None):\n ignore_missing = config.get('run.ignore_missing_references')\n references = config.get('run.references')\n try:\n hint = None\n hint_filepath = '.avocado.hint'\n if os.path.exists(hint_filepath):\n hint = HintParser(hint_filepath)\n resolutions = resolve(references,\n hint=hint,\n ignore_missing=ignore_missing)\n except JobTestSuiteReferenceResolutionError as details:\n raise TestSuiteError(details)\n tasks = resolutions_to_tasks(resolutions, config)\n\n return cls(name=name or str(uuid4()),\n config=config,\n tests=tasks,\n resolutions=resolutions)\n\n def _get_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n stats = self._increment_dict_key_counter(stats, test.runnable.kind)\n return stats\n\n def _get_stats_from_runner(self):\n stats = {}\n mapping = loader.get_type_label_mapping()\n\n for cls, _ in self.tests:\n if isinstance(cls, str):\n cls = Test\n stats = self._increment_dict_key_counter(stats, mapping[cls])\n return stats\n\n def _get_tags_stats_from_nrunner(self):\n stats = {}\n for test in self.tests:\n if test.runnable is None:\n continue\n tags = test.runnable.tags or {}\n for tag in tags:\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n def _get_tags_stats_from_runner(self):\n stats = {}\n for test in self.tests:\n params = test[1]\n for tag in params.get('tags', {}):\n stats = self._increment_dict_key_counter(stats, tag)\n return stats\n\n @staticmethod\n def _increment_dict_key_counter(dict_object, key):\n try:\n dict_object[key.lower()] += 1\n except KeyError:\n dict_object[key.lower()] = 1\n return dict_object\n\n @property\n def references(self):\n if self._references is None:\n self._references = self.config.get('run.references')\n return self._references\n\n @property\n def runner(self):\n if self._runner is None:\n runner_name = self.config.get('run.test_runner') or 'runner'\n try:\n runner_extension = RunnerDispatcher()[runner_name]\n self._runner = runner_extension.obj\n except KeyError:\n raise TestSuiteError(\"Runner not implemented.\")\n return self._runner\n\n @property\n def size(self):\n \"\"\"The overall length/size of this test suite.\"\"\"\n if self.tests is None:\n return 0\n return len(self.tests)\n\n @property\n def stats(self):\n \"\"\"Return a statistics dict with the current tests.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_stats_from_nrunner()\n return {}\n\n @property\n def status(self):\n if self.tests is None:\n return TestSuiteStatus.RESOLUTION_NOT_STARTED\n elif self.size == 0:\n return TestSuiteStatus.TESTS_NOT_FOUND\n elif self.size > 0:\n return 
TestSuiteStatus.TESTS_FOUND\n else:\n return TestSuiteStatus.UNKNOWN\n\n @property\n def tags_stats(self):\n \"\"\"Return a statistics dict with the current tests tags.\"\"\"\n runner_name = self.config.get('run.test_runner') or 'runner'\n if runner_name == 'runner':\n return self._get_tags_stats_from_runner()\n elif runner_name == 'nrunner':\n return self._get_tags_stats_from_nrunner()\n return {}\n\n @property\n def test_parameters(self):\n \"\"\"Placeholder for test parameters.\n\n This is related to --test-parameters command line option or\n (run.test_parameters).\n \"\"\"\n if self._test_parameters is None:\n self._test_parameters = {name: value for name, value\n in self.config.get('run.test_parameters',\n [])}\n return self._test_parameters\n\n @property\n def variants(self):\n if self._variants is None:\n variants = Varianter()\n if not variants.is_parsed():\n try:\n variants.parse(self.config)\n except (IOError, ValueError) as details:\n raise OptionValidationError(\"Unable to parse \"\n \"variant: %s\" % details)\n self._variants = variants\n return self._variants\n\n def run(self, job):\n \"\"\"Run this test suite with the job context in mind.\n\n :param job: A :class:`avocado.core.job.Job` instance.\n :rtype: set\n \"\"\"\n return self.runner.run_suite(job, self)\n\n @classmethod\n def from_config(cls, config, name=None, job_config=None):\n \"\"\"Helper method to create a TestSuite from config dicts.\n\n This is different from the TestSuite() initialization because here we\n are assuming that you need some help to build the test suite. Avocado\n will try to resolve tests based on the configuration information insead\n of assuming pre populated tests.\n\n If you need to create a custom TestSuite, please use the TestSuite()\n constructor instead of this method.\n\n :param config: A config dict to be used on the desired test suite.\n :type config: dict\n :param name: The name of the test suite. This is optional and default\n is a random uuid.\n :type name: str\n :param job_config: The job config dict (a global config). Use this to\n avoid huge configs per test suite. This is also\n optional.\n :type job_config: dict\n \"\"\"\n if job_config:\n config.update(job_config)\n runner = config.get('run.test_runner') or 'runner'\n if runner == 'nrunner':\n suite = cls._from_config_with_resolver(config, name)\n else:\n suite = cls._from_config_with_loader(config, name)\n\n if not config.get('run.ignore_missing_references'):\n if not suite.tests:\n msg = (\"Test Suite could not be create. No test references \"\n \"provided nor any other arguments resolved into tests\")\n raise TestSuiteError(msg)\n\n return suite\n", "path": "avocado/core/suite.py"}]} | 3,099 | 404 |
gh_patches_debug_23652 | rasdani/github-patches | git_diff | Mailu__Mailu-2150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix key error in dictionary
## What type of PR?
(Feature, enhancement, bug-fix, documentation)
## What does this PR do?
### Related issue(s)
- Auto close an issue like: closes #2145
## Prerequisites
Before we can consider review and merge, please make sure the following list is done and checked.
If an entry is not applicable, you can check it or remove it from the list.
- [ ] In case of feature or enhancement: documentation updated accordingly
- [ ] Unless it's docs or a minor change: add [changelog](https://mailu.io/master/contributors/workflow.html#changelog) entry file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/auth.py`
Content:
```
1 from mailu import models, utils
2 from mailu.internal import internal, nginx
3 from flask import current_app as app
4
5 import flask
6 import flask_login
7 import base64
8
9 @internal.route("/auth/email")
10 def nginx_authentication():
11 """ Main authentication endpoint for Nginx email server
12 """
13 client_ip = flask.request.headers["Client-Ip"]
14 headers = flask.request.headers
15 if headers["Auth-Port"] == '25' and headers['Auth-Method'] == 'plain':
16 response = flask.Response()
17 response.headers['Auth-Status'] = 'AUTH not supported'
18 response.headers['Auth-Error-Code'] = '502 5.5.1'
19 utils.limiter.rate_limit_ip(client_ip)
20 return response
21 if utils.limiter.should_rate_limit_ip(client_ip):
22 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
23 response = flask.Response()
24 response.headers['Auth-Status'] = status
25 response.headers['Auth-Error-Code'] = code
26 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
27 response.headers['Auth-Wait'] = '3'
28 return response
29 headers = nginx.handle_authentication(flask.request.headers)
30 response = flask.Response()
31 for key, value in headers.items():
32 response.headers[key] = str(value)
33 is_valid_user = False
34 is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
35 if response.headers.get("Auth-User-Exists"):
36 username = response.headers["Auth-User"]
37 if utils.limiter.should_rate_limit_user(username, client_ip):
38 # FIXME could be done before handle_authentication()
39 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
40 response = flask.Response()
41 response.headers['Auth-Status'] = status
42 response.headers['Auth-Error-Code'] = code
43 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
44 response.headers['Auth-Wait'] = '3'
45 return response
46 is_valid_user = True
47 if headers.get("Auth-Status") == "OK":
48 utils.limiter.exempt_ip_from_ratelimits(client_ip)
49 elif is_valid_user:
50 utils.limiter.rate_limit_user(username, client_ip)
51 elif not is_from_webmail:
52 utils.limiter.rate_limit_ip(client_ip)
53 return response
54
55 @internal.route("/auth/admin")
56 def admin_authentication():
57 """ Fails if the user is not an authenticated admin.
58 """
59 if (not flask_login.current_user.is_anonymous
60 and flask_login.current_user.global_admin
61 and flask_login.current_user.enabled):
62 return ""
63 return flask.abort(403)
64
65 @internal.route("/auth/user")
66 def user_authentication():
67 """ Fails if the user is not authenticated.
68 """
69 if (not flask_login.current_user.is_anonymous
70 and flask_login.current_user.enabled):
71 response = flask.Response()
72 email = flask_login.current_user.get_id()
73 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, email, "")
74 response.headers["X-User-Token"] = utils.gen_temp_token(email, flask.session)
75 return response
76 return flask.abort(403)
77
78
79 @internal.route("/auth/basic")
80 def basic_authentication():
81 """ Tries to authenticate using the Authorization header.
82 """
83 client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)
84 if utils.limiter.should_rate_limit_ip(client_ip):
85 response = flask.Response(status=401)
86 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit from one source exceeded"'
87 response.headers['Retry-After'] = '60'
88 return response
89 authorization = flask.request.headers.get("Authorization")
90 if authorization and authorization.startswith("Basic "):
91 encoded = authorization.replace("Basic ", "")
92 user_email, password = base64.b64decode(encoded).split(b":", 1)
93 user_email = user_email.decode("utf8")
94 if utils.limiter.should_rate_limit_user(user_email, client_ip):
95 response = flask.Response(status=401)
96 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit for this username exceeded"'
97 response.headers['Retry-After'] = '60'
98 return response
99 user = models.User.query.get(user_email)
100 if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, "web"):
101 response = flask.Response()
102 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, user.email, "")
103 utils.limiter.exempt_ip_from_ratelimits(client_ip)
104 return response
105 utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)
106 response = flask.Response(status=401)
107 response.headers["WWW-Authenticate"] = 'Basic realm="Login Required"'
108 return response
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py
--- a/core/admin/mailu/internal/views/auth.py
+++ b/core/admin/mailu/internal/views/auth.py
@@ -18,7 +18,8 @@
response.headers['Auth-Error-Code'] = '502 5.5.1'
utils.limiter.rate_limit_ip(client_ip)
return response
- if utils.limiter.should_rate_limit_ip(client_ip):
+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
+ if not is_from_webmail and utils.limiter.should_rate_limit_ip(client_ip):
status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
response = flask.Response()
response.headers['Auth-Status'] = status
@@ -31,7 +32,6 @@
for key, value in headers.items():
response.headers[key] = str(value)
is_valid_user = False
- is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
if response.headers.get("Auth-User-Exists"):
username = response.headers["Auth-User"]
if utils.limiter.should_rate_limit_user(username, client_ip):
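
Why hoisting the assignment matters (a sketch that only restates lines from the handler above):

```python
# Order of operations before the patch:
headers = flask.request.headers                                # original request headers
headers = nginx.handle_authentication(flask.request.headers)   # rebinds the name
is_from_webmail = headers['Auth-Port'] in ['10143', '10025']   # may raise KeyError
# The dict returned by handle_authentication() is not guaranteed to contain
# 'Auth-Port', hence the key error in the issue title; computing is_from_webmail
# before that call (as the patch does) reads the key from the real request headers.
```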
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py\n--- a/core/admin/mailu/internal/views/auth.py\n+++ b/core/admin/mailu/internal/views/auth.py\n@@ -18,7 +18,8 @@\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n- if utils.limiter.should_rate_limit_ip(client_ip):\n+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n+ if not is_from_webmail and utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n@@ -31,7 +32,6 @@\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n- is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n", "issue": "fix key error in dictionary\n## What type of PR?\r\n\r\n(Feature, enhancement, bug-fix, documentation)\r\n\r\n## What does this PR do?\r\n\r\n### Related issue(s)\r\n- Auto close an issue like: closes #2145\r\n\r\n## Prerequisites\r\nBefore we can consider review and merge, please make sure the following list is done and checked.\r\nIf an entry in not applicable, you can check it or remove it from the list.\r\n\r\n- [ ] In case of feature or enhancement: documentation updated accordingly\r\n- [ ] Unless it's docs or a minor change: add [changelog](https://mailu.io/master/contributors/workflow.html#changelog) entry file.\r\n\n", "before_files": [{"content": "from mailu import models, utils\nfrom mailu.internal import internal, nginx\nfrom flask import current_app as app\n\nimport flask\nimport flask_login\nimport base64\n\[email protected](\"/auth/email\")\ndef nginx_authentication():\n \"\"\" Main authentication endpoint for Nginx email server\n \"\"\"\n client_ip = flask.request.headers[\"Client-Ip\"]\n headers = flask.request.headers\n if headers[\"Auth-Port\"] == '25' and headers['Auth-Method'] == 'plain':\n response = flask.Response()\n response.headers['Auth-Status'] = 'AUTH not supported'\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n if utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n headers = nginx.handle_authentication(flask.request.headers)\n response = flask.Response()\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n # FIXME could be done before handle_authentication()\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n is_valid_user = 
True\n if headers.get(\"Auth-Status\") == \"OK\":\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n elif not is_from_webmail:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n\[email protected](\"/auth/admin\")\ndef admin_authentication():\n \"\"\" Fails if the user is not an authenticated admin.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.global_admin\n and flask_login.current_user.enabled):\n return \"\"\n return flask.abort(403)\n\[email protected](\"/auth/user\")\ndef user_authentication():\n \"\"\" Fails if the user is not authenticated.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.enabled):\n response = flask.Response()\n email = flask_login.current_user.get_id()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, email, \"\")\n response.headers[\"X-User-Token\"] = utils.gen_temp_token(email, flask.session)\n return response\n return flask.abort(403)\n\n\[email protected](\"/auth/basic\")\ndef basic_authentication():\n \"\"\" Tries to authenticate using the Authorization header.\n \"\"\"\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n if utils.limiter.should_rate_limit_ip(client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit from one source exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n authorization = flask.request.headers.get(\"Authorization\")\n if authorization and authorization.startswith(\"Basic \"):\n encoded = authorization.replace(\"Basic \", \"\")\n user_email, password = base64.b64decode(encoded).split(b\":\", 1)\n user_email = user_email.decode(\"utf8\")\n if utils.limiter.should_rate_limit_user(user_email, client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit for this username exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n user = models.User.query.get(user_email)\n if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, \"web\"):\n response = flask.Response()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, user.email, \"\")\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n return response\n utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Login Required\"'\n return response\n", "path": "core/admin/mailu/internal/views/auth.py"}], "after_files": [{"content": "from mailu import models, utils\nfrom mailu.internal import internal, nginx\nfrom flask import current_app as app\n\nimport flask\nimport flask_login\nimport base64\n\[email protected](\"/auth/email\")\ndef nginx_authentication():\n \"\"\" Main authentication endpoint for Nginx email server\n \"\"\"\n client_ip = flask.request.headers[\"Client-Ip\"]\n headers = flask.request.headers\n if headers[\"Auth-Port\"] == '25' and headers['Auth-Method'] == 'plain':\n response = flask.Response()\n response.headers['Auth-Status'] = 'AUTH not supported'\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if not is_from_webmail and 
utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n headers = nginx.handle_authentication(flask.request.headers)\n response = flask.Response()\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n # FIXME could be done before handle_authentication()\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n is_valid_user = True\n if headers.get(\"Auth-Status\") == \"OK\":\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n elif not is_from_webmail:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n\[email protected](\"/auth/admin\")\ndef admin_authentication():\n \"\"\" Fails if the user is not an authenticated admin.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.global_admin\n and flask_login.current_user.enabled):\n return \"\"\n return flask.abort(403)\n\[email protected](\"/auth/user\")\ndef user_authentication():\n \"\"\" Fails if the user is not authenticated.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.enabled):\n response = flask.Response()\n email = flask_login.current_user.get_id()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, email, \"\")\n response.headers[\"X-User-Token\"] = utils.gen_temp_token(email, flask.session)\n return response\n return flask.abort(403)\n\n\[email protected](\"/auth/basic\")\ndef basic_authentication():\n \"\"\" Tries to authenticate using the Authorization header.\n \"\"\"\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n if utils.limiter.should_rate_limit_ip(client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit from one source exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n authorization = flask.request.headers.get(\"Authorization\")\n if authorization and authorization.startswith(\"Basic \"):\n encoded = authorization.replace(\"Basic \", \"\")\n user_email, password = base64.b64decode(encoded).split(b\":\", 1)\n user_email = user_email.decode(\"utf8\")\n if utils.limiter.should_rate_limit_user(user_email, client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit for this username exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n user = models.User.query.get(user_email)\n if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, \"web\"):\n response = flask.Response()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, user.email, \"\")\n 
utils.limiter.exempt_ip_from_ratelimits(client_ip)\n return response\n utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Login Required\"'\n return response\n", "path": "core/admin/mailu/internal/views/auth.py"}]} | 1,715 | 292 |
gh_patches_debug_40739 | rasdani/github-patches | git_diff | encode__starlette-2231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No module named 'exceptiongroup' when upgraded to starlette 0.31.0
After upgrading to starlette 0.31.0 today, we run into the following exception when importing starlette:
```
from starlette import applications, requests, responses, routing
2023-07-24 15:47:43 File "/opt/conda/lib/python3.8/site-packages/starlette/applications.py", line 8, in <module>
2023-07-24 15:47:43 from starlette.middleware.base import BaseHTTPMiddleware
2023-07-24 15:47:43 File "/opt/conda/lib/python3.8/site-packages/starlette/middleware/base.py", line 14, in <module>
2023-07-24 15:47:43 from exceptiongroup import BaseExceptionGroup
2023-07-24 15:47:43 ModuleNotFoundError: No module named 'exceptiongroup'
```
This is likely introduced from https://github.com/encode/starlette/pull/2211/files
Apply AnyIO upgrade changes only on test suite
- Closes https://github.com/encode/starlette/issues/2227
- Closes https://github.com/encode/starlette/discussions/2228
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/_utils.py`
Content:
```
1 import asyncio
2 import functools
3 import sys
4 import typing
5
6 if sys.version_info >= (3, 10): # pragma: no cover
7 from typing import TypeGuard
8 else: # pragma: no cover
9 from typing_extensions import TypeGuard
10
11 T = typing.TypeVar("T")
12 AwaitableCallable = typing.Callable[..., typing.Awaitable[T]]
13
14
15 @typing.overload
16 def is_async_callable(obj: AwaitableCallable[T]) -> TypeGuard[AwaitableCallable[T]]:
17 ...
18
19
20 @typing.overload
21 def is_async_callable(obj: typing.Any) -> TypeGuard[AwaitableCallable[typing.Any]]:
22 ...
23
24
25 def is_async_callable(obj: typing.Any) -> typing.Any:
26 while isinstance(obj, functools.partial):
27 obj = obj.func
28
29 return asyncio.iscoroutinefunction(obj) or (
30 callable(obj) and asyncio.iscoroutinefunction(obj.__call__)
31 )
32
33
34 T_co = typing.TypeVar("T_co", covariant=True)
35
36
37 class AwaitableOrContextManager(
38 typing.Awaitable[T_co], typing.AsyncContextManager[T_co], typing.Protocol[T_co]
39 ):
40 ...
41
42
43 class SupportsAsyncClose(typing.Protocol):
44 async def close(self) -> None:
45 ... # pragma: no cover
46
47
48 SupportsAsyncCloseType = typing.TypeVar(
49 "SupportsAsyncCloseType", bound=SupportsAsyncClose, covariant=False
50 )
51
52
53 class AwaitableOrContextManagerWrapper(typing.Generic[SupportsAsyncCloseType]):
54 __slots__ = ("aw", "entered")
55
56 def __init__(self, aw: typing.Awaitable[SupportsAsyncCloseType]) -> None:
57 self.aw = aw
58
59 def __await__(self) -> typing.Generator[typing.Any, None, SupportsAsyncCloseType]:
60 return self.aw.__await__()
61
62 async def __aenter__(self) -> SupportsAsyncCloseType:
63 self.entered = await self.aw
64 return self.entered
65
66 async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:
67 await self.entered.close()
68 return None
69
```
Path: `starlette/middleware/base.py`
Content:
```
1 import sys
2 import typing
3 from contextlib import contextmanager
4
5 import anyio
6 from anyio.abc import ObjectReceiveStream, ObjectSendStream
7
8 from starlette.background import BackgroundTask
9 from starlette.requests import ClientDisconnect, Request
10 from starlette.responses import ContentStream, Response, StreamingResponse
11 from starlette.types import ASGIApp, Message, Receive, Scope, Send
12
13 if sys.version_info < (3, 11): # pragma: no cover
14 from exceptiongroup import BaseExceptionGroup
15
16 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
17 DispatchFunction = typing.Callable[
18 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
19 ]
20 T = typing.TypeVar("T")
21
22
23 @contextmanager
24 def _convert_excgroups() -> typing.Generator[None, None, None]:
25 try:
26 yield
27 except BaseException as exc:
28 while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:
29 exc = exc.exceptions[0]
30
31 raise exc
32
33
34 class _CachedRequest(Request):
35 """
36 If the user calls Request.body() from their dispatch function
37 we cache the entire request body in memory and pass that to downstream middlewares,
38 but if they call Request.stream() then all we do is send an
39 empty body so that downstream things don't hang forever.
40 """
41
42 def __init__(self, scope: Scope, receive: Receive):
43 super().__init__(scope, receive)
44 self._wrapped_rcv_disconnected = False
45 self._wrapped_rcv_consumed = False
46 self._wrapped_rc_stream = self.stream()
47
48 async def wrapped_receive(self) -> Message:
49 # wrapped_rcv state 1: disconnected
50 if self._wrapped_rcv_disconnected:
51 # we've already sent a disconnect to the downstream app
52 # we don't need to wait to get another one
53 # (although most ASGI servers will just keep sending it)
54 return {"type": "http.disconnect"}
55 # wrapped_rcv state 1: consumed but not yet disconnected
56 if self._wrapped_rcv_consumed:
57 # since the downstream app has consumed us all that is left
58 # is to send it a disconnect
59 if self._is_disconnected:
60 # the middleware has already seen the disconnect
61 # since we know the client is disconnected no need to wait
62 # for the message
63 self._wrapped_rcv_disconnected = True
64 return {"type": "http.disconnect"}
65 # we don't know yet if the client is disconnected or not
66 # so we'll wait until we get that message
67 msg = await self.receive()
68 if msg["type"] != "http.disconnect": # pragma: no cover
69 # at this point a disconnect is all that we should be receiving
70 # if we get something else, things went wrong somewhere
71 raise RuntimeError(f"Unexpected message received: {msg['type']}")
72 return msg
73
74 # wrapped_rcv state 3: not yet consumed
75 if getattr(self, "_body", None) is not None:
76 # body() was called, we return it even if the client disconnected
77 self._wrapped_rcv_consumed = True
78 return {
79 "type": "http.request",
80 "body": self._body,
81 "more_body": False,
82 }
83 elif self._stream_consumed:
84 # stream() was called to completion
85 # return an empty body so that downstream apps don't hang
86 # waiting for a disconnect
87 self._wrapped_rcv_consumed = True
88 return {
89 "type": "http.request",
90 "body": b"",
91 "more_body": False,
92 }
93 else:
94 # body() was never called and stream() wasn't consumed
95 try:
96 stream = self.stream()
97 chunk = await stream.__anext__()
98 self._wrapped_rcv_consumed = self._stream_consumed
99 return {
100 "type": "http.request",
101 "body": chunk,
102 "more_body": not self._stream_consumed,
103 }
104 except ClientDisconnect:
105 self._wrapped_rcv_disconnected = True
106 return {"type": "http.disconnect"}
107
108
109 class BaseHTTPMiddleware:
110 def __init__(
111 self, app: ASGIApp, dispatch: typing.Optional[DispatchFunction] = None
112 ) -> None:
113 self.app = app
114 self.dispatch_func = self.dispatch if dispatch is None else dispatch
115
116 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
117 if scope["type"] != "http":
118 await self.app(scope, receive, send)
119 return
120
121 request = _CachedRequest(scope, receive)
122 wrapped_receive = request.wrapped_receive
123 response_sent = anyio.Event()
124
125 async def call_next(request: Request) -> Response:
126 app_exc: typing.Optional[Exception] = None
127 send_stream: ObjectSendStream[typing.MutableMapping[str, typing.Any]]
128 recv_stream: ObjectReceiveStream[typing.MutableMapping[str, typing.Any]]
129 send_stream, recv_stream = anyio.create_memory_object_stream()
130
131 async def receive_or_disconnect() -> Message:
132 if response_sent.is_set():
133 return {"type": "http.disconnect"}
134
135 async with anyio.create_task_group() as task_group:
136
137 async def wrap(func: typing.Callable[[], typing.Awaitable[T]]) -> T:
138 result = await func()
139 task_group.cancel_scope.cancel()
140 return result
141
142 task_group.start_soon(wrap, response_sent.wait)
143 message = await wrap(wrapped_receive)
144
145 if response_sent.is_set():
146 return {"type": "http.disconnect"}
147
148 return message
149
150 async def close_recv_stream_on_response_sent() -> None:
151 await response_sent.wait()
152 recv_stream.close()
153
154 async def send_no_error(message: Message) -> None:
155 try:
156 await send_stream.send(message)
157 except anyio.BrokenResourceError:
158 # recv_stream has been closed, i.e. response_sent has been set.
159 return
160
161 async def coro() -> None:
162 nonlocal app_exc
163
164 async with send_stream:
165 try:
166 await self.app(scope, receive_or_disconnect, send_no_error)
167 except Exception as exc:
168 app_exc = exc
169
170 task_group.start_soon(close_recv_stream_on_response_sent)
171 task_group.start_soon(coro)
172
173 try:
174 message = await recv_stream.receive()
175 info = message.get("info", None)
176 if message["type"] == "http.response.debug" and info is not None:
177 message = await recv_stream.receive()
178 except anyio.EndOfStream:
179 if app_exc is not None:
180 raise app_exc
181 raise RuntimeError("No response returned.")
182
183 assert message["type"] == "http.response.start"
184
185 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
186 async with recv_stream:
187 async for message in recv_stream:
188 assert message["type"] == "http.response.body"
189 body = message.get("body", b"")
190 if body:
191 yield body
192 if not message.get("more_body", False):
193 break
194
195 if app_exc is not None:
196 raise app_exc
197
198 response = _StreamingResponse(
199 status_code=message["status"], content=body_stream(), info=info
200 )
201 response.raw_headers = message["headers"]
202 return response
203
204 with _convert_excgroups():
205 async with anyio.create_task_group() as task_group:
206 response = await self.dispatch_func(request, call_next)
207 await response(scope, wrapped_receive, send)
208 response_sent.set()
209
210 async def dispatch(
211 self, request: Request, call_next: RequestResponseEndpoint
212 ) -> Response:
213 raise NotImplementedError() # pragma: no cover
214
215
216 class _StreamingResponse(StreamingResponse):
217 def __init__(
218 self,
219 content: ContentStream,
220 status_code: int = 200,
221 headers: typing.Optional[typing.Mapping[str, str]] = None,
222 media_type: typing.Optional[str] = None,
223 background: typing.Optional[BackgroundTask] = None,
224 info: typing.Optional[typing.Mapping[str, typing.Any]] = None,
225 ) -> None:
226 self._info = info
227 super().__init__(content, status_code, headers, media_type, background)
228
229 async def stream_response(self, send: Send) -> None:
230 if self._info:
231 await send({"type": "http.response.debug", "info": self._info})
232 return await super().stream_response(send)
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/_utils.py b/starlette/_utils.py
--- a/starlette/_utils.py
+++ b/starlette/_utils.py
@@ -2,12 +2,20 @@
import functools
import sys
import typing
+from contextlib import contextmanager
if sys.version_info >= (3, 10): # pragma: no cover
from typing import TypeGuard
else: # pragma: no cover
from typing_extensions import TypeGuard
+has_exceptiongroups = True
+if sys.version_info < (3, 11): # pragma: no cover
+ try:
+ from exceptiongroup import BaseExceptionGroup
+ except ImportError:
+ has_exceptiongroups = False
+
T = typing.TypeVar("T")
AwaitableCallable = typing.Callable[..., typing.Awaitable[T]]
@@ -66,3 +74,15 @@
async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:
await self.entered.close()
return None
+
+
+@contextmanager
+def collapse_excgroups() -> typing.Generator[None, None, None]:
+ try:
+ yield
+ except BaseException as exc:
+ if has_exceptiongroups:
+ while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:
+ exc = exc.exceptions[0] # pragma: no cover
+
+ raise exc
diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -1,18 +1,14 @@
-import sys
import typing
-from contextlib import contextmanager
import anyio
from anyio.abc import ObjectReceiveStream, ObjectSendStream
+from starlette._utils import collapse_excgroups
from starlette.background import BackgroundTask
from starlette.requests import ClientDisconnect, Request
from starlette.responses import ContentStream, Response, StreamingResponse
from starlette.types import ASGIApp, Message, Receive, Scope, Send
-if sys.version_info < (3, 11): # pragma: no cover
- from exceptiongroup import BaseExceptionGroup
-
RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
DispatchFunction = typing.Callable[
[Request, RequestResponseEndpoint], typing.Awaitable[Response]
@@ -20,17 +16,6 @@
T = typing.TypeVar("T")
-@contextmanager
-def _convert_excgroups() -> typing.Generator[None, None, None]:
- try:
- yield
- except BaseException as exc:
- while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:
- exc = exc.exceptions[0]
-
- raise exc
-
-
class _CachedRequest(Request):
"""
If the user calls Request.body() from their dispatch function
@@ -201,7 +186,7 @@
response.raw_headers = message["headers"]
return response
- with _convert_excgroups():
+ with collapse_excgroups():
async with anyio.create_task_group() as task_group:
response = await self.dispatch_func(request, call_next)
await response(scope, wrapped_receive, send)
| {"golden_diff": "diff --git a/starlette/_utils.py b/starlette/_utils.py\n--- a/starlette/_utils.py\n+++ b/starlette/_utils.py\n@@ -2,12 +2,20 @@\n import functools\n import sys\n import typing\n+from contextlib import contextmanager\n \n if sys.version_info >= (3, 10): # pragma: no cover\n from typing import TypeGuard\n else: # pragma: no cover\n from typing_extensions import TypeGuard\n \n+has_exceptiongroups = True\n+if sys.version_info < (3, 11): # pragma: no cover\n+ try:\n+ from exceptiongroup import BaseExceptionGroup\n+ except ImportError:\n+ has_exceptiongroups = False\n+\n T = typing.TypeVar(\"T\")\n AwaitableCallable = typing.Callable[..., typing.Awaitable[T]]\n \n@@ -66,3 +74,15 @@\n async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:\n await self.entered.close()\n return None\n+\n+\n+@contextmanager\n+def collapse_excgroups() -> typing.Generator[None, None, None]:\n+ try:\n+ yield\n+ except BaseException as exc:\n+ if has_exceptiongroups:\n+ while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:\n+ exc = exc.exceptions[0] # pragma: no cover\n+\n+ raise exc\ndiff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -1,18 +1,14 @@\n-import sys\n import typing\n-from contextlib import contextmanager\n \n import anyio\n from anyio.abc import ObjectReceiveStream, ObjectSendStream\n \n+from starlette._utils import collapse_excgroups\n from starlette.background import BackgroundTask\n from starlette.requests import ClientDisconnect, Request\n from starlette.responses import ContentStream, Response, StreamingResponse\n from starlette.types import ASGIApp, Message, Receive, Scope, Send\n \n-if sys.version_info < (3, 11): # pragma: no cover\n- from exceptiongroup import BaseExceptionGroup\n-\n RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\n DispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n@@ -20,17 +16,6 @@\n T = typing.TypeVar(\"T\")\n \n \n-@contextmanager\n-def _convert_excgroups() -> typing.Generator[None, None, None]:\n- try:\n- yield\n- except BaseException as exc:\n- while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:\n- exc = exc.exceptions[0]\n-\n- raise exc\n-\n-\n class _CachedRequest(Request):\n \"\"\"\n If the user calls Request.body() from their dispatch function\n@@ -201,7 +186,7 @@\n response.raw_headers = message[\"headers\"]\n return response\n \n- with _convert_excgroups():\n+ with collapse_excgroups():\n async with anyio.create_task_group() as task_group:\n response = await self.dispatch_func(request, call_next)\n await response(scope, wrapped_receive, send)\n", "issue": "No module named 'exceptiongroup' when upgraded to starlette 0.31.0\nAfter upgrading to starlette 0.31.0 today, we run into the following exception when importing starlette:\r\n\r\n```\r\nfrom starlette import applications, requests, responses, routing\r\n2023-07-24 15:47:43 File \"/opt/conda/lib/python3.8/site-packages/starlette/applications.py\", line 8, in <module>\r\n2023-07-24 15:47:43 from starlette.middleware.base import BaseHTTPMiddleware\r\n2023-07-24 15:47:43 File \"/opt/conda/lib/python3.8/site-packages/starlette/middleware/base.py\", line 14, in <module>\r\n2023-07-24 15:47:43 from exceptiongroup import BaseExceptionGroup\r\n2023-07-24 15:47:43 ModuleNotFoundError: No module named 'exceptiongroup'\r\n```\r\n\r\nThis is likely introduced from 
https://github.com/encode/starlette/pull/2211/files\nApply AnyIO upgrade changes only on test suite\n- Closes https://github.com/encode/starlette/issues/2227\r\n- Closes https://github.com/encode/starlette/discussions/2228\n", "before_files": [{"content": "import asyncio\nimport functools\nimport sys\nimport typing\n\nif sys.version_info >= (3, 10): # pragma: no cover\n from typing import TypeGuard\nelse: # pragma: no cover\n from typing_extensions import TypeGuard\n\nT = typing.TypeVar(\"T\")\nAwaitableCallable = typing.Callable[..., typing.Awaitable[T]]\n\n\[email protected]\ndef is_async_callable(obj: AwaitableCallable[T]) -> TypeGuard[AwaitableCallable[T]]:\n ...\n\n\[email protected]\ndef is_async_callable(obj: typing.Any) -> TypeGuard[AwaitableCallable[typing.Any]]:\n ...\n\n\ndef is_async_callable(obj: typing.Any) -> typing.Any:\n while isinstance(obj, functools.partial):\n obj = obj.func\n\n return asyncio.iscoroutinefunction(obj) or (\n callable(obj) and asyncio.iscoroutinefunction(obj.__call__)\n )\n\n\nT_co = typing.TypeVar(\"T_co\", covariant=True)\n\n\nclass AwaitableOrContextManager(\n typing.Awaitable[T_co], typing.AsyncContextManager[T_co], typing.Protocol[T_co]\n):\n ...\n\n\nclass SupportsAsyncClose(typing.Protocol):\n async def close(self) -> None:\n ... # pragma: no cover\n\n\nSupportsAsyncCloseType = typing.TypeVar(\n \"SupportsAsyncCloseType\", bound=SupportsAsyncClose, covariant=False\n)\n\n\nclass AwaitableOrContextManagerWrapper(typing.Generic[SupportsAsyncCloseType]):\n __slots__ = (\"aw\", \"entered\")\n\n def __init__(self, aw: typing.Awaitable[SupportsAsyncCloseType]) -> None:\n self.aw = aw\n\n def __await__(self) -> typing.Generator[typing.Any, None, SupportsAsyncCloseType]:\n return self.aw.__await__()\n\n async def __aenter__(self) -> SupportsAsyncCloseType:\n self.entered = await self.aw\n return self.entered\n\n async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:\n await self.entered.close()\n return None\n", "path": "starlette/_utils.py"}, {"content": "import sys\nimport typing\nfrom contextlib import contextmanager\n\nimport anyio\nfrom anyio.abc import ObjectReceiveStream, ObjectSendStream\n\nfrom starlette.background import BackgroundTask\nfrom starlette.requests import ClientDisconnect, Request\nfrom starlette.responses import ContentStream, Response, StreamingResponse\nfrom starlette.types import ASGIApp, Message, Receive, Scope, Send\n\nif sys.version_info < (3, 11): # pragma: no cover\n from exceptiongroup import BaseExceptionGroup\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\nT = typing.TypeVar(\"T\")\n\n\n@contextmanager\ndef _convert_excgroups() -> typing.Generator[None, None, None]:\n try:\n yield\n except BaseException as exc:\n while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:\n exc = exc.exceptions[0]\n\n raise exc\n\n\nclass _CachedRequest(Request):\n \"\"\"\n If the user calls Request.body() from their dispatch function\n we cache the entire request body in memory and pass that to downstream middlewares,\n but if they call Request.stream() then all we do is send an\n empty body so that downstream things don't hang forever.\n \"\"\"\n\n def __init__(self, scope: Scope, receive: Receive):\n super().__init__(scope, receive)\n self._wrapped_rcv_disconnected = False\n self._wrapped_rcv_consumed = False\n self._wrapped_rc_stream = self.stream()\n\n async def 
wrapped_receive(self) -> Message:\n # wrapped_rcv state 1: disconnected\n if self._wrapped_rcv_disconnected:\n # we've already sent a disconnect to the downstream app\n # we don't need to wait to get another one\n # (although most ASGI servers will just keep sending it)\n return {\"type\": \"http.disconnect\"}\n # wrapped_rcv state 1: consumed but not yet disconnected\n if self._wrapped_rcv_consumed:\n # since the downstream app has consumed us all that is left\n # is to send it a disconnect\n if self._is_disconnected:\n # the middleware has already seen the disconnect\n # since we know the client is disconnected no need to wait\n # for the message\n self._wrapped_rcv_disconnected = True\n return {\"type\": \"http.disconnect\"}\n # we don't know yet if the client is disconnected or not\n # so we'll wait until we get that message\n msg = await self.receive()\n if msg[\"type\"] != \"http.disconnect\": # pragma: no cover\n # at this point a disconnect is all that we should be receiving\n # if we get something else, things went wrong somewhere\n raise RuntimeError(f\"Unexpected message received: {msg['type']}\")\n return msg\n\n # wrapped_rcv state 3: not yet consumed\n if getattr(self, \"_body\", None) is not None:\n # body() was called, we return it even if the client disconnected\n self._wrapped_rcv_consumed = True\n return {\n \"type\": \"http.request\",\n \"body\": self._body,\n \"more_body\": False,\n }\n elif self._stream_consumed:\n # stream() was called to completion\n # return an empty body so that downstream apps don't hang\n # waiting for a disconnect\n self._wrapped_rcv_consumed = True\n return {\n \"type\": \"http.request\",\n \"body\": b\"\",\n \"more_body\": False,\n }\n else:\n # body() was never called and stream() wasn't consumed\n try:\n stream = self.stream()\n chunk = await stream.__anext__()\n self._wrapped_rcv_consumed = self._stream_consumed\n return {\n \"type\": \"http.request\",\n \"body\": chunk,\n \"more_body\": not self._stream_consumed,\n }\n except ClientDisconnect:\n self._wrapped_rcv_disconnected = True\n return {\"type\": \"http.disconnect\"}\n\n\nclass BaseHTTPMiddleware:\n def __init__(\n self, app: ASGIApp, dispatch: typing.Optional[DispatchFunction] = None\n ) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = _CachedRequest(scope, receive)\n wrapped_receive = request.wrapped_receive\n response_sent = anyio.Event()\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream: ObjectSendStream[typing.MutableMapping[str, typing.Any]]\n recv_stream: ObjectReceiveStream[typing.MutableMapping[str, typing.Any]]\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def receive_or_disconnect() -> Message:\n if response_sent.is_set():\n return {\"type\": \"http.disconnect\"}\n\n async with anyio.create_task_group() as task_group:\n\n async def wrap(func: typing.Callable[[], typing.Awaitable[T]]) -> T:\n result = await func()\n task_group.cancel_scope.cancel()\n return result\n\n task_group.start_soon(wrap, response_sent.wait)\n message = await wrap(wrapped_receive)\n\n if response_sent.is_set():\n return {\"type\": \"http.disconnect\"}\n\n return message\n\n async def close_recv_stream_on_response_sent() -> None:\n await response_sent.wait()\n recv_stream.close()\n\n 
async def send_no_error(message: Message) -> None:\n try:\n await send_stream.send(message)\n except anyio.BrokenResourceError:\n # recv_stream has been closed, i.e. response_sent has been set.\n return\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, receive_or_disconnect, send_no_error)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(close_recv_stream_on_response_sent)\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n info = message.get(\"info\", None)\n if message[\"type\"] == \"http.response.debug\" and info is not None:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n body = message.get(\"body\", b\"\")\n if body:\n yield body\n if not message.get(\"more_body\", False):\n break\n\n if app_exc is not None:\n raise app_exc\n\n response = _StreamingResponse(\n status_code=message[\"status\"], content=body_stream(), info=info\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n with _convert_excgroups():\n async with anyio.create_task_group() as task_group:\n response = await self.dispatch_func(request, call_next)\n await response(scope, wrapped_receive, send)\n response_sent.set()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n\n\nclass _StreamingResponse(StreamingResponse):\n def __init__(\n self,\n content: ContentStream,\n status_code: int = 200,\n headers: typing.Optional[typing.Mapping[str, str]] = None,\n media_type: typing.Optional[str] = None,\n background: typing.Optional[BackgroundTask] = None,\n info: typing.Optional[typing.Mapping[str, typing.Any]] = None,\n ) -> None:\n self._info = info\n super().__init__(content, status_code, headers, media_type, background)\n\n async def stream_response(self, send: Send) -> None:\n if self._info:\n await send({\"type\": \"http.response.debug\", \"info\": self._info})\n return await super().stream_response(send)\n", "path": "starlette/middleware/base.py"}], "after_files": [{"content": "import asyncio\nimport functools\nimport sys\nimport typing\nfrom contextlib import contextmanager\n\nif sys.version_info >= (3, 10): # pragma: no cover\n from typing import TypeGuard\nelse: # pragma: no cover\n from typing_extensions import TypeGuard\n\nhas_exceptiongroups = True\nif sys.version_info < (3, 11): # pragma: no cover\n try:\n from exceptiongroup import BaseExceptionGroup\n except ImportError:\n has_exceptiongroups = False\n\nT = typing.TypeVar(\"T\")\nAwaitableCallable = typing.Callable[..., typing.Awaitable[T]]\n\n\[email protected]\ndef is_async_callable(obj: AwaitableCallable[T]) -> TypeGuard[AwaitableCallable[T]]:\n ...\n\n\[email protected]\ndef is_async_callable(obj: typing.Any) -> TypeGuard[AwaitableCallable[typing.Any]]:\n ...\n\n\ndef is_async_callable(obj: typing.Any) -> typing.Any:\n while isinstance(obj, functools.partial):\n obj = obj.func\n\n return asyncio.iscoroutinefunction(obj) or (\n callable(obj) and asyncio.iscoroutinefunction(obj.__call__)\n )\n\n\nT_co = typing.TypeVar(\"T_co\", covariant=True)\n\n\nclass AwaitableOrContextManager(\n 
typing.Awaitable[T_co], typing.AsyncContextManager[T_co], typing.Protocol[T_co]\n):\n ...\n\n\nclass SupportsAsyncClose(typing.Protocol):\n async def close(self) -> None:\n ... # pragma: no cover\n\n\nSupportsAsyncCloseType = typing.TypeVar(\n \"SupportsAsyncCloseType\", bound=SupportsAsyncClose, covariant=False\n)\n\n\nclass AwaitableOrContextManagerWrapper(typing.Generic[SupportsAsyncCloseType]):\n __slots__ = (\"aw\", \"entered\")\n\n def __init__(self, aw: typing.Awaitable[SupportsAsyncCloseType]) -> None:\n self.aw = aw\n\n def __await__(self) -> typing.Generator[typing.Any, None, SupportsAsyncCloseType]:\n return self.aw.__await__()\n\n async def __aenter__(self) -> SupportsAsyncCloseType:\n self.entered = await self.aw\n return self.entered\n\n async def __aexit__(self, *args: typing.Any) -> typing.Union[None, bool]:\n await self.entered.close()\n return None\n\n\n@contextmanager\ndef collapse_excgroups() -> typing.Generator[None, None, None]:\n try:\n yield\n except BaseException as exc:\n if has_exceptiongroups:\n while isinstance(exc, BaseExceptionGroup) and len(exc.exceptions) == 1:\n exc = exc.exceptions[0] # pragma: no cover\n\n raise exc\n", "path": "starlette/_utils.py"}, {"content": "import typing\n\nimport anyio\nfrom anyio.abc import ObjectReceiveStream, ObjectSendStream\n\nfrom starlette._utils import collapse_excgroups\nfrom starlette.background import BackgroundTask\nfrom starlette.requests import ClientDisconnect, Request\nfrom starlette.responses import ContentStream, Response, StreamingResponse\nfrom starlette.types import ASGIApp, Message, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\nT = typing.TypeVar(\"T\")\n\n\nclass _CachedRequest(Request):\n \"\"\"\n If the user calls Request.body() from their dispatch function\n we cache the entire request body in memory and pass that to downstream middlewares,\n but if they call Request.stream() then all we do is send an\n empty body so that downstream things don't hang forever.\n \"\"\"\n\n def __init__(self, scope: Scope, receive: Receive):\n super().__init__(scope, receive)\n self._wrapped_rcv_disconnected = False\n self._wrapped_rcv_consumed = False\n self._wrapped_rc_stream = self.stream()\n\n async def wrapped_receive(self) -> Message:\n # wrapped_rcv state 1: disconnected\n if self._wrapped_rcv_disconnected:\n # we've already sent a disconnect to the downstream app\n # we don't need to wait to get another one\n # (although most ASGI servers will just keep sending it)\n return {\"type\": \"http.disconnect\"}\n # wrapped_rcv state 1: consumed but not yet disconnected\n if self._wrapped_rcv_consumed:\n # since the downstream app has consumed us all that is left\n # is to send it a disconnect\n if self._is_disconnected:\n # the middleware has already seen the disconnect\n # since we know the client is disconnected no need to wait\n # for the message\n self._wrapped_rcv_disconnected = True\n return {\"type\": \"http.disconnect\"}\n # we don't know yet if the client is disconnected or not\n # so we'll wait until we get that message\n msg = await self.receive()\n if msg[\"type\"] != \"http.disconnect\": # pragma: no cover\n # at this point a disconnect is all that we should be receiving\n # if we get something else, things went wrong somewhere\n raise RuntimeError(f\"Unexpected message received: {msg['type']}\")\n return msg\n\n # wrapped_rcv state 3: not yet 
consumed\n if getattr(self, \"_body\", None) is not None:\n # body() was called, we return it even if the client disconnected\n self._wrapped_rcv_consumed = True\n return {\n \"type\": \"http.request\",\n \"body\": self._body,\n \"more_body\": False,\n }\n elif self._stream_consumed:\n # stream() was called to completion\n # return an empty body so that downstream apps don't hang\n # waiting for a disconnect\n self._wrapped_rcv_consumed = True\n return {\n \"type\": \"http.request\",\n \"body\": b\"\",\n \"more_body\": False,\n }\n else:\n # body() was never called and stream() wasn't consumed\n try:\n stream = self.stream()\n chunk = await stream.__anext__()\n self._wrapped_rcv_consumed = self._stream_consumed\n return {\n \"type\": \"http.request\",\n \"body\": chunk,\n \"more_body\": not self._stream_consumed,\n }\n except ClientDisconnect:\n self._wrapped_rcv_disconnected = True\n return {\"type\": \"http.disconnect\"}\n\n\nclass BaseHTTPMiddleware:\n def __init__(\n self, app: ASGIApp, dispatch: typing.Optional[DispatchFunction] = None\n ) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n request = _CachedRequest(scope, receive)\n wrapped_receive = request.wrapped_receive\n response_sent = anyio.Event()\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream: ObjectSendStream[typing.MutableMapping[str, typing.Any]]\n recv_stream: ObjectReceiveStream[typing.MutableMapping[str, typing.Any]]\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def receive_or_disconnect() -> Message:\n if response_sent.is_set():\n return {\"type\": \"http.disconnect\"}\n\n async with anyio.create_task_group() as task_group:\n\n async def wrap(func: typing.Callable[[], typing.Awaitable[T]]) -> T:\n result = await func()\n task_group.cancel_scope.cancel()\n return result\n\n task_group.start_soon(wrap, response_sent.wait)\n message = await wrap(wrapped_receive)\n\n if response_sent.is_set():\n return {\"type\": \"http.disconnect\"}\n\n return message\n\n async def close_recv_stream_on_response_sent() -> None:\n await response_sent.wait()\n recv_stream.close()\n\n async def send_no_error(message: Message) -> None:\n try:\n await send_stream.send(message)\n except anyio.BrokenResourceError:\n # recv_stream has been closed, i.e. 
response_sent has been set.\n return\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, receive_or_disconnect, send_no_error)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(close_recv_stream_on_response_sent)\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n info = message.get(\"info\", None)\n if message[\"type\"] == \"http.response.debug\" and info is not None:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n body = message.get(\"body\", b\"\")\n if body:\n yield body\n if not message.get(\"more_body\", False):\n break\n\n if app_exc is not None:\n raise app_exc\n\n response = _StreamingResponse(\n status_code=message[\"status\"], content=body_stream(), info=info\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n with collapse_excgroups():\n async with anyio.create_task_group() as task_group:\n response = await self.dispatch_func(request, call_next)\n await response(scope, wrapped_receive, send)\n response_sent.set()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n\n\nclass _StreamingResponse(StreamingResponse):\n def __init__(\n self,\n content: ContentStream,\n status_code: int = 200,\n headers: typing.Optional[typing.Mapping[str, str]] = None,\n media_type: typing.Optional[str] = None,\n background: typing.Optional[BackgroundTask] = None,\n info: typing.Optional[typing.Mapping[str, typing.Any]] = None,\n ) -> None:\n self._info = info\n super().__init__(content, status_code, headers, media_type, background)\n\n async def stream_response(self, send: Send) -> None:\n if self._info:\n await send({\"type\": \"http.response.debug\", \"info\": self._info})\n return await super().stream_response(send)\n", "path": "starlette/middleware/base.py"}]} | 3,617 | 719 |
gh_patches_debug_31039 | rasdani/github-patches | git_diff | pallets__werkzeug-1393 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Debugger garbles markupsafe.Markup() repr
Tested in the `/console` of a flask app that's using the werkzeug debugging middleware.
```python
>>> from markupsafe import Markup
>>> Markup('test')
markupsafe.Markup(uarkup(u'test' )
>>> Markup('')
markupsafe.Markup(uarkup(u'' )
>>> print repr(Markup('test'))
Markup(u'test')
>>> print repr(Markup(''))
Markup(u'')
>>> Markup('test')
markupsafe.Markup(uarkup(u'test'rkup(u'') )
```
Not sure if this is an issue with the werkzeug debugger, markupsafe, or both.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `werkzeug/debug/repr.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 werkzeug.debug.repr
4 ~~~~~~~~~~~~~~~~~~~
5
6 This module implements object representations for debugging purposes.
7 Unlike the default repr these reprs expose a lot more information and
8 produce HTML instead of ASCII.
9
10 Together with the CSS and JavaScript files of the debugger this gives
11 a colorful and more compact output.
12
13 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
14 :license: BSD.
15 """
16 import sys
17 import re
18 import codecs
19 from traceback import format_exception_only
20 try:
21 from collections import deque
22 except ImportError: # pragma: no cover
23 deque = None
24 from werkzeug.utils import escape
25 from werkzeug._compat import iteritems, PY2, text_type, integer_types, \
26 string_types
27
28
29 missing = object()
30 _paragraph_re = re.compile(r'(?:\r\n|\r|\n){2,}')
31 RegexType = type(_paragraph_re)
32
33
34 HELP_HTML = '''\
35 <div class=box>
36 <h3>%(title)s</h3>
37 <pre class=help>%(text)s</pre>
38 </div>\
39 '''
40 OBJECT_DUMP_HTML = '''\
41 <div class=box>
42 <h3>%(title)s</h3>
43 %(repr)s
44 <table>%(items)s</table>
45 </div>\
46 '''
47
48
49 def debug_repr(obj):
50 """Creates a debug repr of an object as HTML unicode string."""
51 return DebugReprGenerator().repr(obj)
52
53
54 def dump(obj=missing):
55 """Print the object details to stdout._write (for the interactive
56 console of the web debugger.
57 """
58 gen = DebugReprGenerator()
59 if obj is missing:
60 rv = gen.dump_locals(sys._getframe(1).f_locals)
61 else:
62 rv = gen.dump_object(obj)
63 sys.stdout._write(rv)
64
65
66 class _Helper(object):
67
68 """Displays an HTML version of the normal help, for the interactive
69 debugger only because it requires a patched sys.stdout.
70 """
71
72 def __repr__(self):
73 return 'Type help(object) for help about object.'
74
75 def __call__(self, topic=None):
76 if topic is None:
77 sys.stdout._write('<span class=help>%s</span>' % repr(self))
78 return
79 import pydoc
80 pydoc.help(topic)
81 rv = sys.stdout.reset()
82 if isinstance(rv, bytes):
83 rv = rv.decode('utf-8', 'ignore')
84 paragraphs = _paragraph_re.split(rv)
85 if len(paragraphs) > 1:
86 title = paragraphs[0]
87 text = '\n\n'.join(paragraphs[1:])
88 else: # pragma: no cover
89 title = 'Help'
90 text = paragraphs[0]
91 sys.stdout._write(HELP_HTML % {'title': title, 'text': text})
92
93
94 helper = _Helper()
95
96
97 def _add_subclass_info(inner, obj, base):
98 if isinstance(base, tuple):
99 for base in base:
100 if type(obj) is base:
101 return inner
102 elif type(obj) is base:
103 return inner
104 module = ''
105 if obj.__class__.__module__ not in ('__builtin__', 'exceptions'):
106 module = '<span class="module">%s.</span>' % obj.__class__.__module__
107 return '%s%s(%s)' % (module, obj.__class__.__name__, inner)
108
109
110 class DebugReprGenerator(object):
111
112 def __init__(self):
113 self._stack = []
114
115 def _sequence_repr_maker(left, right, base=object(), limit=8):
116 def proxy(self, obj, recursive):
117 if recursive:
118 return _add_subclass_info(left + '...' + right, obj, base)
119 buf = [left]
120 have_extended_section = False
121 for idx, item in enumerate(obj):
122 if idx:
123 buf.append(', ')
124 if idx == limit:
125 buf.append('<span class="extended">')
126 have_extended_section = True
127 buf.append(self.repr(item))
128 if have_extended_section:
129 buf.append('</span>')
130 buf.append(right)
131 return _add_subclass_info(u''.join(buf), obj, base)
132 return proxy
133
134 list_repr = _sequence_repr_maker('[', ']', list)
135 tuple_repr = _sequence_repr_maker('(', ')', tuple)
136 set_repr = _sequence_repr_maker('set([', '])', set)
137 frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset)
138 if deque is not None:
139 deque_repr = _sequence_repr_maker('<span class="module">collections.'
140 '</span>deque([', '])', deque)
141 del _sequence_repr_maker
142
143 def regex_repr(self, obj):
144 pattern = repr(obj.pattern)
145 if PY2:
146 pattern = pattern.decode('string-escape', 'ignore')
147 else:
148 pattern = codecs.decode(pattern, 'unicode-escape', 'ignore')
149 if pattern[:1] == 'u':
150 pattern = 'ur' + pattern[1:]
151 else:
152 pattern = 'r' + pattern
153 return u're.compile(<span class="string regex">%s</span>)' % pattern
154
155 def string_repr(self, obj, limit=70):
156 buf = ['<span class="string">']
157 a = repr(obj[:limit])
158 b = repr(obj[limit:])
159 if isinstance(obj, text_type) and PY2:
160 buf.append('u')
161 a = a[1:]
162 b = b[1:]
163 if b != "''":
164 buf.extend((escape(a[:-1]), '<span class="extended">', escape(b[1:]), '</span>'))
165 else:
166 buf.append(escape(a))
167 buf.append('</span>')
168 return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))
169
170 def dict_repr(self, d, recursive, limit=5):
171 if recursive:
172 return _add_subclass_info(u'{...}', d, dict)
173 buf = ['{']
174 have_extended_section = False
175 for idx, (key, value) in enumerate(iteritems(d)):
176 if idx:
177 buf.append(', ')
178 if idx == limit - 1:
179 buf.append('<span class="extended">')
180 have_extended_section = True
181 buf.append('<span class="pair"><span class="key">%s</span>: '
182 '<span class="value">%s</span></span>' %
183 (self.repr(key), self.repr(value)))
184 if have_extended_section:
185 buf.append('</span>')
186 buf.append('}')
187 return _add_subclass_info(u''.join(buf), d, dict)
188
189 def object_repr(self, obj):
190 r = repr(obj)
191 if PY2:
192 r = r.decode('utf-8', 'replace')
193 return u'<span class="object">%s</span>' % escape(r)
194
195 def dispatch_repr(self, obj, recursive):
196 if obj is helper:
197 return u'<span class="help">%r</span>' % helper
198 if isinstance(obj, (integer_types, float, complex)):
199 return u'<span class="number">%r</span>' % obj
200 if isinstance(obj, string_types):
201 return self.string_repr(obj)
202 if isinstance(obj, RegexType):
203 return self.regex_repr(obj)
204 if isinstance(obj, list):
205 return self.list_repr(obj, recursive)
206 if isinstance(obj, tuple):
207 return self.tuple_repr(obj, recursive)
208 if isinstance(obj, set):
209 return self.set_repr(obj, recursive)
210 if isinstance(obj, frozenset):
211 return self.frozenset_repr(obj, recursive)
212 if isinstance(obj, dict):
213 return self.dict_repr(obj, recursive)
214 if deque is not None and isinstance(obj, deque):
215 return self.deque_repr(obj, recursive)
216 return self.object_repr(obj)
217
218 def fallback_repr(self):
219 try:
220 info = ''.join(format_exception_only(*sys.exc_info()[:2]))
221 except Exception: # pragma: no cover
222 info = '?'
223 if PY2:
224 info = info.decode('utf-8', 'ignore')
225 return u'<span class="brokenrepr"><broken repr (%s)>' \
226 u'</span>' % escape(info.strip())
227
228 def repr(self, obj):
229 recursive = False
230 for item in self._stack:
231 if item is obj:
232 recursive = True
233 break
234 self._stack.append(obj)
235 try:
236 try:
237 return self.dispatch_repr(obj, recursive)
238 except Exception:
239 return self.fallback_repr()
240 finally:
241 self._stack.pop()
242
243 def dump_object(self, obj):
244 repr = items = None
245 if isinstance(obj, dict):
246 title = 'Contents of'
247 items = []
248 for key, value in iteritems(obj):
249 if not isinstance(key, string_types):
250 items = None
251 break
252 items.append((key, self.repr(value)))
253 if items is None:
254 items = []
255 repr = self.repr(obj)
256 for key in dir(obj):
257 try:
258 items.append((key, self.repr(getattr(obj, key))))
259 except Exception:
260 pass
261 title = 'Details for'
262 title += ' ' + object.__repr__(obj)[1:-1]
263 return self.render_object_dump(items, title, repr)
264
265 def dump_locals(self, d):
266 items = [(key, self.repr(value)) for key, value in d.items()]
267 return self.render_object_dump(items, 'Local variables in frame')
268
269 def render_object_dump(self, items, title, repr=None):
270 html_items = []
271 for key, value in items:
272 html_items.append('<tr><th>%s<td><pre class=repr>%s</pre>' %
273 (escape(key), value))
274 if not html_items:
275 html_items.append('<tr><td><em>Nothing</em>')
276 return OBJECT_DUMP_HTML % {
277 'title': escape(title),
278 'repr': repr and '<pre class=repr>%s</pre>' % repr or '',
279 'items': '\n'.join(html_items)
280 }
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/werkzeug/debug/repr.py b/werkzeug/debug/repr.py
--- a/werkzeug/debug/repr.py
+++ b/werkzeug/debug/repr.py
@@ -154,18 +154,26 @@
def string_repr(self, obj, limit=70):
buf = ['<span class="string">']
- a = repr(obj[:limit])
- b = repr(obj[limit:])
- if isinstance(obj, text_type) and PY2:
- buf.append('u')
- a = a[1:]
- b = b[1:]
- if b != "''":
- buf.extend((escape(a[:-1]), '<span class="extended">', escape(b[1:]), '</span>'))
+ r = repr(obj)
+
+ # shorten the repr when the hidden part would be at least 3 chars
+ if len(r) - limit > 2:
+ buf.extend((
+ escape(r[:limit]),
+ '<span class="extended">', escape(r[limit:]), '</span>',
+ ))
else:
- buf.append(escape(a))
+ buf.append(escape(r))
+
buf.append('</span>')
- return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))
+ out = u"".join(buf)
+
+ # if the repr looks like a standard string, add subclass info if needed
+ if r[0] in "'\"" or (r[0] in "ub" and r[1] in "'\""):
+ return _add_subclass_info(out, obj, (bytes, text_type))
+
+ # otherwise, assume the repr distinguishes the subclass already
+ return out
def dict_repr(self, d, recursive, limit=5):
if recursive:
@@ -197,7 +205,7 @@
return u'<span class="help">%r</span>' % helper
if isinstance(obj, (integer_types, float, complex)):
return u'<span class="number">%r</span>' % obj
- if isinstance(obj, string_types):
+ if isinstance(obj, string_types) or isinstance(obj, bytes):
return self.string_repr(obj)
if isinstance(obj, RegexType):
return self.regex_repr(obj)
| {"golden_diff": "diff --git a/werkzeug/debug/repr.py b/werkzeug/debug/repr.py\n--- a/werkzeug/debug/repr.py\n+++ b/werkzeug/debug/repr.py\n@@ -154,18 +154,26 @@\n \n def string_repr(self, obj, limit=70):\n buf = ['<span class=\"string\">']\n- a = repr(obj[:limit])\n- b = repr(obj[limit:])\n- if isinstance(obj, text_type) and PY2:\n- buf.append('u')\n- a = a[1:]\n- b = b[1:]\n- if b != \"''\":\n- buf.extend((escape(a[:-1]), '<span class=\"extended\">', escape(b[1:]), '</span>'))\n+ r = repr(obj)\n+\n+ # shorten the repr when the hidden part would be at least 3 chars\n+ if len(r) - limit > 2:\n+ buf.extend((\n+ escape(r[:limit]),\n+ '<span class=\"extended\">', escape(r[limit:]), '</span>',\n+ ))\n else:\n- buf.append(escape(a))\n+ buf.append(escape(r))\n+\n buf.append('</span>')\n- return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))\n+ out = u\"\".join(buf)\n+\n+ # if the repr looks like a standard string, add subclass info if needed\n+ if r[0] in \"'\\\"\" or (r[0] in \"ub\" and r[1] in \"'\\\"\"):\n+ return _add_subclass_info(out, obj, (bytes, text_type))\n+\n+ # otherwise, assume the repr distinguishes the subclass already\n+ return out\n \n def dict_repr(self, d, recursive, limit=5):\n if recursive:\n@@ -197,7 +205,7 @@\n return u'<span class=\"help\">%r</span>' % helper\n if isinstance(obj, (integer_types, float, complex)):\n return u'<span class=\"number\">%r</span>' % obj\n- if isinstance(obj, string_types):\n+ if isinstance(obj, string_types) or isinstance(obj, bytes):\n return self.string_repr(obj)\n if isinstance(obj, RegexType):\n return self.regex_repr(obj)\n", "issue": "Debugger garbles markupsafe.Markup() repr\nTested in the `/console` of a flask app that's using the werkzeug debugging middleware.\r\n\r\n```python\r\n>>> from markupsafe import Markup\r\n>>> Markup('test')\r\nmarkupsafe.Markup(uarkup(u'test' )\r\n>>> Markup('')\r\nmarkupsafe.Markup(uarkup(u'' )\r\n>>> print repr(Markup('test'))\r\nMarkup(u'test')\r\n>>> print repr(Markup(''))\r\nMarkup(u'')\r\n>>> Markup('test')\r\nmarkupsafe.Markup(uarkup(u'test'rkup(u'') )\r\n```\r\n\r\n\r\n\r\nNot sure if this is an issue with the werkzeug debugger, markupsafe, or both.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.debug.repr\n ~~~~~~~~~~~~~~~~~~~\n\n This module implements object representations for debugging purposes.\n Unlike the default repr these reprs expose a lot more information and\n produce HTML instead of ASCII.\n\n Together with the CSS and JavaScript files of the debugger this gives\n a colorful and more compact output.\n\n :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.\n :license: BSD.\n\"\"\"\nimport sys\nimport re\nimport codecs\nfrom traceback import format_exception_only\ntry:\n from collections import deque\nexcept ImportError: # pragma: no cover\n deque = None\nfrom werkzeug.utils import escape\nfrom werkzeug._compat import iteritems, PY2, text_type, integer_types, \\\n string_types\n\n\nmissing = object()\n_paragraph_re = re.compile(r'(?:\\r\\n|\\r|\\n){2,}')\nRegexType = type(_paragraph_re)\n\n\nHELP_HTML = '''\\\n<div class=box>\n <h3>%(title)s</h3>\n <pre class=help>%(text)s</pre>\n</div>\\\n'''\nOBJECT_DUMP_HTML = '''\\\n<div class=box>\n <h3>%(title)s</h3>\n %(repr)s\n <table>%(items)s</table>\n</div>\\\n'''\n\n\ndef debug_repr(obj):\n \"\"\"Creates a debug repr of an object as HTML unicode string.\"\"\"\n return DebugReprGenerator().repr(obj)\n\n\ndef dump(obj=missing):\n \"\"\"Print the object details to stdout._write (for 
the interactive\n console of the web debugger.\n \"\"\"\n gen = DebugReprGenerator()\n if obj is missing:\n rv = gen.dump_locals(sys._getframe(1).f_locals)\n else:\n rv = gen.dump_object(obj)\n sys.stdout._write(rv)\n\n\nclass _Helper(object):\n\n \"\"\"Displays an HTML version of the normal help, for the interactive\n debugger only because it requires a patched sys.stdout.\n \"\"\"\n\n def __repr__(self):\n return 'Type help(object) for help about object.'\n\n def __call__(self, topic=None):\n if topic is None:\n sys.stdout._write('<span class=help>%s</span>' % repr(self))\n return\n import pydoc\n pydoc.help(topic)\n rv = sys.stdout.reset()\n if isinstance(rv, bytes):\n rv = rv.decode('utf-8', 'ignore')\n paragraphs = _paragraph_re.split(rv)\n if len(paragraphs) > 1:\n title = paragraphs[0]\n text = '\\n\\n'.join(paragraphs[1:])\n else: # pragma: no cover\n title = 'Help'\n text = paragraphs[0]\n sys.stdout._write(HELP_HTML % {'title': title, 'text': text})\n\n\nhelper = _Helper()\n\n\ndef _add_subclass_info(inner, obj, base):\n if isinstance(base, tuple):\n for base in base:\n if type(obj) is base:\n return inner\n elif type(obj) is base:\n return inner\n module = ''\n if obj.__class__.__module__ not in ('__builtin__', 'exceptions'):\n module = '<span class=\"module\">%s.</span>' % obj.__class__.__module__\n return '%s%s(%s)' % (module, obj.__class__.__name__, inner)\n\n\nclass DebugReprGenerator(object):\n\n def __init__(self):\n self._stack = []\n\n def _sequence_repr_maker(left, right, base=object(), limit=8):\n def proxy(self, obj, recursive):\n if recursive:\n return _add_subclass_info(left + '...' + right, obj, base)\n buf = [left]\n have_extended_section = False\n for idx, item in enumerate(obj):\n if idx:\n buf.append(', ')\n if idx == limit:\n buf.append('<span class=\"extended\">')\n have_extended_section = True\n buf.append(self.repr(item))\n if have_extended_section:\n buf.append('</span>')\n buf.append(right)\n return _add_subclass_info(u''.join(buf), obj, base)\n return proxy\n\n list_repr = _sequence_repr_maker('[', ']', list)\n tuple_repr = _sequence_repr_maker('(', ')', tuple)\n set_repr = _sequence_repr_maker('set([', '])', set)\n frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset)\n if deque is not None:\n deque_repr = _sequence_repr_maker('<span class=\"module\">collections.'\n '</span>deque([', '])', deque)\n del _sequence_repr_maker\n\n def regex_repr(self, obj):\n pattern = repr(obj.pattern)\n if PY2:\n pattern = pattern.decode('string-escape', 'ignore')\n else:\n pattern = codecs.decode(pattern, 'unicode-escape', 'ignore')\n if pattern[:1] == 'u':\n pattern = 'ur' + pattern[1:]\n else:\n pattern = 'r' + pattern\n return u're.compile(<span class=\"string regex\">%s</span>)' % pattern\n\n def string_repr(self, obj, limit=70):\n buf = ['<span class=\"string\">']\n a = repr(obj[:limit])\n b = repr(obj[limit:])\n if isinstance(obj, text_type) and PY2:\n buf.append('u')\n a = a[1:]\n b = b[1:]\n if b != \"''\":\n buf.extend((escape(a[:-1]), '<span class=\"extended\">', escape(b[1:]), '</span>'))\n else:\n buf.append(escape(a))\n buf.append('</span>')\n return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))\n\n def dict_repr(self, d, recursive, limit=5):\n if recursive:\n return _add_subclass_info(u'{...}', d, dict)\n buf = ['{']\n have_extended_section = False\n for idx, (key, value) in enumerate(iteritems(d)):\n if idx:\n buf.append(', ')\n if idx == limit - 1:\n buf.append('<span class=\"extended\">')\n have_extended_section = True\n 
buf.append('<span class=\"pair\"><span class=\"key\">%s</span>: '\n '<span class=\"value\">%s</span></span>' %\n (self.repr(key), self.repr(value)))\n if have_extended_section:\n buf.append('</span>')\n buf.append('}')\n return _add_subclass_info(u''.join(buf), d, dict)\n\n def object_repr(self, obj):\n r = repr(obj)\n if PY2:\n r = r.decode('utf-8', 'replace')\n return u'<span class=\"object\">%s</span>' % escape(r)\n\n def dispatch_repr(self, obj, recursive):\n if obj is helper:\n return u'<span class=\"help\">%r</span>' % helper\n if isinstance(obj, (integer_types, float, complex)):\n return u'<span class=\"number\">%r</span>' % obj\n if isinstance(obj, string_types):\n return self.string_repr(obj)\n if isinstance(obj, RegexType):\n return self.regex_repr(obj)\n if isinstance(obj, list):\n return self.list_repr(obj, recursive)\n if isinstance(obj, tuple):\n return self.tuple_repr(obj, recursive)\n if isinstance(obj, set):\n return self.set_repr(obj, recursive)\n if isinstance(obj, frozenset):\n return self.frozenset_repr(obj, recursive)\n if isinstance(obj, dict):\n return self.dict_repr(obj, recursive)\n if deque is not None and isinstance(obj, deque):\n return self.deque_repr(obj, recursive)\n return self.object_repr(obj)\n\n def fallback_repr(self):\n try:\n info = ''.join(format_exception_only(*sys.exc_info()[:2]))\n except Exception: # pragma: no cover\n info = '?'\n if PY2:\n info = info.decode('utf-8', 'ignore')\n return u'<span class=\"brokenrepr\"><broken repr (%s)>' \\\n u'</span>' % escape(info.strip())\n\n def repr(self, obj):\n recursive = False\n for item in self._stack:\n if item is obj:\n recursive = True\n break\n self._stack.append(obj)\n try:\n try:\n return self.dispatch_repr(obj, recursive)\n except Exception:\n return self.fallback_repr()\n finally:\n self._stack.pop()\n\n def dump_object(self, obj):\n repr = items = None\n if isinstance(obj, dict):\n title = 'Contents of'\n items = []\n for key, value in iteritems(obj):\n if not isinstance(key, string_types):\n items = None\n break\n items.append((key, self.repr(value)))\n if items is None:\n items = []\n repr = self.repr(obj)\n for key in dir(obj):\n try:\n items.append((key, self.repr(getattr(obj, key))))\n except Exception:\n pass\n title = 'Details for'\n title += ' ' + object.__repr__(obj)[1:-1]\n return self.render_object_dump(items, title, repr)\n\n def dump_locals(self, d):\n items = [(key, self.repr(value)) for key, value in d.items()]\n return self.render_object_dump(items, 'Local variables in frame')\n\n def render_object_dump(self, items, title, repr=None):\n html_items = []\n for key, value in items:\n html_items.append('<tr><th>%s<td><pre class=repr>%s</pre>' %\n (escape(key), value))\n if not html_items:\n html_items.append('<tr><td><em>Nothing</em>')\n return OBJECT_DUMP_HTML % {\n 'title': escape(title),\n 'repr': repr and '<pre class=repr>%s</pre>' % repr or '',\n 'items': '\\n'.join(html_items)\n }\n", "path": "werkzeug/debug/repr.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.debug.repr\n ~~~~~~~~~~~~~~~~~~~\n\n This module implements object representations for debugging purposes.\n Unlike the default repr these reprs expose a lot more information and\n produce HTML instead of ASCII.\n\n Together with the CSS and JavaScript files of the debugger this gives\n a colorful and more compact output.\n\n :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.\n :license: BSD.\n\"\"\"\nimport sys\nimport re\nimport codecs\nfrom traceback import 
format_exception_only\ntry:\n from collections import deque\nexcept ImportError: # pragma: no cover\n deque = None\nfrom werkzeug.utils import escape\nfrom werkzeug._compat import iteritems, PY2, text_type, integer_types, \\\n string_types\n\n\nmissing = object()\n_paragraph_re = re.compile(r'(?:\\r\\n|\\r|\\n){2,}')\nRegexType = type(_paragraph_re)\n\n\nHELP_HTML = '''\\\n<div class=box>\n <h3>%(title)s</h3>\n <pre class=help>%(text)s</pre>\n</div>\\\n'''\nOBJECT_DUMP_HTML = '''\\\n<div class=box>\n <h3>%(title)s</h3>\n %(repr)s\n <table>%(items)s</table>\n</div>\\\n'''\n\n\ndef debug_repr(obj):\n \"\"\"Creates a debug repr of an object as HTML unicode string.\"\"\"\n return DebugReprGenerator().repr(obj)\n\n\ndef dump(obj=missing):\n \"\"\"Print the object details to stdout._write (for the interactive\n console of the web debugger.\n \"\"\"\n gen = DebugReprGenerator()\n if obj is missing:\n rv = gen.dump_locals(sys._getframe(1).f_locals)\n else:\n rv = gen.dump_object(obj)\n sys.stdout._write(rv)\n\n\nclass _Helper(object):\n\n \"\"\"Displays an HTML version of the normal help, for the interactive\n debugger only because it requires a patched sys.stdout.\n \"\"\"\n\n def __repr__(self):\n return 'Type help(object) for help about object.'\n\n def __call__(self, topic=None):\n if topic is None:\n sys.stdout._write('<span class=help>%s</span>' % repr(self))\n return\n import pydoc\n pydoc.help(topic)\n rv = sys.stdout.reset()\n if isinstance(rv, bytes):\n rv = rv.decode('utf-8', 'ignore')\n paragraphs = _paragraph_re.split(rv)\n if len(paragraphs) > 1:\n title = paragraphs[0]\n text = '\\n\\n'.join(paragraphs[1:])\n else: # pragma: no cover\n title = 'Help'\n text = paragraphs[0]\n sys.stdout._write(HELP_HTML % {'title': title, 'text': text})\n\n\nhelper = _Helper()\n\n\ndef _add_subclass_info(inner, obj, base):\n if isinstance(base, tuple):\n for base in base:\n if type(obj) is base:\n return inner\n elif type(obj) is base:\n return inner\n module = ''\n if obj.__class__.__module__ not in ('__builtin__', 'exceptions'):\n module = '<span class=\"module\">%s.</span>' % obj.__class__.__module__\n return '%s%s(%s)' % (module, obj.__class__.__name__, inner)\n\n\nclass DebugReprGenerator(object):\n\n def __init__(self):\n self._stack = []\n\n def _sequence_repr_maker(left, right, base=object(), limit=8):\n def proxy(self, obj, recursive):\n if recursive:\n return _add_subclass_info(left + '...' 
+ right, obj, base)\n buf = [left]\n have_extended_section = False\n for idx, item in enumerate(obj):\n if idx:\n buf.append(', ')\n if idx == limit:\n buf.append('<span class=\"extended\">')\n have_extended_section = True\n buf.append(self.repr(item))\n if have_extended_section:\n buf.append('</span>')\n buf.append(right)\n return _add_subclass_info(u''.join(buf), obj, base)\n return proxy\n\n list_repr = _sequence_repr_maker('[', ']', list)\n tuple_repr = _sequence_repr_maker('(', ')', tuple)\n set_repr = _sequence_repr_maker('set([', '])', set)\n frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset)\n if deque is not None:\n deque_repr = _sequence_repr_maker('<span class=\"module\">collections.'\n '</span>deque([', '])', deque)\n del _sequence_repr_maker\n\n def regex_repr(self, obj):\n pattern = repr(obj.pattern)\n if PY2:\n pattern = pattern.decode('string-escape', 'ignore')\n else:\n pattern = codecs.decode(pattern, 'unicode-escape', 'ignore')\n if pattern[:1] == 'u':\n pattern = 'ur' + pattern[1:]\n else:\n pattern = 'r' + pattern\n return u're.compile(<span class=\"string regex\">%s</span>)' % pattern\n\n def string_repr(self, obj, limit=70):\n buf = ['<span class=\"string\">']\n r = repr(obj)\n\n # shorten the repr when the hidden part would be at least 3 chars\n if len(r) - limit > 2:\n buf.extend((\n escape(r[:limit]),\n '<span class=\"extended\">', escape(r[limit:]), '</span>',\n ))\n else:\n buf.append(escape(r))\n\n buf.append('</span>')\n out = u\"\".join(buf)\n\n # if the repr looks like a standard string, add subclass info if needed\n if r[0] in \"'\\\"\" or (r[0] in \"ub\" and r[1] in \"'\\\"\"):\n return _add_subclass_info(out, obj, (bytes, text_type))\n\n # otherwise, assume the repr distinguishes the subclass already\n return out\n\n def dict_repr(self, d, recursive, limit=5):\n if recursive:\n return _add_subclass_info(u'{...}', d, dict)\n buf = ['{']\n have_extended_section = False\n for idx, (key, value) in enumerate(iteritems(d)):\n if idx:\n buf.append(', ')\n if idx == limit - 1:\n buf.append('<span class=\"extended\">')\n have_extended_section = True\n buf.append('<span class=\"pair\"><span class=\"key\">%s</span>: '\n '<span class=\"value\">%s</span></span>' %\n (self.repr(key), self.repr(value)))\n if have_extended_section:\n buf.append('</span>')\n buf.append('}')\n return _add_subclass_info(u''.join(buf), d, dict)\n\n def object_repr(self, obj):\n r = repr(obj)\n if PY2:\n r = r.decode('utf-8', 'replace')\n return u'<span class=\"object\">%s</span>' % escape(r)\n\n def dispatch_repr(self, obj, recursive):\n if obj is helper:\n return u'<span class=\"help\">%r</span>' % helper\n if isinstance(obj, (integer_types, float, complex)):\n return u'<span class=\"number\">%r</span>' % obj\n if isinstance(obj, string_types) or isinstance(obj, bytes):\n return self.string_repr(obj)\n if isinstance(obj, RegexType):\n return self.regex_repr(obj)\n if isinstance(obj, list):\n return self.list_repr(obj, recursive)\n if isinstance(obj, tuple):\n return self.tuple_repr(obj, recursive)\n if isinstance(obj, set):\n return self.set_repr(obj, recursive)\n if isinstance(obj, frozenset):\n return self.frozenset_repr(obj, recursive)\n if isinstance(obj, dict):\n return self.dict_repr(obj, recursive)\n if deque is not None and isinstance(obj, deque):\n return self.deque_repr(obj, recursive)\n return self.object_repr(obj)\n\n def fallback_repr(self):\n try:\n info = ''.join(format_exception_only(*sys.exc_info()[:2]))\n except Exception: # pragma: no cover\n info = 
'?'\n if PY2:\n info = info.decode('utf-8', 'ignore')\n return u'<span class=\"brokenrepr\"><broken repr (%s)>' \\\n u'</span>' % escape(info.strip())\n\n def repr(self, obj):\n recursive = False\n for item in self._stack:\n if item is obj:\n recursive = True\n break\n self._stack.append(obj)\n try:\n try:\n return self.dispatch_repr(obj, recursive)\n except Exception:\n return self.fallback_repr()\n finally:\n self._stack.pop()\n\n def dump_object(self, obj):\n repr = items = None\n if isinstance(obj, dict):\n title = 'Contents of'\n items = []\n for key, value in iteritems(obj):\n if not isinstance(key, string_types):\n items = None\n break\n items.append((key, self.repr(value)))\n if items is None:\n items = []\n repr = self.repr(obj)\n for key in dir(obj):\n try:\n items.append((key, self.repr(getattr(obj, key))))\n except Exception:\n pass\n title = 'Details for'\n title += ' ' + object.__repr__(obj)[1:-1]\n return self.render_object_dump(items, title, repr)\n\n def dump_locals(self, d):\n items = [(key, self.repr(value)) for key, value in d.items()]\n return self.render_object_dump(items, 'Local variables in frame')\n\n def render_object_dump(self, items, title, repr=None):\n html_items = []\n for key, value in items:\n html_items.append('<tr><th>%s<td><pre class=repr>%s</pre>' %\n (escape(key), value))\n if not html_items:\n html_items.append('<tr><td><em>Nothing</em>')\n return OBJECT_DUMP_HTML % {\n 'title': escape(title),\n 'repr': repr and '<pre class=repr>%s</pre>' % repr or '',\n 'items': '\\n'.join(html_items)\n }\n", "path": "werkzeug/debug/repr.py"}]} | 3,409 | 504 |
gh_patches_debug_6805 | rasdani/github-patches | git_diff | Pylons__pyramid-2672 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
reify docstring doesn't render properly
The testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify
Unfortunately this section is essential for understanding the example.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/decorator.py`
Content:
```
1 from functools import update_wrapper
2
3
4 class reify(object):
5 """ Use as a class method decorator. It operates almost exactly like the
6 Python ``@property`` decorator, but it puts the result of the method it
7 decorates into the instance dict after the first call, effectively
8 replacing the function it decorates with an instance variable. It is, in
9 Python parlance, a non-data descriptor. An example:
10
11 .. testsetup::
12
13 from pyramid.decorator import reify
14
15 class Foo(object):
16 @reify
17 def jammy(self):
18 print('jammy called')
19 return 1
20
21 And usage of Foo:
22
23 .. doctest::
24
25 >>> f = Foo()
26 >>> v = f.jammy
27 jammy called
28 >>> print(v)
29 1
30 >>> f.jammy
31 1
32 >>> # jammy func not called the second time; it replaced itself with 1
33 >>> # Note: reassignment is possible
34 >>> f.jammy = 2
35 >>> f.jammy
36 2
37 """
38 def __init__(self, wrapped):
39 self.wrapped = wrapped
40 update_wrapper(self, wrapped)
41
42 def __get__(self, inst, objtype=None):
43 if inst is None:
44 return self
45 val = self.wrapped(inst)
46 setattr(inst, self.wrapped.__name__, val)
47 return val
48
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyramid/decorator.py b/pyramid/decorator.py
--- a/pyramid/decorator.py
+++ b/pyramid/decorator.py
@@ -8,6 +8,16 @@
replacing the function it decorates with an instance variable. It is, in
Python parlance, a non-data descriptor. An example:
+ .. code-block:: python
+
+ from pyramid.decorator import reify
+
+ class Foo(object):
+ @reify
+ def jammy(self):
+ print('jammy called')
+ return 1
+
.. testsetup::
from pyramid.decorator import reify
| {"golden_diff": "diff --git a/pyramid/decorator.py b/pyramid/decorator.py\n--- a/pyramid/decorator.py\n+++ b/pyramid/decorator.py\n@@ -8,6 +8,16 @@\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n \n+ .. code-block:: python\n+\n+ from pyramid.decorator import reify\n+\n+ class Foo(object):\n+ @reify\n+ def jammy(self):\n+ print('jammy called')\n+ return 1\n+\n .. testsetup::\n \n from pyramid.decorator import reify\n", "issue": "reify docstring doesn't render properly\nThe testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify\n\nUnfortunately this section is essential for understanding the example.\n\n", "before_files": [{"content": "from functools import update_wrapper\n\n\nclass reify(object):\n \"\"\" Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n\n .. testsetup::\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n And usage of Foo:\n\n .. doctest::\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n setattr(inst, self.wrapped.__name__, val)\n return val\n\n", "path": "pyramid/decorator.py"}], "after_files": [{"content": "from functools import update_wrapper\n\n\nclass reify(object):\n \"\"\" Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n\n .. code-block:: python\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n .. testsetup::\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n And usage of Foo:\n\n .. doctest::\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n setattr(inst, self.wrapped.__name__, val)\n return val\n\n", "path": "pyramid/decorator.py"}]} | 719 | 149 |
gh_patches_debug_39030 | rasdani/github-patches | git_diff | microsoft__nni-2370 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
failed to train with remote mode
Hello, everyone,
  I used the experiment examples/trials/weight_sharing/ga_squad/config_remote.yml and modified the machineList with my own machine, but the experiment keeps waiting and the log is below. Are there any suggestions?
```
[9/29/2019, 4:40:41 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:41 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:44 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:44 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:47 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:47 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:50 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:50 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:53 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:53 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:56 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:56 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:40:59 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:40:59 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:02 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:02 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:05 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:05 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:08 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:08 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:11 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:11 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:14 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:14 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:17 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:17 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:20 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:20 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:23 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:23 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:26 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:26 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:29 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:29 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:32 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
[9/29/2019, 4:41:32 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]
[9/29/2019, 4:41:35 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/nni_gpu_tool/gpu_metrics_collector.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 import json
5 import os
6 import subprocess
7 import sys
8 import time
9 import traceback
10
11 from xml.dom import minidom
12
13 def check_ready_to_run():
14 if sys.platform == 'win32':
15 pgrep_output = subprocess.check_output(
16 'wmic process where "CommandLine like \'%nni_gpu_tool.gpu_metrics_collector%\' and name like \'%python%\'" get processId')
17 pidList = pgrep_output.decode("utf-8").strip().split()
18 pidList.pop(0) # remove the key word 'ProcessId'
19 pidList = list(map(int, pidList))
20 pidList.remove(os.getpid())
21 return not pidList
22 else:
23 pgrep_output = subprocess.check_output('pgrep -fxu "$(whoami)" \'python3 -m nni_gpu_tool.gpu_metrics_collector\'', shell=True)
24 pidList = []
25 for pid in pgrep_output.splitlines():
26 pidList.append(int(pid))
27 pidList.remove(os.getpid())
28 return not pidList
29
30 def main(argv):
31 metrics_output_dir = os.environ['METRIC_OUTPUT_DIR']
32 if check_ready_to_run() == False:
33 # GPU metrics collector is already running. Exit
34 exit(2)
35 cmd = 'nvidia-smi -q -x'.split()
36 while(True):
37 try:
38 smi_output = subprocess.check_output(cmd)
39 except Exception:
40 traceback.print_exc()
41 gen_empty_gpu_metric(metrics_output_dir)
42 break
43 parse_nvidia_smi_result(smi_output, metrics_output_dir)
44 # TODO: change to sleep time configurable via arguments
45 time.sleep(5)
46
47 def parse_nvidia_smi_result(smi, outputDir):
48 try:
49 old_umask = os.umask(0)
50 xmldoc = minidom.parseString(smi)
51 gpuList = xmldoc.getElementsByTagName('gpu')
52 with open(os.path.join(outputDir, "gpu_metrics"), 'a') as outputFile:
53 outPut = {}
54 outPut["Timestamp"] = time.asctime(time.localtime())
55 outPut["gpuCount"] = len(gpuList)
56 outPut["gpuInfos"] = []
57 for gpuIndex, gpu in enumerate(gpuList):
58 gpuInfo = {}
59 gpuInfo['index'] = gpuIndex
60 gpuInfo['gpuUtil'] = gpu.getElementsByTagName('utilization')[0]\
61 .getElementsByTagName('gpu_util')[0]\
62 .childNodes[0].data.replace("%", "").strip()
63 gpuInfo['gpuMemUtil'] = gpu.getElementsByTagName('utilization')[0]\
64 .getElementsByTagName('memory_util')[0]\
65 .childNodes[0].data.replace("%", "").strip()
66 processes = gpu.getElementsByTagName('processes')
67 runningProNumber = len(processes[0].getElementsByTagName('process_info'))
68 gpuInfo['activeProcessNum'] = runningProNumber
69
70 outPut["gpuInfos"].append(gpuInfo)
71 print(outPut)
72 outputFile.write("{}\n".format(json.dumps(outPut, sort_keys=True)))
73 outputFile.flush();
74 except:
75 # e_info = sys.exc_info()
76 print('xmldoc paring error')
77 finally:
78 os.umask(old_umask)
79
80 def gen_empty_gpu_metric(outputDir):
81 try:
82 old_umask = os.umask(0)
83 with open(os.path.join(outputDir, "gpu_metrics"), 'a') as outputFile:
84 outPut = {}
85 outPut["Timestamp"] = time.asctime(time.localtime())
86 outPut["gpuCount"] = 0
87 outPut["gpuInfos"] = []
88 print(outPut)
89 outputFile.write("{}\n".format(json.dumps(outPut, sort_keys=True)))
90 outputFile.flush()
91 except Exception:
92 traceback.print_exc()
93 finally:
94 os.umask(old_umask)
95
96
97 if __name__ == "__main__":
98 main(sys.argv[1:])
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/nni_gpu_tool/gpu_metrics_collector.py b/tools/nni_gpu_tool/gpu_metrics_collector.py
--- a/tools/nni_gpu_tool/gpu_metrics_collector.py
+++ b/tools/nni_gpu_tool/gpu_metrics_collector.py
@@ -10,27 +10,31 @@
from xml.dom import minidom
+
def check_ready_to_run():
if sys.platform == 'win32':
pgrep_output = subprocess.check_output(
'wmic process where "CommandLine like \'%nni_gpu_tool.gpu_metrics_collector%\' and name like \'%python%\'" get processId')
pidList = pgrep_output.decode("utf-8").strip().split()
- pidList.pop(0) # remove the key word 'ProcessId'
+ pidList.pop(0) # remove the key word 'ProcessId'
pidList = list(map(int, pidList))
pidList.remove(os.getpid())
return not pidList
else:
- pgrep_output = subprocess.check_output('pgrep -fxu "$(whoami)" \'python3 -m nni_gpu_tool.gpu_metrics_collector\'', shell=True)
+ pgrep_output = subprocess.check_output('pgrep -afu "$(whoami)" \'python3 -m nni_gpu_tool.gpu_metrics_collector\'', shell=True)
pidList = []
for pid in pgrep_output.splitlines():
- pidList.append(int(pid))
- pidList.remove(os.getpid())
+ pid = pid.decode()
+ if "pgrep " in pid or pid.startswith('%s ' % os.getpid()) or pid.startswith('%s ' % os.getppid()):
+ continue
+ pidList.append(pid)
return not pidList
+
def main(argv):
metrics_output_dir = os.environ['METRIC_OUTPUT_DIR']
if check_ready_to_run() == False:
- # GPU metrics collector is already running. Exit
+ print("GPU metrics collector is already running. exiting...")
exit(2)
cmd = 'nvidia-smi -q -x'.split()
while(True):
@@ -44,6 +48,7 @@
# TODO: change to sleep time configurable via arguments
time.sleep(5)
+
def parse_nvidia_smi_result(smi, outputDir):
try:
old_umask = os.umask(0)
@@ -70,13 +75,14 @@
outPut["gpuInfos"].append(gpuInfo)
print(outPut)
outputFile.write("{}\n".format(json.dumps(outPut, sort_keys=True)))
- outputFile.flush();
- except:
+ outputFile.flush()
+ except Exception as error:
# e_info = sys.exc_info()
- print('xmldoc paring error')
+ print('gpu_metrics_collector error: %s' % error)
finally:
os.umask(old_umask)
+
def gen_empty_gpu_metric(outputDir):
try:
old_umask = os.umask(0)
| {"golden_diff": "diff --git a/tools/nni_gpu_tool/gpu_metrics_collector.py b/tools/nni_gpu_tool/gpu_metrics_collector.py\n--- a/tools/nni_gpu_tool/gpu_metrics_collector.py\n+++ b/tools/nni_gpu_tool/gpu_metrics_collector.py\n@@ -10,27 +10,31 @@\n \n from xml.dom import minidom\n \n+\n def check_ready_to_run():\n if sys.platform == 'win32':\n pgrep_output = subprocess.check_output(\n 'wmic process where \"CommandLine like \\'%nni_gpu_tool.gpu_metrics_collector%\\' and name like \\'%python%\\'\" get processId')\n pidList = pgrep_output.decode(\"utf-8\").strip().split()\n- pidList.pop(0) # remove the key word 'ProcessId'\n+ pidList.pop(0) # remove the key word 'ProcessId'\n pidList = list(map(int, pidList))\n pidList.remove(os.getpid())\n return not pidList\n else:\n- pgrep_output = subprocess.check_output('pgrep -fxu \"$(whoami)\" \\'python3 -m nni_gpu_tool.gpu_metrics_collector\\'', shell=True)\n+ pgrep_output = subprocess.check_output('pgrep -afu \"$(whoami)\" \\'python3 -m nni_gpu_tool.gpu_metrics_collector\\'', shell=True)\n pidList = []\n for pid in pgrep_output.splitlines():\n- pidList.append(int(pid))\n- pidList.remove(os.getpid())\n+ pid = pid.decode()\n+ if \"pgrep \" in pid or pid.startswith('%s ' % os.getpid()) or pid.startswith('%s ' % os.getppid()):\n+ continue\n+ pidList.append(pid)\n return not pidList\n \n+\n def main(argv):\n metrics_output_dir = os.environ['METRIC_OUTPUT_DIR']\n if check_ready_to_run() == False:\n- # GPU metrics collector is already running. Exit\n+ print(\"GPU metrics collector is already running. exiting...\")\n exit(2)\n cmd = 'nvidia-smi -q -x'.split()\n while(True):\n@@ -44,6 +48,7 @@\n # TODO: change to sleep time configurable via arguments\n time.sleep(5)\n \n+\n def parse_nvidia_smi_result(smi, outputDir):\n try:\n old_umask = os.umask(0)\n@@ -70,13 +75,14 @@\n outPut[\"gpuInfos\"].append(gpuInfo)\n print(outPut)\n outputFile.write(\"{}\\n\".format(json.dumps(outPut, sort_keys=True)))\n- outputFile.flush();\n- except:\n+ outputFile.flush()\n+ except Exception as error:\n # e_info = sys.exc_info()\n- print('xmldoc paring error')\n+ print('gpu_metrics_collector error: %s' % error)\n finally:\n os.umask(old_umask)\n \n+\n def gen_empty_gpu_metric(outputDir):\n try:\n old_umask = os.umask(0)\n", "issue": "failed to train with remote mode\nhello, everyone,\r\n I used the experiment examples/trials/weight_sharing/ga_squad/config_remote.yml, and I modified the machineList with my own machine, but the experiment was waiting and the log is below. 
Is there any suggestion?\r\n```\r\n[9/29/2019, 4:40:41 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:41 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:44 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:44 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:47 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:47 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:50 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:50 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:53 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:53 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:56 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:56 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:40:59 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:40:59 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:02 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:02 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:05 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:05 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:08 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:08 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:11 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:11 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:14 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:14 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:17 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:17 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:20 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:20 PM] INFO [ 'Right now no available GPU can be allocated 
for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:23 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:23 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:26 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:26 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:29 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:29 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:32 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n[9/29/2019, 4:41:32 PM] INFO [ 'Right now no available GPU can be allocated for trial iwzEY, will try to schedule later' ]\r\n[9/29/2019, 4:41:35 PM] WARNING [ 'Scheduler: trialJob id iwzEY, no machine can be scheduled, return TMP_NO_AVAILABLE_GPU ' ]\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport json\nimport os\nimport subprocess\nimport sys\nimport time\nimport traceback\n\nfrom xml.dom import minidom\n\ndef check_ready_to_run():\n if sys.platform == 'win32':\n pgrep_output = subprocess.check_output(\n 'wmic process where \"CommandLine like \\'%nni_gpu_tool.gpu_metrics_collector%\\' and name like \\'%python%\\'\" get processId')\n pidList = pgrep_output.decode(\"utf-8\").strip().split()\n pidList.pop(0) # remove the key word 'ProcessId'\n pidList = list(map(int, pidList))\n pidList.remove(os.getpid())\n return not pidList\n else:\n pgrep_output = subprocess.check_output('pgrep -fxu \"$(whoami)\" \\'python3 -m nni_gpu_tool.gpu_metrics_collector\\'', shell=True)\n pidList = []\n for pid in pgrep_output.splitlines():\n pidList.append(int(pid))\n pidList.remove(os.getpid())\n return not pidList\n\ndef main(argv):\n metrics_output_dir = os.environ['METRIC_OUTPUT_DIR']\n if check_ready_to_run() == False:\n # GPU metrics collector is already running. 
Exit\n exit(2)\n cmd = 'nvidia-smi -q -x'.split()\n while(True):\n try:\n smi_output = subprocess.check_output(cmd)\n except Exception:\n traceback.print_exc()\n gen_empty_gpu_metric(metrics_output_dir)\n break\n parse_nvidia_smi_result(smi_output, metrics_output_dir)\n # TODO: change to sleep time configurable via arguments\n time.sleep(5)\n\ndef parse_nvidia_smi_result(smi, outputDir):\n try:\n old_umask = os.umask(0)\n xmldoc = minidom.parseString(smi)\n gpuList = xmldoc.getElementsByTagName('gpu')\n with open(os.path.join(outputDir, \"gpu_metrics\"), 'a') as outputFile:\n outPut = {}\n outPut[\"Timestamp\"] = time.asctime(time.localtime())\n outPut[\"gpuCount\"] = len(gpuList)\n outPut[\"gpuInfos\"] = []\n for gpuIndex, gpu in enumerate(gpuList):\n gpuInfo = {}\n gpuInfo['index'] = gpuIndex\n gpuInfo['gpuUtil'] = gpu.getElementsByTagName('utilization')[0]\\\n .getElementsByTagName('gpu_util')[0]\\\n .childNodes[0].data.replace(\"%\", \"\").strip()\n gpuInfo['gpuMemUtil'] = gpu.getElementsByTagName('utilization')[0]\\\n .getElementsByTagName('memory_util')[0]\\\n .childNodes[0].data.replace(\"%\", \"\").strip()\n processes = gpu.getElementsByTagName('processes')\n runningProNumber = len(processes[0].getElementsByTagName('process_info'))\n gpuInfo['activeProcessNum'] = runningProNumber\n\n outPut[\"gpuInfos\"].append(gpuInfo)\n print(outPut)\n outputFile.write(\"{}\\n\".format(json.dumps(outPut, sort_keys=True)))\n outputFile.flush();\n except:\n # e_info = sys.exc_info()\n print('xmldoc paring error')\n finally:\n os.umask(old_umask)\n\ndef gen_empty_gpu_metric(outputDir):\n try:\n old_umask = os.umask(0)\n with open(os.path.join(outputDir, \"gpu_metrics\"), 'a') as outputFile:\n outPut = {}\n outPut[\"Timestamp\"] = time.asctime(time.localtime())\n outPut[\"gpuCount\"] = 0\n outPut[\"gpuInfos\"] = []\n print(outPut)\n outputFile.write(\"{}\\n\".format(json.dumps(outPut, sort_keys=True)))\n outputFile.flush()\n except Exception:\n traceback.print_exc()\n finally:\n os.umask(old_umask)\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "tools/nni_gpu_tool/gpu_metrics_collector.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nimport json\nimport os\nimport subprocess\nimport sys\nimport time\nimport traceback\n\nfrom xml.dom import minidom\n\n\ndef check_ready_to_run():\n if sys.platform == 'win32':\n pgrep_output = subprocess.check_output(\n 'wmic process where \"CommandLine like \\'%nni_gpu_tool.gpu_metrics_collector%\\' and name like \\'%python%\\'\" get processId')\n pidList = pgrep_output.decode(\"utf-8\").strip().split()\n pidList.pop(0) # remove the key word 'ProcessId'\n pidList = list(map(int, pidList))\n pidList.remove(os.getpid())\n return not pidList\n else:\n pgrep_output = subprocess.check_output('pgrep -afu \"$(whoami)\" \\'python3 -m nni_gpu_tool.gpu_metrics_collector\\'', shell=True)\n pidList = []\n for pid in pgrep_output.splitlines():\n pid = pid.decode()\n if \"pgrep \" in pid or pid.startswith('%s ' % os.getpid()) or pid.startswith('%s ' % os.getppid()):\n continue\n pidList.append(pid)\n return not pidList\n\n\ndef main(argv):\n metrics_output_dir = os.environ['METRIC_OUTPUT_DIR']\n if check_ready_to_run() == False:\n print(\"GPU metrics collector is already running. 
exiting...\")\n exit(2)\n cmd = 'nvidia-smi -q -x'.split()\n while(True):\n try:\n smi_output = subprocess.check_output(cmd)\n except Exception:\n traceback.print_exc()\n gen_empty_gpu_metric(metrics_output_dir)\n break\n parse_nvidia_smi_result(smi_output, metrics_output_dir)\n # TODO: change to sleep time configurable via arguments\n time.sleep(5)\n\n\ndef parse_nvidia_smi_result(smi, outputDir):\n try:\n old_umask = os.umask(0)\n xmldoc = minidom.parseString(smi)\n gpuList = xmldoc.getElementsByTagName('gpu')\n with open(os.path.join(outputDir, \"gpu_metrics\"), 'a') as outputFile:\n outPut = {}\n outPut[\"Timestamp\"] = time.asctime(time.localtime())\n outPut[\"gpuCount\"] = len(gpuList)\n outPut[\"gpuInfos\"] = []\n for gpuIndex, gpu in enumerate(gpuList):\n gpuInfo = {}\n gpuInfo['index'] = gpuIndex\n gpuInfo['gpuUtil'] = gpu.getElementsByTagName('utilization')[0]\\\n .getElementsByTagName('gpu_util')[0]\\\n .childNodes[0].data.replace(\"%\", \"\").strip()\n gpuInfo['gpuMemUtil'] = gpu.getElementsByTagName('utilization')[0]\\\n .getElementsByTagName('memory_util')[0]\\\n .childNodes[0].data.replace(\"%\", \"\").strip()\n processes = gpu.getElementsByTagName('processes')\n runningProNumber = len(processes[0].getElementsByTagName('process_info'))\n gpuInfo['activeProcessNum'] = runningProNumber\n\n outPut[\"gpuInfos\"].append(gpuInfo)\n print(outPut)\n outputFile.write(\"{}\\n\".format(json.dumps(outPut, sort_keys=True)))\n outputFile.flush()\n except Exception as error:\n # e_info = sys.exc_info()\n print('gpu_metrics_collector error: %s' % error)\n finally:\n os.umask(old_umask)\n\n\ndef gen_empty_gpu_metric(outputDir):\n try:\n old_umask = os.umask(0)\n with open(os.path.join(outputDir, \"gpu_metrics\"), 'a') as outputFile:\n outPut = {}\n outPut[\"Timestamp\"] = time.asctime(time.localtime())\n outPut[\"gpuCount\"] = 0\n outPut[\"gpuInfos\"] = []\n print(outPut)\n outputFile.write(\"{}\\n\".format(json.dumps(outPut, sort_keys=True)))\n outputFile.flush()\n except Exception:\n traceback.print_exc()\n finally:\n os.umask(old_umask)\n\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n", "path": "tools/nni_gpu_tool/gpu_metrics_collector.py"}]} | 3,076 | 661 |
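The patch in this entry turns on filtering `pgrep -af` output so the collector does not count itself, its parent, or the `pgrep` invocation as an already-running instance. Below is a minimal sketch of that duplicate-process check; it assumes a POSIX system with `pgrep` available, and the function name is illustrative rather than taken from the NNI repository.

```python
import os
import subprocess


def another_collector_running(pattern="nni_gpu_tool.gpu_metrics_collector"):
    """Return True if a *different* process matching `pattern` is already running.

    Mirrors the idea of the patch above: list matching processes with
    `pgrep -af`, then drop the pgrep line itself plus this process and its
    parent before deciding.
    """
    try:
        output = subprocess.check_output(["pgrep", "-af", pattern])
    except subprocess.CalledProcessError:
        return False  # pgrep exits non-zero when nothing matches
    survivors = []
    for raw in output.splitlines():
        line = raw.decode()
        pid = line.split(None, 1)[0]
        if "pgrep" in line or pid in (str(os.getpid()), str(os.getppid())):
            continue
        survivors.append(line)
    return bool(survivors)


if __name__ == "__main__":
    print(another_collector_running())
```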
gh_patches_debug_34086 | rasdani/github-patches | git_diff | nipy__nipype-2912 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest nibabel release breaks test
### Summary
https://github.com/nipy/nipype/blob/fcbe28e58ecabedbcc2dcaf09bf9a7329ae43fcf/nipype/interfaces/tests/test_image.py#L17
blows up with nibabel 2.4.0
### Actual behavior
`AssertionError`
### Expected behavior
Pass
### How to replicate the behavior
View any recent CI build (e.g. https://travis-ci.org/nipy/nipype/jobs/514714882)
Relevant PR - nipy/nibabel#741
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/interfaces/image.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4
5 from ..utils.filemanip import fname_presuffix
6 from .base import (SimpleInterface, TraitedSpec, BaseInterfaceInputSpec,
7 traits, File)
8
9
10 class RescaleInputSpec(BaseInterfaceInputSpec):
11 in_file = File(exists=True, mandatory=True,
12 desc='Skull-stripped image to rescale')
13 ref_file = File(exists=True, mandatory=True,
14 desc='Skull-stripped reference image')
15 invert = traits.Bool(desc='Invert contrast of rescaled image')
16 percentile = traits.Range(low=0., high=50., value=0., usedefault=True,
17 desc='Percentile to use for reference to allow '
18 'for outliers - 1 indicates the 1st and '
19 '99th percentiles in the input file will '
20 'be mapped to the 99th and 1st percentiles '
21 'in the reference; 0 indicates minima and '
22 'maxima will be mapped')
23
24
25 class RescaleOutputSpec(TraitedSpec):
26 out_file = File(exists=True, desc='Rescaled image')
27
28
29 class Rescale(SimpleInterface):
30 """Rescale an image
31
32 Rescales the non-zero portion of ``in_file`` to match the bounds of the
33 non-zero portion of ``ref_file``.
34 Reference values in the input and reference images are defined by the
35 ``percentile`` parameter, and the reference values in each image are
36 identified and the remaining values are scaled accordingly.
37 In the case of ``percentile == 0``, the reference values are the maxima
38 and minima of each image.
39 If the ``invert`` parameter is set, the input file is inverted prior to
40 rescaling.
41
42 Examples
43 --------
44
45 To use a high-resolution T1w image as a registration target for a T2\*
46 image, it may be useful to invert the T1w image and rescale to the T2\*
47 range.
48 Using the 1st and 99th percentiles may reduce the impact of outlier
49 voxels.
50
51 >>> from nipype.interfaces.image import Rescale
52 >>> invert_t1w = Rescale(invert=True)
53 >>> invert_t1w.inputs.in_file = 'structural.nii'
54 >>> invert_t1w.inputs.ref_file = 'functional.nii'
55 >>> invert_t1w.inputs.percentile = 1.
56 >>> res = invert_t1w.run() # doctest: +SKIP
57
58 """
59 input_spec = RescaleInputSpec
60 output_spec = RescaleOutputSpec
61
62 def _run_interface(self, runtime):
63 import numpy as np
64 import nibabel as nb
65
66 img = nb.load(self.inputs.in_file)
67 data = img.get_data()
68 ref_data = nb.load(self.inputs.ref_file).get_data()
69
70 in_mask = data > 0
71 ref_mask = ref_data > 0
72
73 q = [self.inputs.percentile, 100. - self.inputs.percentile]
74 in_low, in_high = np.percentile(data[in_mask], q)
75 ref_low, ref_high = np.percentile(ref_data[ref_mask], q)
76 scale_factor = (ref_high - ref_low) / (in_high - in_low)
77
78 signal = in_high - data if self.inputs.invert else data - in_low
79 out_data = in_mask * (signal * scale_factor + ref_low)
80
81 suffix = '_inv' if self.inputs.invert else '_rescaled'
82 out_file = fname_presuffix(self.inputs.in_file, suffix=suffix,
83 newpath=runtime.cwd)
84 img.__class__(out_data, img.affine, img.header).to_filename(out_file)
85
86 self._results['out_file'] = out_file
87 return runtime
88
89
90 _axes = ('RL', 'AP', 'SI')
91 _orientations = tuple(
92 ''.join((x[i], y[j], z[k]))
93 for x in _axes for y in _axes for z in _axes
94 if x != y != z != x
95 for i in (0, 1) for j in (0, 1) for k in (0, 1))
96
97
98 class ReorientInputSpec(BaseInterfaceInputSpec):
99 in_file = File(exists=True, mandatory=True, desc='Input image')
100 orientation = traits.Enum(_orientations, usedefault=True,
101 desc='Target axis orientation')
102
103
104 class ReorientOutputSpec(TraitedSpec):
105 out_file = File(exists=True, desc='Reoriented image')
106 transform = File(exists=True,
107 desc='Affine transform from input orientation to output')
108
109
110 class Reorient(SimpleInterface):
111 """Conform an image to a given orientation
112
113 Flips and reorder the image data array so that the axes match the
114 directions indicated in ``orientation``.
115 The default ``RAS`` orientation corresponds to the first axis being ordered
116 from left to right, the second axis from posterior to anterior, and the
117 third axis from inferior to superior.
118
119 For oblique images, the original orientation is considered to be the
120 closest plumb orientation.
121
122 No resampling is performed, and thus the output image is not de-obliqued
123 or registered to any other image or template.
124
125 The effective transform is calculated from the original affine matrix to
126 the reoriented affine matrix.
127
128 Examples
129 --------
130
131 If an image is not reoriented, the original file is not modified
132
133 .. testsetup::
134
135 >>> def print_affine(matrix):
136 ... print(str(matrix).replace(']', ' ').replace('[', ' '))
137
138 >>> import numpy as np
139 >>> from nipype.interfaces.image import Reorient
140 >>> reorient = Reorient(orientation='LPS')
141 >>> reorient.inputs.in_file = 'segmentation0.nii.gz'
142 >>> res = reorient.run()
143 >>> res.outputs.out_file
144 'segmentation0.nii.gz'
145
146 >>> print_affine(np.loadtxt(res.outputs.transform))
147 1. 0. 0. 0.
148 0. 1. 0. 0.
149 0. 0. 1. 0.
150 0. 0. 0. 1.
151
152 >>> reorient.inputs.orientation = 'RAS'
153 >>> res = reorient.run()
154 >>> res.outputs.out_file # doctest: +ELLIPSIS
155 '.../segmentation0_ras.nii.gz'
156
157 >>> print_affine(np.loadtxt(res.outputs.transform))
158 -1. 0. 0. 60.
159 0. -1. 0. 72.
160 0. 0. 1. 0.
161 0. 0. 0. 1.
162
163 .. testcleanup::
164
165 >>> import os
166 >>> os.unlink(res.outputs.out_file)
167 >>> os.unlink(res.outputs.transform)
168
169 """
170 input_spec = ReorientInputSpec
171 output_spec = ReorientOutputSpec
172
173 def _run_interface(self, runtime):
174 import numpy as np
175 import nibabel as nb
176 from nibabel.orientations import (
177 axcodes2ornt, ornt_transform, inv_ornt_aff)
178
179 fname = self.inputs.in_file
180 orig_img = nb.load(fname)
181
182 # Find transform from current (approximate) orientation to
183 # target, in nibabel orientation matrix and affine forms
184 orig_ornt = nb.io_orientation(orig_img.affine)
185 targ_ornt = axcodes2ornt(self.inputs.orientation)
186 transform = ornt_transform(orig_ornt, targ_ornt)
187 affine_xfm = inv_ornt_aff(transform, orig_img.shape)
188
189 # Check can be eliminated when minimum nibabel version >= 2.2
190 if hasattr(orig_img, 'as_reoriented'):
191 reoriented = orig_img.as_reoriented(transform)
192 else:
193 reoriented = _as_reoriented_backport(orig_img, transform)
194
195 # Image may be reoriented
196 if reoriented is not orig_img:
197 suffix = '_' + self.inputs.orientation.lower()
198 out_name = fname_presuffix(fname, suffix=suffix,
199 newpath=runtime.cwd)
200 reoriented.to_filename(out_name)
201 else:
202 out_name = fname
203
204 mat_name = fname_presuffix(fname, suffix='.mat',
205 newpath=runtime.cwd, use_ext=False)
206 np.savetxt(mat_name, affine_xfm, fmt='%.08f')
207
208 self._results['out_file'] = out_name
209 self._results['transform'] = mat_name
210
211 return runtime
212
213
214 def _as_reoriented_backport(img, ornt):
215 """Backport of img.as_reoriented as of nibabel 2.2.0"""
216 import numpy as np
217 import nibabel as nb
218 from nibabel.orientations import inv_ornt_aff
219 if np.array_equal(ornt, [[0, 1], [1, 1], [2, 1]]):
220 return img
221
222 t_arr = nb.apply_orientation(img.get_data(), ornt)
223 new_aff = img.affine.dot(inv_ornt_aff(ornt, img.shape))
224 reoriented = img.__class__(t_arr, new_aff, img.header)
225
226 if isinstance(reoriented, nb.Nifti1Pair):
227 # Also apply the transform to the dim_info fields
228 new_dim = list(reoriented.header.get_dim_info())
229 for idx, value in enumerate(new_dim):
230 # For each value, leave as None if it was that way,
231 # otherwise check where we have mapped it to
232 if value is None:
233 continue
234 new_dim[idx] = np.where(ornt[:, 0] == idx)[0]
235
236 reoriented.header.set_dim_info(*new_dim)
237
238 return reoriented
239
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/interfaces/image.py b/nipype/interfaces/image.py
--- a/nipype/interfaces/image.py
+++ b/nipype/interfaces/image.py
@@ -5,6 +5,7 @@
from ..utils.filemanip import fname_presuffix
from .base import (SimpleInterface, TraitedSpec, BaseInterfaceInputSpec,
traits, File)
+from .. import LooseVersion
class RescaleInputSpec(BaseInterfaceInputSpec):
@@ -186,8 +187,8 @@
transform = ornt_transform(orig_ornt, targ_ornt)
affine_xfm = inv_ornt_aff(transform, orig_img.shape)
- # Check can be eliminated when minimum nibabel version >= 2.2
- if hasattr(orig_img, 'as_reoriented'):
+ # Check can be eliminated when minimum nibabel version >= 2.4
+ if LooseVersion(nb.__version__) >= LooseVersion('2.4.0'):
reoriented = orig_img.as_reoriented(transform)
else:
reoriented = _as_reoriented_backport(orig_img, transform)
@@ -212,7 +213,7 @@
def _as_reoriented_backport(img, ornt):
- """Backport of img.as_reoriented as of nibabel 2.2.0"""
+ """Backport of img.as_reoriented as of nibabel 2.4.0"""
import numpy as np
import nibabel as nb
from nibabel.orientations import inv_ornt_aff
@@ -225,13 +226,8 @@
if isinstance(reoriented, nb.Nifti1Pair):
# Also apply the transform to the dim_info fields
- new_dim = list(reoriented.header.get_dim_info())
- for idx, value in enumerate(new_dim):
- # For each value, leave as None if it was that way,
- # otherwise check where we have mapped it to
- if value is None:
- continue
- new_dim[idx] = np.where(ornt[:, 0] == idx)[0]
+ new_dim = [None if orig_dim is None else int(ornt[orig_dim, 0])
+ for orig_dim in img.header.get_dim_info()]
reoriented.header.set_dim_info(*new_dim)
| {"golden_diff": "diff --git a/nipype/interfaces/image.py b/nipype/interfaces/image.py\n--- a/nipype/interfaces/image.py\n+++ b/nipype/interfaces/image.py\n@@ -5,6 +5,7 @@\n from ..utils.filemanip import fname_presuffix\n from .base import (SimpleInterface, TraitedSpec, BaseInterfaceInputSpec,\n traits, File)\n+from .. import LooseVersion\n \n \n class RescaleInputSpec(BaseInterfaceInputSpec):\n@@ -186,8 +187,8 @@\n transform = ornt_transform(orig_ornt, targ_ornt)\n affine_xfm = inv_ornt_aff(transform, orig_img.shape)\n \n- # Check can be eliminated when minimum nibabel version >= 2.2\n- if hasattr(orig_img, 'as_reoriented'):\n+ # Check can be eliminated when minimum nibabel version >= 2.4\n+ if LooseVersion(nb.__version__) >= LooseVersion('2.4.0'):\n reoriented = orig_img.as_reoriented(transform)\n else:\n reoriented = _as_reoriented_backport(orig_img, transform)\n@@ -212,7 +213,7 @@\n \n \n def _as_reoriented_backport(img, ornt):\n- \"\"\"Backport of img.as_reoriented as of nibabel 2.2.0\"\"\"\n+ \"\"\"Backport of img.as_reoriented as of nibabel 2.4.0\"\"\"\n import numpy as np\n import nibabel as nb\n from nibabel.orientations import inv_ornt_aff\n@@ -225,13 +226,8 @@\n \n if isinstance(reoriented, nb.Nifti1Pair):\n # Also apply the transform to the dim_info fields\n- new_dim = list(reoriented.header.get_dim_info())\n- for idx, value in enumerate(new_dim):\n- # For each value, leave as None if it was that way,\n- # otherwise check where we have mapped it to\n- if value is None:\n- continue\n- new_dim[idx] = np.where(ornt[:, 0] == idx)[0]\n+ new_dim = [None if orig_dim is None else int(ornt[orig_dim, 0])\n+ for orig_dim in img.header.get_dim_info()]\n \n reoriented.header.set_dim_info(*new_dim)\n", "issue": "Latest nibabel release breaks test\n### Summary\r\nhttps://github.com/nipy/nipype/blob/fcbe28e58ecabedbcc2dcaf09bf9a7329ae43fcf/nipype/interfaces/tests/test_image.py#L17\r\n\r\nblows up with nibabel 2.4.0\r\n### Actual behavior\r\n`AssertionError`\r\n### Expected behavior\r\nPass\r\n### How to replicate the behavior\r\nView any recent CI build (e.g. 
https://travis-ci.org/nipy/nipype/jobs/514714882)\r\n\r\nRelevant PR - nipy/nibabel#741\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\nfrom ..utils.filemanip import fname_presuffix\nfrom .base import (SimpleInterface, TraitedSpec, BaseInterfaceInputSpec,\n traits, File)\n\n\nclass RescaleInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True,\n desc='Skull-stripped image to rescale')\n ref_file = File(exists=True, mandatory=True,\n desc='Skull-stripped reference image')\n invert = traits.Bool(desc='Invert contrast of rescaled image')\n percentile = traits.Range(low=0., high=50., value=0., usedefault=True,\n desc='Percentile to use for reference to allow '\n 'for outliers - 1 indicates the 1st and '\n '99th percentiles in the input file will '\n 'be mapped to the 99th and 1st percentiles '\n 'in the reference; 0 indicates minima and '\n 'maxima will be mapped')\n\n\nclass RescaleOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='Rescaled image')\n\n\nclass Rescale(SimpleInterface):\n \"\"\"Rescale an image\n\n Rescales the non-zero portion of ``in_file`` to match the bounds of the\n non-zero portion of ``ref_file``.\n Reference values in the input and reference images are defined by the\n ``percentile`` parameter, and the reference values in each image are\n identified and the remaining values are scaled accordingly.\n In the case of ``percentile == 0``, the reference values are the maxima\n and minima of each image.\n If the ``invert`` parameter is set, the input file is inverted prior to\n rescaling.\n\n Examples\n --------\n\n To use a high-resolution T1w image as a registration target for a T2\\*\n image, it may be useful to invert the T1w image and rescale to the T2\\*\n range.\n Using the 1st and 99th percentiles may reduce the impact of outlier\n voxels.\n\n >>> from nipype.interfaces.image import Rescale\n >>> invert_t1w = Rescale(invert=True)\n >>> invert_t1w.inputs.in_file = 'structural.nii'\n >>> invert_t1w.inputs.ref_file = 'functional.nii'\n >>> invert_t1w.inputs.percentile = 1.\n >>> res = invert_t1w.run() # doctest: +SKIP\n\n \"\"\"\n input_spec = RescaleInputSpec\n output_spec = RescaleOutputSpec\n\n def _run_interface(self, runtime):\n import numpy as np\n import nibabel as nb\n\n img = nb.load(self.inputs.in_file)\n data = img.get_data()\n ref_data = nb.load(self.inputs.ref_file).get_data()\n\n in_mask = data > 0\n ref_mask = ref_data > 0\n\n q = [self.inputs.percentile, 100. 
- self.inputs.percentile]\n in_low, in_high = np.percentile(data[in_mask], q)\n ref_low, ref_high = np.percentile(ref_data[ref_mask], q)\n scale_factor = (ref_high - ref_low) / (in_high - in_low)\n\n signal = in_high - data if self.inputs.invert else data - in_low\n out_data = in_mask * (signal * scale_factor + ref_low)\n\n suffix = '_inv' if self.inputs.invert else '_rescaled'\n out_file = fname_presuffix(self.inputs.in_file, suffix=suffix,\n newpath=runtime.cwd)\n img.__class__(out_data, img.affine, img.header).to_filename(out_file)\n\n self._results['out_file'] = out_file\n return runtime\n\n\n_axes = ('RL', 'AP', 'SI')\n_orientations = tuple(\n ''.join((x[i], y[j], z[k]))\n for x in _axes for y in _axes for z in _axes\n if x != y != z != x\n for i in (0, 1) for j in (0, 1) for k in (0, 1))\n\n\nclass ReorientInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True, desc='Input image')\n orientation = traits.Enum(_orientations, usedefault=True,\n desc='Target axis orientation')\n\n\nclass ReorientOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='Reoriented image')\n transform = File(exists=True,\n desc='Affine transform from input orientation to output')\n\n\nclass Reorient(SimpleInterface):\n \"\"\"Conform an image to a given orientation\n\nFlips and reorder the image data array so that the axes match the\ndirections indicated in ``orientation``.\nThe default ``RAS`` orientation corresponds to the first axis being ordered\nfrom left to right, the second axis from posterior to anterior, and the\nthird axis from inferior to superior.\n\nFor oblique images, the original orientation is considered to be the\nclosest plumb orientation.\n\nNo resampling is performed, and thus the output image is not de-obliqued\nor registered to any other image or template.\n\nThe effective transform is calculated from the original affine matrix to\nthe reoriented affine matrix.\n\nExamples\n--------\n\nIf an image is not reoriented, the original file is not modified\n\n.. testsetup::\n\n >>> def print_affine(matrix):\n ... print(str(matrix).replace(']', ' ').replace('[', ' '))\n\n>>> import numpy as np\n>>> from nipype.interfaces.image import Reorient\n>>> reorient = Reorient(orientation='LPS')\n>>> reorient.inputs.in_file = 'segmentation0.nii.gz'\n>>> res = reorient.run()\n>>> res.outputs.out_file\n'segmentation0.nii.gz'\n\n>>> print_affine(np.loadtxt(res.outputs.transform))\n1. 0. 0. 0.\n0. 1. 0. 0.\n0. 0. 1. 0.\n0. 0. 0. 1.\n\n>>> reorient.inputs.orientation = 'RAS'\n>>> res = reorient.run()\n>>> res.outputs.out_file # doctest: +ELLIPSIS\n'.../segmentation0_ras.nii.gz'\n\n>>> print_affine(np.loadtxt(res.outputs.transform))\n-1. 0. 0. 60.\n 0. -1. 0. 72.\n 0. 0. 1. 0.\n 0. 0. 0. 1.\n\n.. 
testcleanup::\n\n >>> import os\n >>> os.unlink(res.outputs.out_file)\n >>> os.unlink(res.outputs.transform)\n\n\"\"\"\n input_spec = ReorientInputSpec\n output_spec = ReorientOutputSpec\n\n def _run_interface(self, runtime):\n import numpy as np\n import nibabel as nb\n from nibabel.orientations import (\n axcodes2ornt, ornt_transform, inv_ornt_aff)\n\n fname = self.inputs.in_file\n orig_img = nb.load(fname)\n\n # Find transform from current (approximate) orientation to\n # target, in nibabel orientation matrix and affine forms\n orig_ornt = nb.io_orientation(orig_img.affine)\n targ_ornt = axcodes2ornt(self.inputs.orientation)\n transform = ornt_transform(orig_ornt, targ_ornt)\n affine_xfm = inv_ornt_aff(transform, orig_img.shape)\n\n # Check can be eliminated when minimum nibabel version >= 2.2\n if hasattr(orig_img, 'as_reoriented'):\n reoriented = orig_img.as_reoriented(transform)\n else:\n reoriented = _as_reoriented_backport(orig_img, transform)\n\n # Image may be reoriented\n if reoriented is not orig_img:\n suffix = '_' + self.inputs.orientation.lower()\n out_name = fname_presuffix(fname, suffix=suffix,\n newpath=runtime.cwd)\n reoriented.to_filename(out_name)\n else:\n out_name = fname\n\n mat_name = fname_presuffix(fname, suffix='.mat',\n newpath=runtime.cwd, use_ext=False)\n np.savetxt(mat_name, affine_xfm, fmt='%.08f')\n\n self._results['out_file'] = out_name\n self._results['transform'] = mat_name\n\n return runtime\n\n\ndef _as_reoriented_backport(img, ornt):\n \"\"\"Backport of img.as_reoriented as of nibabel 2.2.0\"\"\"\n import numpy as np\n import nibabel as nb\n from nibabel.orientations import inv_ornt_aff\n if np.array_equal(ornt, [[0, 1], [1, 1], [2, 1]]):\n return img\n\n t_arr = nb.apply_orientation(img.get_data(), ornt)\n new_aff = img.affine.dot(inv_ornt_aff(ornt, img.shape))\n reoriented = img.__class__(t_arr, new_aff, img.header)\n\n if isinstance(reoriented, nb.Nifti1Pair):\n # Also apply the transform to the dim_info fields\n new_dim = list(reoriented.header.get_dim_info())\n for idx, value in enumerate(new_dim):\n # For each value, leave as None if it was that way,\n # otherwise check where we have mapped it to\n if value is None:\n continue\n new_dim[idx] = np.where(ornt[:, 0] == idx)[0]\n\n reoriented.header.set_dim_info(*new_dim)\n\n return reoriented\n", "path": "nipype/interfaces/image.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\nfrom ..utils.filemanip import fname_presuffix\nfrom .base import (SimpleInterface, TraitedSpec, BaseInterfaceInputSpec,\n traits, File)\nfrom .. 
import LooseVersion\n\n\nclass RescaleInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True,\n desc='Skull-stripped image to rescale')\n ref_file = File(exists=True, mandatory=True,\n desc='Skull-stripped reference image')\n invert = traits.Bool(desc='Invert contrast of rescaled image')\n percentile = traits.Range(low=0., high=50., value=0., usedefault=True,\n desc='Percentile to use for reference to allow '\n 'for outliers - 1 indicates the 1st and '\n '99th percentiles in the input file will '\n 'be mapped to the 99th and 1st percentiles '\n 'in the reference; 0 indicates minima and '\n 'maxima will be mapped')\n\n\nclass RescaleOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='Rescaled image')\n\n\nclass Rescale(SimpleInterface):\n \"\"\"Rescale an image\n\n Rescales the non-zero portion of ``in_file`` to match the bounds of the\n non-zero portion of ``ref_file``.\n Reference values in the input and reference images are defined by the\n ``percentile`` parameter, and the reference values in each image are\n identified and the remaining values are scaled accordingly.\n In the case of ``percentile == 0``, the reference values are the maxima\n and minima of each image.\n If the ``invert`` parameter is set, the input file is inverted prior to\n rescaling.\n\n Examples\n --------\n\n To use a high-resolution T1w image as a registration target for a T2\\*\n image, it may be useful to invert the T1w image and rescale to the T2\\*\n range.\n Using the 1st and 99th percentiles may reduce the impact of outlier\n voxels.\n\n >>> from nipype.interfaces.image import Rescale\n >>> invert_t1w = Rescale(invert=True)\n >>> invert_t1w.inputs.in_file = 'structural.nii'\n >>> invert_t1w.inputs.ref_file = 'functional.nii'\n >>> invert_t1w.inputs.percentile = 1.\n >>> res = invert_t1w.run() # doctest: +SKIP\n\n \"\"\"\n input_spec = RescaleInputSpec\n output_spec = RescaleOutputSpec\n\n def _run_interface(self, runtime):\n import numpy as np\n import nibabel as nb\n\n img = nb.load(self.inputs.in_file)\n data = img.get_data()\n ref_data = nb.load(self.inputs.ref_file).get_data()\n\n in_mask = data > 0\n ref_mask = ref_data > 0\n\n q = [self.inputs.percentile, 100. 
- self.inputs.percentile]\n in_low, in_high = np.percentile(data[in_mask], q)\n ref_low, ref_high = np.percentile(ref_data[ref_mask], q)\n scale_factor = (ref_high - ref_low) / (in_high - in_low)\n\n signal = in_high - data if self.inputs.invert else data - in_low\n out_data = in_mask * (signal * scale_factor + ref_low)\n\n suffix = '_inv' if self.inputs.invert else '_rescaled'\n out_file = fname_presuffix(self.inputs.in_file, suffix=suffix,\n newpath=runtime.cwd)\n img.__class__(out_data, img.affine, img.header).to_filename(out_file)\n\n self._results['out_file'] = out_file\n return runtime\n\n\n_axes = ('RL', 'AP', 'SI')\n_orientations = tuple(\n ''.join((x[i], y[j], z[k]))\n for x in _axes for y in _axes for z in _axes\n if x != y != z != x\n for i in (0, 1) for j in (0, 1) for k in (0, 1))\n\n\nclass ReorientInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True, desc='Input image')\n orientation = traits.Enum(_orientations, usedefault=True,\n desc='Target axis orientation')\n\n\nclass ReorientOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='Reoriented image')\n transform = File(exists=True,\n desc='Affine transform from input orientation to output')\n\n\nclass Reorient(SimpleInterface):\n \"\"\"Conform an image to a given orientation\n\nFlips and reorder the image data array so that the axes match the\ndirections indicated in ``orientation``.\nThe default ``RAS`` orientation corresponds to the first axis being ordered\nfrom left to right, the second axis from posterior to anterior, and the\nthird axis from inferior to superior.\n\nFor oblique images, the original orientation is considered to be the\nclosest plumb orientation.\n\nNo resampling is performed, and thus the output image is not de-obliqued\nor registered to any other image or template.\n\nThe effective transform is calculated from the original affine matrix to\nthe reoriented affine matrix.\n\nExamples\n--------\n\nIf an image is not reoriented, the original file is not modified\n\n.. testsetup::\n\n >>> def print_affine(matrix):\n ... print(str(matrix).replace(']', ' ').replace('[', ' '))\n\n>>> import numpy as np\n>>> from nipype.interfaces.image import Reorient\n>>> reorient = Reorient(orientation='LPS')\n>>> reorient.inputs.in_file = 'segmentation0.nii.gz'\n>>> res = reorient.run()\n>>> res.outputs.out_file\n'segmentation0.nii.gz'\n\n>>> print_affine(np.loadtxt(res.outputs.transform))\n1. 0. 0. 0.\n0. 1. 0. 0.\n0. 0. 1. 0.\n0. 0. 0. 1.\n\n>>> reorient.inputs.orientation = 'RAS'\n>>> res = reorient.run()\n>>> res.outputs.out_file # doctest: +ELLIPSIS\n'.../segmentation0_ras.nii.gz'\n\n>>> print_affine(np.loadtxt(res.outputs.transform))\n-1. 0. 0. 60.\n 0. -1. 0. 72.\n 0. 0. 1. 0.\n 0. 0. 0. 1.\n\n.. 
testcleanup::\n\n >>> import os\n >>> os.unlink(res.outputs.out_file)\n >>> os.unlink(res.outputs.transform)\n\n\"\"\"\n input_spec = ReorientInputSpec\n output_spec = ReorientOutputSpec\n\n def _run_interface(self, runtime):\n import numpy as np\n import nibabel as nb\n from nibabel.orientations import (\n axcodes2ornt, ornt_transform, inv_ornt_aff)\n\n fname = self.inputs.in_file\n orig_img = nb.load(fname)\n\n # Find transform from current (approximate) orientation to\n # target, in nibabel orientation matrix and affine forms\n orig_ornt = nb.io_orientation(orig_img.affine)\n targ_ornt = axcodes2ornt(self.inputs.orientation)\n transform = ornt_transform(orig_ornt, targ_ornt)\n affine_xfm = inv_ornt_aff(transform, orig_img.shape)\n\n # Check can be eliminated when minimum nibabel version >= 2.4\n if LooseVersion(nb.__version__) >= LooseVersion('2.4.0'):\n reoriented = orig_img.as_reoriented(transform)\n else:\n reoriented = _as_reoriented_backport(orig_img, transform)\n\n # Image may be reoriented\n if reoriented is not orig_img:\n suffix = '_' + self.inputs.orientation.lower()\n out_name = fname_presuffix(fname, suffix=suffix,\n newpath=runtime.cwd)\n reoriented.to_filename(out_name)\n else:\n out_name = fname\n\n mat_name = fname_presuffix(fname, suffix='.mat',\n newpath=runtime.cwd, use_ext=False)\n np.savetxt(mat_name, affine_xfm, fmt='%.08f')\n\n self._results['out_file'] = out_name\n self._results['transform'] = mat_name\n\n return runtime\n\n\ndef _as_reoriented_backport(img, ornt):\n \"\"\"Backport of img.as_reoriented as of nibabel 2.4.0\"\"\"\n import numpy as np\n import nibabel as nb\n from nibabel.orientations import inv_ornt_aff\n if np.array_equal(ornt, [[0, 1], [1, 1], [2, 1]]):\n return img\n\n t_arr = nb.apply_orientation(img.get_data(), ornt)\n new_aff = img.affine.dot(inv_ornt_aff(ornt, img.shape))\n reoriented = img.__class__(t_arr, new_aff, img.header)\n\n if isinstance(reoriented, nb.Nifti1Pair):\n # Also apply the transform to the dim_info fields\n new_dim = [None if orig_dim is None else int(ornt[orig_dim, 0])\n for orig_dim in img.header.get_dim_info()]\n\n reoriented.header.set_dim_info(*new_dim)\n\n return reoriented\n", "path": "nipype/interfaces/image.py"}]} | 3,165 | 503 |
gh_patches_debug_17650 | rasdani/github-patches | git_diff | conan-io__conan-4834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conan export-pkg with --package-folder does not set the metadata info
When an export-pkg command is used with the ``--package-folder``, the *metadata.json* is not updated with the package ID and package revision.
Then, when the user tries to upload the package, it errors with:
```
ERROR: Binary package hello/1.0@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 not found
```
Here is a test reproducing the issue:
```
def upload_export_pkg_test(self):
"""
Package metadata is created when doing an export-pkg and then uploading the package
"""
server1 = TestServer([("*/*@*/*", "*")], [("*/*@*/*", "*")], users={"lasote": "mypass"})
servers = OrderedDict()
servers["server1"] = server1
client = TestClient(servers=servers)
client.save({"release/kk.lib": ""})
client.run("user lasote -r server1 -p mypass")
client.run("new hello/1.0 --header")
client.run("export-pkg . user/testing -pf release")
client.run("upload hello/1.0@user/testing --all -r server1")
```
The original issue comes from #4768
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/cmd/export_pkg.py`
Content:
```
1 import os
2
3 from conans.client import packager
4 from conans.client.graph.graph_manager import load_deps_info
5 from conans.errors import ConanException
6 from conans.model.manifest import FileTreeManifest
7 from conans.model.ref import PackageReference
8 from conans.util.files import rmdir
9
10
11 def export_pkg(cache, graph_manager, hook_manager, recorder, output,
12 ref, source_folder, build_folder, package_folder, install_folder,
13 graph_info, force):
14
15 conan_file_path = cache.conanfile(ref)
16 if not os.path.exists(conan_file_path):
17 raise ConanException("Package recipe '%s' does not exist" % str(ref))
18
19 # The graph has to be loaded with build_mode=[ref.name], so that node is not tried
20 # to be downloaded from remotes
21 deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],
22 check_updates=False, update=False,
23 remote_name=None, recorder=recorder)
24 # this is a bit tricky, but works. The root (virtual), has only 1 neighbor,
25 # which is the exported pkg
26 nodes = deps_graph.root.neighbors()
27 conanfile = nodes[0].conanfile
28 from conans.client.conan_api import existing_info_files
29 if install_folder and existing_info_files(install_folder):
30 load_deps_info(install_folder, conanfile, required=True)
31 package_id = nodes[0].package_id
32 output.info("Packaging to %s" % package_id)
33 pref = PackageReference(ref, package_id)
34 dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)
35
36 if os.path.exists(dest_package_folder):
37 if force:
38 rmdir(dest_package_folder)
39 else:
40 raise ConanException("Package already exists. Please use --force, -f to "
41 "overwrite it")
42
43 recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash
44 conanfile.info.recipe_hash = recipe_hash
45 conanfile.develop = True
46 if package_folder:
47 packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,
48 hook_manager, conan_file_path, ref)
49 else:
50 packager.create_package(conanfile, package_id, source_folder, build_folder,
51 dest_package_folder, install_folder, hook_manager, conan_file_path,
52 ref, local=True)
53 with cache.package_layout(ref).update_metadata() as metadata:
54 readed_manifest = FileTreeManifest.load(dest_package_folder)
55 metadata.packages[package_id].revision = readed_manifest.summary_hash
56 metadata.packages[package_id].recipe_revision = metadata.recipe.revision
57
58 recorder.package_exported(pref)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/cmd/export_pkg.py b/conans/client/cmd/export_pkg.py
--- a/conans/client/cmd/export_pkg.py
+++ b/conans/client/cmd/export_pkg.py
@@ -50,9 +50,9 @@
packager.create_package(conanfile, package_id, source_folder, build_folder,
dest_package_folder, install_folder, hook_manager, conan_file_path,
ref, local=True)
- with cache.package_layout(ref).update_metadata() as metadata:
- readed_manifest = FileTreeManifest.load(dest_package_folder)
- metadata.packages[package_id].revision = readed_manifest.summary_hash
- metadata.packages[package_id].recipe_revision = metadata.recipe.revision
+ with cache.package_layout(ref).update_metadata() as metadata:
+ readed_manifest = FileTreeManifest.load(dest_package_folder)
+ metadata.packages[package_id].revision = readed_manifest.summary_hash
+ metadata.packages[package_id].recipe_revision = metadata.recipe.revision
recorder.package_exported(pref)
| {"golden_diff": "diff --git a/conans/client/cmd/export_pkg.py b/conans/client/cmd/export_pkg.py\n--- a/conans/client/cmd/export_pkg.py\n+++ b/conans/client/cmd/export_pkg.py\n@@ -50,9 +50,9 @@\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n- with cache.package_layout(ref).update_metadata() as metadata:\n- readed_manifest = FileTreeManifest.load(dest_package_folder)\n- metadata.packages[package_id].revision = readed_manifest.summary_hash\n- metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n+ with cache.package_layout(ref).update_metadata() as metadata:\n+ readed_manifest = FileTreeManifest.load(dest_package_folder)\n+ metadata.packages[package_id].revision = readed_manifest.summary_hash\n+ metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n \n recorder.package_exported(pref)\n", "issue": "conan export-pkg with --package-folder does not set the metadata info\nWhen an export-pkg command is used with the ``--package-folder``, the *metadata.json* is not updated with the package ID and package revision.\r\n\r\nThen, when the user tries to upload the package ir errors with:\r\n```\r\nERROR: Binary package hello/1.0@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 not found\r\n```\r\n\r\nHere is a test reproducing the issue:\r\n\r\n```\r\n def upload_export_pkg_test(self):\r\n \"\"\"\r\n Package metadata is created when doing an export-pkg and then uploading the package\r\n \"\"\"\r\n server1 = TestServer([(\"*/*@*/*\", \"*\")], [(\"*/*@*/*\", \"*\")], users={\"lasote\": \"mypass\"})\r\n servers = OrderedDict()\r\n servers[\"server1\"] = server1\r\n client = TestClient(servers=servers)\r\n client.save({\"release/kk.lib\": \"\"})\r\n client.run(\"user lasote -r server1 -p mypass\")\r\n client.run(\"new hello/1.0 --header\")\r\n client.run(\"export-pkg . user/testing -pf release\")\r\n client.run(\"upload hello/1.0@user/testing --all -r server1\")\r\n```\r\n\r\nThe original issue comes from #4768\n", "before_files": [{"content": "import os\n\nfrom conans.client import packager\nfrom conans.client.graph.graph_manager import load_deps_info\nfrom conans.errors import ConanException\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.model.ref import PackageReference\nfrom conans.util.files import rmdir\n\n\ndef export_pkg(cache, graph_manager, hook_manager, recorder, output,\n ref, source_folder, build_folder, package_folder, install_folder,\n graph_info, force):\n\n conan_file_path = cache.conanfile(ref)\n if not os.path.exists(conan_file_path):\n raise ConanException(\"Package recipe '%s' does not exist\" % str(ref))\n\n # The graph has to be loaded with build_mode=[ref.name], so that node is not tried\n # to be downloaded from remotes\n deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],\n check_updates=False, update=False,\n remote_name=None, recorder=recorder)\n # this is a bit tricky, but works. 
The root (virtual), has only 1 neighbor,\n # which is the exported pkg\n nodes = deps_graph.root.neighbors()\n conanfile = nodes[0].conanfile\n from conans.client.conan_api import existing_info_files\n if install_folder and existing_info_files(install_folder):\n load_deps_info(install_folder, conanfile, required=True)\n package_id = nodes[0].package_id\n output.info(\"Packaging to %s\" % package_id)\n pref = PackageReference(ref, package_id)\n dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)\n\n if os.path.exists(dest_package_folder):\n if force:\n rmdir(dest_package_folder)\n else:\n raise ConanException(\"Package already exists. Please use --force, -f to \"\n \"overwrite it\")\n\n recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash\n conanfile.info.recipe_hash = recipe_hash\n conanfile.develop = True\n if package_folder:\n packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,\n hook_manager, conan_file_path, ref)\n else:\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n with cache.package_layout(ref).update_metadata() as metadata:\n readed_manifest = FileTreeManifest.load(dest_package_folder)\n metadata.packages[package_id].revision = readed_manifest.summary_hash\n metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n\n recorder.package_exported(pref)\n", "path": "conans/client/cmd/export_pkg.py"}], "after_files": [{"content": "import os\n\nfrom conans.client import packager\nfrom conans.client.graph.graph_manager import load_deps_info\nfrom conans.errors import ConanException\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.model.ref import PackageReference\nfrom conans.util.files import rmdir\n\n\ndef export_pkg(cache, graph_manager, hook_manager, recorder, output,\n ref, source_folder, build_folder, package_folder, install_folder,\n graph_info, force):\n\n conan_file_path = cache.conanfile(ref)\n if not os.path.exists(conan_file_path):\n raise ConanException(\"Package recipe '%s' does not exist\" % str(ref))\n\n # The graph has to be loaded with build_mode=[ref.name], so that node is not tried\n # to be downloaded from remotes\n deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],\n check_updates=False, update=False,\n remote_name=None, recorder=recorder)\n # this is a bit tricky, but works. The root (virtual), has only 1 neighbor,\n # which is the exported pkg\n nodes = deps_graph.root.neighbors()\n conanfile = nodes[0].conanfile\n from conans.client.conan_api import existing_info_files\n if install_folder and existing_info_files(install_folder):\n load_deps_info(install_folder, conanfile, required=True)\n package_id = nodes[0].package_id\n output.info(\"Packaging to %s\" % package_id)\n pref = PackageReference(ref, package_id)\n dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)\n\n if os.path.exists(dest_package_folder):\n if force:\n rmdir(dest_package_folder)\n else:\n raise ConanException(\"Package already exists. 
Please use --force, -f to \"\n \"overwrite it\")\n\n recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash\n conanfile.info.recipe_hash = recipe_hash\n conanfile.develop = True\n if package_folder:\n packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,\n hook_manager, conan_file_path, ref)\n else:\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n with cache.package_layout(ref).update_metadata() as metadata:\n readed_manifest = FileTreeManifest.load(dest_package_folder)\n metadata.packages[package_id].revision = readed_manifest.summary_hash\n metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n\n recorder.package_exported(pref)\n", "path": "conans/client/cmd/export_pkg.py"}]} | 1,259 | 223 |
gh_patches_debug_17967 | rasdani/github-patches | git_diff | WeblateOrg__weblate-11641 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to set default notification setting for watched projects
### Describe the issue
Thanks for this great project. On the notifications side there is some room for improvement: coming from Transifex, you get a lot of useful notifications which encourage the user to finish the translation of a language when something changes. Would it be possible to set some defaults for the "New translation component was created" and "New string is available for translation" settings? IMO an instant notification should be sent out for these settings. Probably some more would be nice too.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Got to /accounts/profile/#notifications__1
2. Check the component and translation notifications
### Expected behavior
Some default settings to adjust, either in the GUI or by configuration variable.
### Screenshots
_No response_
### Exception traceback
_No response_
### How do you run Weblate?
Docker container
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
Kind of related to #5155.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/accounts/data.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from weblate.accounts.notifications import (
6 FREQ_INSTANT,
7 SCOPE_ADMIN,
8 SCOPE_ALL,
9 SCOPE_WATCHED,
10 )
11
12 DEFAULT_NOTIFICATIONS = [
13 (SCOPE_ALL, FREQ_INSTANT, "MentionCommentNotificaton"),
14 (SCOPE_WATCHED, FREQ_INSTANT, "LastAuthorCommentNotificaton"),
15 (SCOPE_WATCHED, FREQ_INSTANT, "MentionCommentNotificaton"),
16 (SCOPE_WATCHED, FREQ_INSTANT, "NewAnnouncementNotificaton"),
17 (SCOPE_ADMIN, FREQ_INSTANT, "MergeFailureNotification"),
18 (SCOPE_ADMIN, FREQ_INSTANT, "ParseErrorNotification"),
19 (SCOPE_ADMIN, FREQ_INSTANT, "NewTranslationNotificaton"),
20 (SCOPE_ADMIN, FREQ_INSTANT, "NewAlertNotificaton"),
21 (SCOPE_ADMIN, FREQ_INSTANT, "NewAnnouncementNotificaton"),
22 ]
23
24
25 def create_default_notifications(user) -> None:
26 for scope, frequency, notification in DEFAULT_NOTIFICATIONS:
27 user.subscription_set.get_or_create(
28 scope=scope, notification=notification, defaults={"frequency": frequency}
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/weblate/accounts/data.py b/weblate/accounts/data.py
--- a/weblate/accounts/data.py
+++ b/weblate/accounts/data.py
@@ -4,6 +4,7 @@
from weblate.accounts.notifications import (
FREQ_INSTANT,
+ FREQ_WEEKLY,
SCOPE_ADMIN,
SCOPE_ALL,
SCOPE_WATCHED,
@@ -14,6 +15,7 @@
(SCOPE_WATCHED, FREQ_INSTANT, "LastAuthorCommentNotificaton"),
(SCOPE_WATCHED, FREQ_INSTANT, "MentionCommentNotificaton"),
(SCOPE_WATCHED, FREQ_INSTANT, "NewAnnouncementNotificaton"),
+ (SCOPE_WATCHED, FREQ_WEEKLY, "NewStringNotificaton"),
(SCOPE_ADMIN, FREQ_INSTANT, "MergeFailureNotification"),
(SCOPE_ADMIN, FREQ_INSTANT, "ParseErrorNotification"),
(SCOPE_ADMIN, FREQ_INSTANT, "NewTranslationNotificaton"),
| {"golden_diff": "diff --git a/weblate/accounts/data.py b/weblate/accounts/data.py\n--- a/weblate/accounts/data.py\n+++ b/weblate/accounts/data.py\n@@ -4,6 +4,7 @@\n \n from weblate.accounts.notifications import (\n FREQ_INSTANT,\n+ FREQ_WEEKLY,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n@@ -14,6 +15,7 @@\n (SCOPE_WATCHED, FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n+ (SCOPE_WATCHED, FREQ_WEEKLY, \"NewStringNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n", "issue": "No way to set default notification setting for watched projects\n### Describe the issue\n\nThanks for this great project. On the notifications part there is some room for improvements as when you come from transifex, you get a lot of useful notifications which encourage user to finish the translation of a language when stuff changes. Would it be possible to set some defaults for the \"New translation component was created\" and \"New string is available for translation\" settings. As IMO for these settings an instant notification should be sent out. Probably some more would be nice too.\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Got to /accounts/profile/#notifications__1\r\n2. Check the component and translation notifications\n\n### Expected behavior\n\nSome default settings to adjust, either in the GUI or by configuration variable.\n\n### Screenshots\n\n_No response_\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nDocker container\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\nKind of related to #5155.\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom weblate.accounts.notifications import (\n FREQ_INSTANT,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n)\n\nDEFAULT_NOTIFICATIONS = [\n (SCOPE_ALL, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAlertNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n]\n\n\ndef create_default_notifications(user) -> None:\n for scope, frequency, notification in DEFAULT_NOTIFICATIONS:\n user.subscription_set.get_or_create(\n scope=scope, notification=notification, defaults={\"frequency\": frequency}\n )\n", "path": "weblate/accounts/data.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom weblate.accounts.notifications import (\n FREQ_INSTANT,\n FREQ_WEEKLY,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n)\n\nDEFAULT_NOTIFICATIONS = [\n (SCOPE_ALL, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, 
FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n (SCOPE_WATCHED, FREQ_WEEKLY, \"NewStringNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAlertNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n]\n\n\ndef create_default_notifications(user) -> None:\n for scope, frequency, notification in DEFAULT_NOTIFICATIONS:\n user.subscription_set.get_or_create(\n scope=scope, notification=notification, defaults={\"frequency\": frequency}\n )\n", "path": "weblate/accounts/data.py"}]} | 858 | 226 |
gh_patches_debug_28982 | rasdani/github-patches | git_diff | bokeh__bokeh-5327 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Examples with deprecation warnings
As of 0.12.3, the following examples have deprecation warnings
- plotting/file/hover.py
- plotting/file/tap.py
--- END ISSUE ---
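
For context, such warnings in this Bokeh version typically come from passing literal value arrays to a glyph method while also supplying a `source`; the usual remedy is to reference the source's column names instead (or to drop the unused source). A minimal, hedged sketch, assuming a ColumnDataSource with `x`, `y`, `radius` and `colors` columns as in the hover example below:

```python
# Sketch only, not the exact patch: refer to the source's columns by name
# instead of passing the raw arrays alongside `source`.
p.circle(x='x', y='y', radius='radius', source=source,
         fill_color='colors', fill_alpha=0.6, line_color=None)
```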
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/file/tap.py`
Content:
```
1 import itertools
2 import numpy as np
3
4 from bokeh.plotting import ColumnDataSource, figure, show, output_file
5 from bokeh.models import TapTool
6
7 xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
8 x = xx.flatten()
9 y = yy.flatten()
10 N = len(x)
11 inds = [str(i) for i in np.arange(N)]
12 radii = np.random.random(size=N)*0.4 + 1.7
13 colors = [
14 "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
15 ]
16
17 source = ColumnDataSource(data=dict(
18 x=x,
19 y=y,
20 radius=radii,
21 colors=colors,
22 foo=list(itertools.permutations("abcdef"))[:N],
23 bar=np.random.normal(size=N),
24 ))
25
26 TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,tap,save"
27
28 p = figure(title="Tappy Scatter", tools=TOOLS)
29
30 cr = p.circle(x, y, radius=radii, source=source,
31 fill_color=colors, fill_alpha=0.6, line_color=None)
32
33 tr = p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
34 text_baseline="middle", text_align="center")
35
36 # in the browser console, you will see messages when circles are clicked
37 tool = p.select_one(TapTool).renderers = [cr]
38
39 output_file("tap.html", title="tap.py example")
40
41 show(p) # open a browser
42
```
Path: `examples/plotting/file/hover.py`
Content:
```
1 import itertools
2
3 import numpy as np
4
5 from bokeh.plotting import ColumnDataSource, figure, show, output_file
6 from bokeh.models import HoverTool
7
8 TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,hover,save"
9
10 xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
11 x = xx.flatten()
12 y = yy.flatten()
13 N = len(x)
14 inds = [str(i) for i in np.arange(N)]
15 radii = np.random.random(size=N)*0.4 + 1.7
16 colors = [
17 "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
18 ]
19
20 source = ColumnDataSource(data=dict(
21 x=x,
22 y=y,
23 radius=radii,
24 colors=colors,
25 foo=list(itertools.permutations("abcdef"))[:N],
26 bar=np.random.normal(size=N),
27 ))
28
29 p = figure(title="Hoverful Scatter", tools=TOOLS)
30
31 p.circle(x, y, radius=radii, source=source,
32 fill_color=colors, fill_alpha=0.6, line_color=None)
33
34 p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
35 text_baseline="middle", text_align="center")
36
37 hover = p.select_one(HoverTool).tooltips = [
38 ("index", "$index"),
39 ("(x,y)", "($x, $y)"),
40 ("radius", "@radius"),
41 ("fill color", "$color[hex, swatch]:fill_color"),
42 ("foo", "@foo"),
43 ("bar", "@bar"),
44 ]
45
46 output_file("hover.html", title="hover.py example")
47
48 show(p) # open a browser
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plotting/file/hover.py b/examples/plotting/file/hover.py
--- a/examples/plotting/file/hover.py
+++ b/examples/plotting/file/hover.py
@@ -28,8 +28,8 @@
p = figure(title="Hoverful Scatter", tools=TOOLS)
-p.circle(x, y, radius=radii, source=source,
- fill_color=colors, fill_alpha=0.6, line_color=None)
+p.circle(x='x', y='y', radius='radius', source=source,
+ fill_color='colors', fill_alpha=0.6, line_color=None)
p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
text_baseline="middle", text_align="center")
diff --git a/examples/plotting/file/tap.py b/examples/plotting/file/tap.py
--- a/examples/plotting/file/tap.py
+++ b/examples/plotting/file/tap.py
@@ -1,7 +1,6 @@
-import itertools
import numpy as np
-from bokeh.plotting import ColumnDataSource, figure, show, output_file
+from bokeh.plotting import figure, show, output_file
from bokeh.models import TapTool
xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
@@ -14,20 +13,11 @@
"#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
-source = ColumnDataSource(data=dict(
- x=x,
- y=y,
- radius=radii,
- colors=colors,
- foo=list(itertools.permutations("abcdef"))[:N],
- bar=np.random.normal(size=N),
-))
-
TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,tap,save"
p = figure(title="Tappy Scatter", tools=TOOLS)
-cr = p.circle(x, y, radius=radii, source=source,
+cr = p.circle(x, y, radius=radii,
fill_color=colors, fill_alpha=0.6, line_color=None)
tr = p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
| {"golden_diff": "diff --git a/examples/plotting/file/hover.py b/examples/plotting/file/hover.py\n--- a/examples/plotting/file/hover.py\n+++ b/examples/plotting/file/hover.py\n@@ -28,8 +28,8 @@\n \n p = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n \n-p.circle(x, y, radius=radii, source=source,\n- fill_color=colors, fill_alpha=0.6, line_color=None)\n+p.circle(x='x', y='y', radius='radius', source=source,\n+ fill_color='colors', fill_alpha=0.6, line_color=None)\n \n p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\ndiff --git a/examples/plotting/file/tap.py b/examples/plotting/file/tap.py\n--- a/examples/plotting/file/tap.py\n+++ b/examples/plotting/file/tap.py\n@@ -1,7 +1,6 @@\n-import itertools\n import numpy as np\n \n-from bokeh.plotting import ColumnDataSource, figure, show, output_file\n+from bokeh.plotting import figure, show, output_file\n from bokeh.models import TapTool\n \n xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\n@@ -14,20 +13,11 @@\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n ]\n \n-source = ColumnDataSource(data=dict(\n- x=x,\n- y=y,\n- radius=radii,\n- colors=colors,\n- foo=list(itertools.permutations(\"abcdef\"))[:N],\n- bar=np.random.normal(size=N),\n-))\n-\n TOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n \n p = figure(title=\"Tappy Scatter\", tools=TOOLS)\n \n-cr = p.circle(x, y, radius=radii, source=source,\n+cr = p.circle(x, y, radius=radii,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n \n tr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n", "issue": "Examples with deprecation warnings\nAs of 0.12.3, the following examples have deprecation warnings\n- plotting/file/hover.py\n- plotting/file/tap.py\n\n", "before_files": [{"content": "import itertools\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import TapTool\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n\np = figure(title=\"Tappy Scatter\", tools=TOOLS)\n\ncr = p.circle(x, y, radius=radii, source=source,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\ntr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\n# in the browser console, you will see messages when circles are clicked\ntool = p.select_one(TapTool).renderers = [cr]\n\noutput_file(\"tap.html\", title=\"tap.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/tap.py"}, {"content": "import itertools\n\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import HoverTool\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,hover,save\"\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = 
ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\np = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n\np.circle(x, y, radius=radii, source=source,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\np.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\nhover = p.select_one(HoverTool).tooltips = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"radius\", \"@radius\"),\n (\"fill color\", \"$color[hex, swatch]:fill_color\"),\n (\"foo\", \"@foo\"),\n (\"bar\", \"@bar\"),\n]\n\noutput_file(\"hover.html\", title=\"hover.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/hover.py"}], "after_files": [{"content": "import numpy as np\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import TapTool\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n\np = figure(title=\"Tappy Scatter\", tools=TOOLS)\n\ncr = p.circle(x, y, radius=radii,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\ntr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\n# in the browser console, you will see messages when circles are clicked\ntool = p.select_one(TapTool).renderers = [cr]\n\noutput_file(\"tap.html\", title=\"tap.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/tap.py"}, {"content": "import itertools\n\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import HoverTool\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,hover,save\"\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\np = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n\np.circle(x='x', y='y', radius='radius', source=source,\n fill_color='colors', fill_alpha=0.6, line_color=None)\n\np.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\nhover = p.select_one(HoverTool).tooltips = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"radius\", \"@radius\"),\n (\"fill color\", \"$color[hex, swatch]:fill_color\"),\n (\"foo\", \"@foo\"),\n (\"bar\", \"@bar\"),\n]\n\noutput_file(\"hover.html\", title=\"hover.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/hover.py"}]} | 1,261 | 531 |
gh_patches_debug_96 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-1422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
py.typed missing in pip install
### Versions
- Python: 3.11
- OS: macOS
- Pymodbus: 3.2.0 (pip install)
- Modbus Hardware (if used):
### Description
Running mypy on my project complains about missing stubs:
```
artisanlib/modbusport.py:68: error: Skipping analyzing "pymodbus.constants": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:69: error: Skipping analyzing "pymodbus.payload": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:241: error: Skipping analyzing "pymodbus.client": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:385: error: Skipping analyzing "pymodbus.pdu": module is installed, but missing library stubs or py.typed marker [import]
```
despite all your nice work on adding type annotations. The file py.typed did not get installed along with the package via
```
# sudo -H python3 -m pip install pymodbus --upgrade
```
```
# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus
# ls ./py.typed
ls: ./py.typed: No such file or directory
```
I added it as follows and the mypy errors went away.
```
# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus
# sudo touch py.typed
```
--- END ISSUE ---
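
The report boils down to the PEP 561 marker file not being shipped with the distribution. As a hedged illustration (not necessarily the project's actual fix), a setuptools-based package usually declares the marker as package data, assuming `pymodbus/py.typed` exists in the source tree:

```python
# Illustrative setup.py fragment only; the surrounding arguments are elided.
from setuptools import setup

setup(
    # ... existing install_requires / extras_require arguments ...
    package_data={"pymodbus": ["py.typed"]},
)
```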
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2 """Installs pymodbus using setuptools."""
3
4
5 # --------------------------------------------------------------------------- #
6 # initialization
7 # --------------------------------------------------------------------------- #
8 from setuptools import setup
9
10
11 dependencies = {}
12 with open("requirements.txt") as reqs:
13 option = None
14 for line in reqs.read().split("\n"):
15 if line == "":
16 option = None
17 elif line.startswith("# install:"):
18 option = line.split(":")[1]
19 dependencies[option] = []
20 elif not line.startswith("#") and option:
21 dependencies[option].append(line)
22
23 install_req = dependencies["required"]
24 del dependencies["required"]
25
26
27 # --------------------------------------------------------------------------- #
28 # configuration
29 # --------------------------------------------------------------------------- #
30 setup(
31 install_requires=install_req,
32 extras_require=dependencies,
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,4 +30,5 @@
setup(
install_requires=install_req,
extras_require=dependencies,
+ package_data={"pymodbus": ["py.typed"]},
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,4 +30,5 @@\n setup(\n install_requires=install_req,\n extras_require=dependencies,\n+ package_data={\"pymodbus\": [\"py.typed\"]},\n )\n", "issue": "py.typed missing in pip install\n\r\n### Versions\r\n\r\n- Python: 3.11\r\n- OS: macOS\r\n- Pymodbus: 3.2.0 (pip install)\r\n- Modbus Hardware (if used):\r\n\r\n### Description\r\n\r\nrunning mypy on my project complains about missing \r\n\r\n```\r\nartisanlib/modbusport.py:68: error: Skipping analyzing \"pymodbus.constants\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:69: error: Skipping analyzing \"pymodbus.payload\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:241: error: Skipping analyzing \"pymodbus.client\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:385: error: Skipping analyzing \"pymodbus.pdu\": module is installed, but missing library stubs or py.typed marker [import]\r\n```\r\n\r\ndespite all your nice work on adding type annotations. The file py.typed id not get installed along via\r\n\r\n```\r\n# sudo -H python3 -m pip install pymodbus --upgrade\r\n```\r\n\r\n```\r\n# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus\r\n# ls ./py.typed\r\nls: ./py.typed: No such file or directory\r\n```\r\n\r\nI added it as follows and the mypy errors went away.\r\n\r\n```\r\n# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus\r\n# sudo touch py.typed\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Installs pymodbus using setuptools.\"\"\"\n\n\n# --------------------------------------------------------------------------- #\n# initialization\n# --------------------------------------------------------------------------- #\nfrom setuptools import setup\n\n\ndependencies = {}\nwith open(\"requirements.txt\") as reqs:\n option = None\n for line in reqs.read().split(\"\\n\"):\n if line == \"\":\n option = None\n elif line.startswith(\"# install:\"):\n option = line.split(\":\")[1]\n dependencies[option] = []\n elif not line.startswith(\"#\") and option:\n dependencies[option].append(line)\n\ninstall_req = dependencies[\"required\"]\ndel dependencies[\"required\"]\n\n\n# --------------------------------------------------------------------------- #\n# configuration\n# --------------------------------------------------------------------------- #\nsetup(\n install_requires=install_req,\n extras_require=dependencies,\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Installs pymodbus using setuptools.\"\"\"\n\n\n# --------------------------------------------------------------------------- #\n# initialization\n# --------------------------------------------------------------------------- #\nfrom setuptools import setup\n\n\ndependencies = {}\nwith open(\"requirements.txt\") as reqs:\n option = None\n for line in reqs.read().split(\"\\n\"):\n if line == \"\":\n option = None\n elif line.startswith(\"# install:\"):\n option = line.split(\":\")[1]\n dependencies[option] = []\n elif not line.startswith(\"#\") and option:\n dependencies[option].append(line)\n\ninstall_req = dependencies[\"required\"]\ndel dependencies[\"required\"]\n\n\n# --------------------------------------------------------------------------- #\n# configuration\n# 
--------------------------------------------------------------------------- #\nsetup(\n install_requires=install_req,\n extras_require=dependencies,\n package_data={\"pymodbus\": [\"py.typed\"]},\n)\n", "path": "setup.py"}]} | 847 | 64 |
gh_patches_debug_33087 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-1096 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API: parents endpoint
Endpoint to get the parents of a page up to the root. This must be compatible with Integreat API v3.
https://github.com/Integreat/cms/wiki/REST-APIv3-Documentation#parents-of-a-post
@maxammann @Simple-codinger is this used by any app? Otherwise we could move this to another milestone.
--- END ISSUE ---
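
A hedged sketch of one way such an endpoint could look, reusing `get_single_page` and `transform_page` from the pages module shown below; `get_ancestors()` is an assumption based on the MPTT-style tree fields (`lft`) used in that module and may differ from the project's actual API:

```python
# Hypothetical sketch only; not the merged implementation.
@json_response
# pylint: disable=unused-argument
def parents(request, region_slug, language_slug):
    try:
        current_page = get_single_page(request, language_slug)
    except RuntimeError as e:
        return JsonResponse({"error": str(e)}, status=400)
    result = []
    # Assumed MPTT helper: ancestors ordered from the root down to the direct parent
    for ancestor in current_page.get_ancestors():
        public_translation = ancestor.get_public_translation(language_slug)
        if not public_translation:
            raise Http404("No public translation found for an ancestor page.")
        result.append(transform_page(public_translation))
    return JsonResponse(result, safe=False)
```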
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `integreat_cms/api/v3/pages.py`
Content:
```
1 """
2 pages API endpoint
3 """
4 from django.conf import settings
5 from django.http import JsonResponse, Http404
6 from django.shortcuts import get_object_or_404
7
8 from ...cms.models import Region, Page
9 from ..decorators import json_response
10
11
12 def transform_page(page_translation):
13 """
14 Function to create a dict from a single page_translation Object.
15
16 :param page_translation: single page translation object
17 :type page_translation: ~integreat_cms.cms.models.pages.page_translation.PageTranslation
18
19 :return: data necessary for API
20 :rtype: dict
21 """
22 if page_translation.page.icon:
23 thumbnail = settings.BASE_URL + page_translation.page.icon.url
24 else:
25 thumbnail = None
26 if page_translation.page.parent:
27 parent = {
28 "id": page_translation.page.parent.id,
29 "url": page_translation.page.parent.get_translation(
30 page_translation.language.slug
31 ).backend_base_link,
32 "path": "/"
33 + page_translation.page.parent.get_translation(
34 page_translation.language.slug
35 ).permalink
36 + "/",
37 }
38 else:
39 parent = {
40 "id": 0,
41 "url": None,
42 "path": None,
43 }
44 return {
45 "id": page_translation.id,
46 "url": page_translation.backend_base_link,
47 "path": "/" + page_translation.permalink + "/",
48 "title": page_translation.title,
49 "modified_gmt": page_translation.combined_last_updated,
50 "excerpt": page_translation.text,
51 "content": page_translation.combined_text,
52 "parent": parent,
53 "order": page_translation.page.lft, # use left edge indicator of mptt model for order
54 "available_languages": page_translation.available_languages,
55 "thumbnail": thumbnail,
56 "hash": None,
57 }
58
59
60 @json_response
61 # pylint: disable=unused-argument
62 def pages(request, region_slug, language_slug):
63 """
64 Function to iterate through all non-archived pages of a region and return them as JSON.
65
66 :param request: Django request
67 :type request: ~django.http.HttpRequest
68 :param region_slug: slug of a region
69 :type region_slug: str
70 :param language_slug: language slug
71 :type language_slug: str
72
73 :return: JSON object according to APIv3 pages endpoint definition
74 :rtype: ~django.http.JsonResponse
75 """
76 region = Region.get_current_region(request)
77 result = []
78 for page in region.get_pages():
79 page_translation = page.get_public_translation(language_slug)
80 if page_translation:
81 result.append(transform_page(page_translation))
82 return JsonResponse(
83 result, safe=False
84 ) # Turn off Safe-Mode to allow serializing arrays
85
86
87 def get_single_page(request, language_slug):
88 """
89 Helper function returning the desired page or a 404 if the
90 requested page does not exist.
91
92 :param request: The request that has been sent to the Django server
93 :type request: ~django.http.HttpRequest
94
95 :param language_slug: Code to identify the desired language
96 :type language_slug: str
97
98 :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
99
100 :raises RuntimeError: If neither the id nor the url parameter is given
101
102 :return: the requested page
103 :rtype: ~integreat_cms.cms.models.pages.page.Page
104 """
105 region = Region.get_current_region(request)
106
107 if request.GET.get("id"):
108 page = get_object_or_404(region.pages, id=request.GET.get("id"))
109
110 elif request.GET.get("url"):
111 # Strip leading and trailing slashes to avoid ambiguous urls
112 url = request.GET.get("url").strip("/")
113 # The last path component of the url is the page translation slug
114 page_translation_slug = url.split("/")[-1]
115 # Get page by filtering for translation slug and translation language slug
116 filtered_pages = region.pages.filter(
117 translations__slug=page_translation_slug,
118 translations__language__slug=language_slug,
119 ).distinct()
120
121 if len(filtered_pages) != 1:
122 raise Http404("No matching page translation found for url.")
123 page = filtered_pages[0]
124
125 else:
126 raise RuntimeError("Either the id or the url parameter is required.")
127
128 return page
129
130
131 @json_response
132 # pylint: disable=unused-argument
133 def single_page(request, region_slug, language_slug):
134 """
135 View function returning the desired page as a JSON or a 404 if the
136 requested page does not exist.
137
138 :param request: The request that has been sent to the Django server
139 :type request: ~django.http.HttpRequest
140
141 :param region_slug: Slug defining the region
142 :type region_slug: str
143
144 :param language_slug: Code to identify the desired language
145 :type language_slug: str
146
147 :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
148
149 :return: JSON with the requested page and a HTTP status 200.
150 :rtype: ~django.http.JsonResponse
151 """
152 try:
153 page = get_single_page(request, language_slug)
154 except RuntimeError as e:
155 return JsonResponse({"error": str(e)}, status=400)
156 # Get most recent public revision of the page
157 page_translation = page.get_public_translation(language_slug)
158 if page_translation:
159 return JsonResponse(transform_page(page_translation), safe=False)
160
161 raise Http404("No Page matches the given url or id.")
162
163
164 @json_response
165 # pylint: disable=unused-argument
166 def children(request, region_slug, language_slug):
167 """
168 Retrieves all children for a single page
169
170 :param request: The request that has been sent to the Django server
171 :type request: ~django.http.HttpRequest
172
173 :param region_slug: Slug defining the region
174 :type region_slug: str
175
176 :param language_slug: Code to identify the desired language
177 :type language_slug: str
178
179 :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
180
181 :return: JSON with the requested page descendants
182 :rtype: ~django.http.JsonResponse
183 """
184 depth = int(request.GET.get("depth", 1))
185 try:
186 # try to get a single ancestor page based on the requests query string
187 root_pages = [get_single_page(request, language_slug)]
188 except RuntimeError:
189 # if neither id nor url is set then get all root pages
190 root_pages = Page.get_root_pages(region_slug)
191 # simulate a virtual root node for WP compatibility
192 # so that depth = 1 returns only those pages without parents (immediate children of this virtual root page)
193 # like in wordpress depth = 0 will return no results in this case
194 depth = depth - 1
195 result = []
196 for root in root_pages:
197 descendants = root.get_descendants_max_depth(True, depth)
198 for descendant in descendants:
199 public_translation = descendant.get_public_translation(language_slug)
200 if public_translation:
201 result.append(transform_page(public_translation))
202 return JsonResponse(result, safe=False)
203
```
Path: `integreat_cms/api/urls.py`
Content:
```
1 """
2 Expansion of API-Endpoints for the CMS
3 """
4 from django.conf.urls import include, url
5
6 from .v3.events import events
7 from .v3.feedback import (
8 page_feedback,
9 search_result_feedback,
10 region_feedback,
11 offer_feedback,
12 offer_list_feedback,
13 event_list_feedback,
14 event_feedback,
15 poi_feedback,
16 map_feedback,
17 imprint_page_feedback,
18 legacy_feedback_endpoint,
19 )
20 from .v3.imprint import imprint
21 from .v3.languages import languages
22 from .v3.locations import locations
23 from .v3.pages import pages, children, single_page
24 from .v3.pdf_export import pdf_export
25 from .v3.push_notifications import sent_push_notifications
26 from .v3.regions import regions, liveregions, hiddenregions, pushnew
27 from .v3.offers import offers
28
29 content_api_urlpatterns = [
30 url(r"^pages/?$", pages, name="api_pages"),
31 url(r"^locations/?$", locations, name="api_locations"),
32 url(r"^events/?$", events, name="api_events"),
33 url(
34 r"^(?:page|post)/?$",
35 single_page,
36 name="api_single_page",
37 ),
38 url(r"^children/?$", children, name="api_children"),
39 url(
40 r"^pdf/?$",
41 pdf_export,
42 name="api_pdf_export",
43 ),
44 url(
45 r"^sent_push_notifications/?$",
46 sent_push_notifications,
47 name="api_sent_push_notifications",
48 ),
49 url(
50 r"^(?:imprint|disclaimer)/?$",
51 imprint,
52 name="api_imprint",
53 ),
54 url(r"^(?:offers|extras)/?$", offers, name="api_offers"),
55 url(
56 r"^feedback/",
57 include(
58 [
59 url(
60 r"^$",
61 legacy_feedback_endpoint.legacy_feedback_endpoint,
62 name="api_legacy_feedback_endpoint",
63 ),
64 url(
65 r"^categories/?$",
66 region_feedback.region_feedback,
67 name="api_region_feedback",
68 ),
69 url(
70 r"^page/?$",
71 page_feedback.page_feedback,
72 name="api_page_feedback",
73 ),
74 url(
75 r"^poi/?$",
76 poi_feedback.poi_feedback,
77 name="api_poi_feedback",
78 ),
79 url(
80 r"^event/?$",
81 event_feedback.event_feedback,
82 name="api_event_feedback",
83 ),
84 url(
85 r"^events/?$",
86 event_list_feedback.event_list_feedback,
87 name="api_event_list_feedback",
88 ),
89 url(
90 r"^imprint-page/?$",
91 imprint_page_feedback.imprint_page_feedback,
92 name="api_imprint_page_feedbacks",
93 ),
94 url(
95 r"^map/?$",
96 map_feedback.map_feedback,
97 name="api_map_feedback",
98 ),
99 url(
100 r"^search/?$",
101 search_result_feedback.search_result_feedback,
102 name="api_search_result_feedback",
103 ),
104 url(
105 r"^(?:extras|offers)/?$",
106 offer_list_feedback.offer_list_feedback,
107 name="api_offer_list_feedback",
108 ),
109 url(
110 r"^(?:extra|offer)/?$",
111 offer_feedback.offer_feedback,
112 name="api_offer_feedback",
113 ),
114 ]
115 ),
116 ),
117 ]
118
119 region_api_urlpatterns = [
120 url(r"^$", regions, name="api_regions"),
121 url(r"^live/?$", liveregions, name="api_regions_live"),
122 url(r"^hidden/?$", hiddenregions, name="api_regions_hidden"),
123 url(r"^pushnew/?$", pushnew, name="api_pushnew"),
124 ]
125
126
127 urlpatterns = [
128 url(r"^api/regions/", include(region_api_urlpatterns)),
129 url(r"^wp-json/extensions/v3/sites/", include(region_api_urlpatterns)),
130 url(
131 r"^api/(?P<region_slug>[-\w]+)/",
132 include(
133 [
134 url(r"^languages/?$", languages, name="api_languages"),
135 url(r"^(?:offers|extras)/?$", offers, name="api_offers"),
136 url(r"^(?P<language_slug>[-\w]+)/", include(content_api_urlpatterns)),
137 ]
138 ),
139 ),
140 url(
141 r"^(?P<region_slug>[-\w]+)/",
142 include(
143 [
144 url(
145 r"^de/wp-json/extensions/v3/languages/?$",
146 languages,
147 name="api_languages",
148 ),
149 url(
150 r"^(?P<language_slug>[-\w]+)/wp-json/extensions/v3/",
151 include(content_api_urlpatterns),
152 ),
153 ]
154 ),
155 ),
156 ]
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py
--- a/integreat_cms/api/urls.py
+++ b/integreat_cms/api/urls.py
@@ -20,7 +20,7 @@
from .v3.imprint import imprint
from .v3.languages import languages
from .v3.locations import locations
-from .v3.pages import pages, children, single_page
+from .v3.pages import pages, children, parents, single_page
from .v3.pdf_export import pdf_export
from .v3.push_notifications import sent_push_notifications
from .v3.regions import regions, liveregions, hiddenregions, pushnew
@@ -36,6 +36,7 @@
name="api_single_page",
),
url(r"^children/?$", children, name="api_children"),
+ url(r"^parents/?$", parents, name="api_parents"),
url(
r"^pdf/?$",
pdf_export,
diff --git a/integreat_cms/api/v3/pages.py b/integreat_cms/api/v3/pages.py
--- a/integreat_cms/api/v3/pages.py
+++ b/integreat_cms/api/v3/pages.py
@@ -200,3 +200,33 @@
if public_translation:
result.append(transform_page(public_translation))
return JsonResponse(result, safe=False)
+
+
+@json_response
+# pylint: disable=unused-argument
+def parents(request, region_slug, language_slug):
+ """
+ Retrieves all ancestors (parent and all nodes up to the root node) of a page
+
+ :param request: The request that has been sent to the Django server
+ :type request: ~django.http.HttpRequest
+
+ :param region_slug: Slug defining the region
+ :type region_slug: str
+
+ :param language_slug: Code to identify the desired language
+ :type language_slug: str
+
+ :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
+
+ :return: JSON with the requested page ancestors
+ :rtype: ~django.http.JsonResponse
+ """
+ current_page = get_single_page(request, language_slug)
+ result = []
+ for ancestor in current_page.get_ancestors(include_self=False):
+ public_translation = ancestor.get_public_translation(language_slug)
+ if not public_translation:
+ raise Http404("No Page matches the given url or id.")
+ result.append(transform_page(public_translation))
+ return JsonResponse(result, safe=False)
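
A minimal sketch of how the new `parents` endpoint wired up by this patch could be exercised through Django's test client. The URL shape `/api/<region_slug>/<language_slug>/parents/` follows the `content_api_urlpatterns` include shown above; the region slug, language slug and page id below are made-up placeholder values, and the asserted keys are a subset of what `transform_page` emits.

```python
from django.test import Client

client = Client()
# "augsburg", "de" and id=42 are placeholders, not real fixture data.
response = client.get("/api/augsburg/de/parents/", {"id": 42})

if response.status_code == 200:
    for ancestor in response.json():
        # transform_page() emits these keys for every ancestor it serializes.
        assert {"id", "url", "path", "title", "parent"} <= ancestor.keys()
else:
    # A 404 is the expected outcome when the placeholder page id does not exist.
    assert response.status_code == 404
```
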
| {"golden_diff": "diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py\n--- a/integreat_cms/api/urls.py\n+++ b/integreat_cms/api/urls.py\n@@ -20,7 +20,7 @@\n from .v3.imprint import imprint\n from .v3.languages import languages\n from .v3.locations import locations\n-from .v3.pages import pages, children, single_page\n+from .v3.pages import pages, children, parents, single_page\n from .v3.pdf_export import pdf_export\n from .v3.push_notifications import sent_push_notifications\n from .v3.regions import regions, liveregions, hiddenregions, pushnew\n@@ -36,6 +36,7 @@\n name=\"api_single_page\",\n ),\n url(r\"^children/?$\", children, name=\"api_children\"),\n+ url(r\"^parents/?$\", parents, name=\"api_parents\"),\n url(\n r\"^pdf/?$\",\n pdf_export,\ndiff --git a/integreat_cms/api/v3/pages.py b/integreat_cms/api/v3/pages.py\n--- a/integreat_cms/api/v3/pages.py\n+++ b/integreat_cms/api/v3/pages.py\n@@ -200,3 +200,33 @@\n if public_translation:\n result.append(transform_page(public_translation))\n return JsonResponse(result, safe=False)\n+\n+\n+@json_response\n+# pylint: disable=unused-argument\n+def parents(request, region_slug, language_slug):\n+ \"\"\"\n+ Retrieves all ancestors (parent and all nodes up to the root node) of a page\n+\n+ :param request: The request that has been sent to the Django server\n+ :type request: ~django.http.HttpRequest\n+\n+ :param region_slug: Slug defining the region\n+ :type region_slug: str\n+\n+ :param language_slug: Code to identify the desired language\n+ :type language_slug: str\n+\n+ :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n+\n+ :return: JSON with the requested page ancestors\n+ :rtype: ~django.http.JsonResponse\n+ \"\"\"\n+ current_page = get_single_page(request, language_slug)\n+ result = []\n+ for ancestor in current_page.get_ancestors(include_self=False):\n+ public_translation = ancestor.get_public_translation(language_slug)\n+ if not public_translation:\n+ raise Http404(\"No Page matches the given url or id.\")\n+ result.append(transform_page(public_translation))\n+ return JsonResponse(result, safe=False)\n", "issue": "API: parents endpoint\nEndpoint to get parents of a page up to the root. This must be compatible to Integreat API v3.\r\n\r\nhttps://github.com/Integreat/cms/wiki/REST-APIv3-Documentation#parents-of-a-post\r\n\r\n@maxammann @Simple-codinger is this used by any app? 
Otherwise we could move this to another milestone.\n", "before_files": [{"content": "\"\"\"\npages API endpoint\n\"\"\"\nfrom django.conf import settings\nfrom django.http import JsonResponse, Http404\nfrom django.shortcuts import get_object_or_404\n\nfrom ...cms.models import Region, Page\nfrom ..decorators import json_response\n\n\ndef transform_page(page_translation):\n \"\"\"\n Function to create a dict from a single page_translation Object.\n\n :param page_translation: single page translation object\n :type page_translation: ~integreat_cms.cms.models.pages.page_translation.PageTranslation\n\n :return: data necessary for API\n :rtype: dict\n \"\"\"\n if page_translation.page.icon:\n thumbnail = settings.BASE_URL + page_translation.page.icon.url\n else:\n thumbnail = None\n if page_translation.page.parent:\n parent = {\n \"id\": page_translation.page.parent.id,\n \"url\": page_translation.page.parent.get_translation(\n page_translation.language.slug\n ).backend_base_link,\n \"path\": \"/\"\n + page_translation.page.parent.get_translation(\n page_translation.language.slug\n ).permalink\n + \"/\",\n }\n else:\n parent = {\n \"id\": 0,\n \"url\": None,\n \"path\": None,\n }\n return {\n \"id\": page_translation.id,\n \"url\": page_translation.backend_base_link,\n \"path\": \"/\" + page_translation.permalink + \"/\",\n \"title\": page_translation.title,\n \"modified_gmt\": page_translation.combined_last_updated,\n \"excerpt\": page_translation.text,\n \"content\": page_translation.combined_text,\n \"parent\": parent,\n \"order\": page_translation.page.lft, # use left edge indicator of mptt model for order\n \"available_languages\": page_translation.available_languages,\n \"thumbnail\": thumbnail,\n \"hash\": None,\n }\n\n\n@json_response\n# pylint: disable=unused-argument\ndef pages(request, region_slug, language_slug):\n \"\"\"\n Function to iterate through all non-archived pages of a region and return them as JSON.\n\n :param request: Django request\n :type request: ~django.http.HttpRequest\n :param region_slug: slug of a region\n :type region_slug: str\n :param language_slug: language slug\n :type language_slug: str\n\n :return: JSON object according to APIv3 pages endpoint definition\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n result = []\n for page in region.get_pages():\n page_translation = page.get_public_translation(language_slug)\n if page_translation:\n result.append(transform_page(page_translation))\n return JsonResponse(\n result, safe=False\n ) # Turn off Safe-Mode to allow serializing arrays\n\n\ndef get_single_page(request, language_slug):\n \"\"\"\n Helper function returning the desired page or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :raises RuntimeError: If neither the id nor the url parameter is given\n\n :return: the requested page\n :rtype: ~integreat_cms.cms.models.pages.page.Page\n \"\"\"\n region = Region.get_current_region(request)\n\n if request.GET.get(\"id\"):\n page = get_object_or_404(region.pages, id=request.GET.get(\"id\"))\n\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n # The last 
path component of the url is the page translation slug\n page_translation_slug = url.split(\"/\")[-1]\n # Get page by filtering for translation slug and translation language slug\n filtered_pages = region.pages.filter(\n translations__slug=page_translation_slug,\n translations__language__slug=language_slug,\n ).distinct()\n\n if len(filtered_pages) != 1:\n raise Http404(\"No matching page translation found for url.\")\n page = filtered_pages[0]\n\n else:\n raise RuntimeError(\"Either the id or the url parameter is required.\")\n\n return page\n\n\n@json_response\n# pylint: disable=unused-argument\ndef single_page(request, region_slug, language_slug):\n \"\"\"\n View function returning the desired page as a JSON or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: JSON with the requested page and a HTTP status 200.\n :rtype: ~django.http.JsonResponse\n \"\"\"\n try:\n page = get_single_page(request, language_slug)\n except RuntimeError as e:\n return JsonResponse({\"error\": str(e)}, status=400)\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_slug)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n raise Http404(\"No Page matches the given url or id.\")\n\n\n@json_response\n# pylint: disable=unused-argument\ndef children(request, region_slug, language_slug):\n \"\"\"\n Retrieves all children for a single page\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: JSON with the requested page descendants\n :rtype: ~django.http.JsonResponse\n \"\"\"\n depth = int(request.GET.get(\"depth\", 1))\n try:\n # try to get a single ancestor page based on the requests query string\n root_pages = [get_single_page(request, language_slug)]\n except RuntimeError:\n # if neither id nor url is set then get all root pages\n root_pages = Page.get_root_pages(region_slug)\n # simulate a virtual root node for WP compatibility\n # so that depth = 1 returns only those pages without parents (immediate children of this virtual root page)\n # like in wordpress depth = 0 will return no results in this case\n depth = depth - 1\n result = []\n for root in root_pages:\n descendants = root.get_descendants_max_depth(True, depth)\n for descendant in descendants:\n public_translation = descendant.get_public_translation(language_slug)\n if public_translation:\n result.append(transform_page(public_translation))\n return JsonResponse(result, safe=False)\n", "path": "integreat_cms/api/v3/pages.py"}, {"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.conf.urls import include, url\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n 
event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions, pushnew\nfrom .v3.offers import offers\n\ncontent_api_urlpatterns = [\n url(r\"^pages/?$\", pages, name=\"api_pages\"),\n url(r\"^locations/?$\", locations, name=\"api_locations\"),\n url(r\"^events/?$\", events, name=\"api_events\"),\n url(\n r\"^(?:page|post)/?$\",\n single_page,\n name=\"api_single_page\",\n ),\n url(r\"^children/?$\", children, name=\"api_children\"),\n url(\n r\"^pdf/?$\",\n pdf_export,\n name=\"api_pdf_export\",\n ),\n url(\n r\"^sent_push_notifications/?$\",\n sent_push_notifications,\n name=\"api_sent_push_notifications\",\n ),\n url(\n r\"^(?:imprint|disclaimer)/?$\",\n imprint,\n name=\"api_imprint\",\n ),\n url(r\"^(?:offers|extras)/?$\", offers, name=\"api_offers\"),\n url(\n r\"^feedback/\",\n include(\n [\n url(\n r\"^$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"api_legacy_feedback_endpoint\",\n ),\n url(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"api_region_feedback\",\n ),\n url(\n r\"^page/?$\",\n page_feedback.page_feedback,\n name=\"api_page_feedback\",\n ),\n url(\n r\"^poi/?$\",\n poi_feedback.poi_feedback,\n name=\"api_poi_feedback\",\n ),\n url(\n r\"^event/?$\",\n event_feedback.event_feedback,\n name=\"api_event_feedback\",\n ),\n url(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n name=\"api_event_list_feedback\",\n ),\n url(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"api_imprint_page_feedbacks\",\n ),\n url(\n r\"^map/?$\",\n map_feedback.map_feedback,\n name=\"api_map_feedback\",\n ),\n url(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"api_search_result_feedback\",\n ),\n url(\n r\"^(?:extras|offers)/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"api_offer_list_feedback\",\n ),\n url(\n r\"^(?:extra|offer)/?$\",\n offer_feedback.offer_feedback,\n name=\"api_offer_feedback\",\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n url(r\"^$\", regions, name=\"api_regions\"),\n url(r\"^live/?$\", liveregions, name=\"api_regions_live\"),\n url(r\"^hidden/?$\", hiddenregions, name=\"api_regions_hidden\"),\n url(r\"^pushnew/?$\", pushnew, name=\"api_pushnew\"),\n]\n\n\nurlpatterns = [\n url(r\"^api/regions/\", include(region_api_urlpatterns)),\n url(r\"^wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n url(\n r\"^api/(?P<region_slug>[-\\w]+)/\",\n include(\n [\n url(r\"^languages/?$\", languages, name=\"api_languages\"),\n url(r\"^(?:offers|extras)/?$\", offers, name=\"api_offers\"),\n url(r\"^(?P<language_slug>[-\\w]+)/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n url(\n r\"^(?P<region_slug>[-\\w]+)/\",\n include(\n [\n url(\n r\"^de/wp-json/extensions/v3/languages/?$\",\n languages,\n name=\"api_languages\",\n ),\n url(\n r\"^(?P<language_slug>[-\\w]+)/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}], "after_files": [{"content": "\"\"\"\npages API endpoint\n\"\"\"\nfrom django.conf import settings\nfrom django.http import JsonResponse, Http404\nfrom django.shortcuts 
import get_object_or_404\n\nfrom ...cms.models import Region, Page\nfrom ..decorators import json_response\n\n\ndef transform_page(page_translation):\n \"\"\"\n Function to create a dict from a single page_translation Object.\n\n :param page_translation: single page translation object\n :type page_translation: ~integreat_cms.cms.models.pages.page_translation.PageTranslation\n\n :return: data necessary for API\n :rtype: dict\n \"\"\"\n if page_translation.page.icon:\n thumbnail = settings.BASE_URL + page_translation.page.icon.url\n else:\n thumbnail = None\n if page_translation.page.parent:\n parent = {\n \"id\": page_translation.page.parent.id,\n \"url\": page_translation.page.parent.get_translation(\n page_translation.language.slug\n ).backend_base_link,\n \"path\": \"/\"\n + page_translation.page.parent.get_translation(\n page_translation.language.slug\n ).permalink\n + \"/\",\n }\n else:\n parent = {\n \"id\": 0,\n \"url\": None,\n \"path\": None,\n }\n return {\n \"id\": page_translation.id,\n \"url\": page_translation.backend_base_link,\n \"path\": \"/\" + page_translation.permalink + \"/\",\n \"title\": page_translation.title,\n \"modified_gmt\": page_translation.combined_last_updated,\n \"excerpt\": page_translation.text,\n \"content\": page_translation.combined_text,\n \"parent\": parent,\n \"order\": page_translation.page.lft, # use left edge indicator of mptt model for order\n \"available_languages\": page_translation.available_languages,\n \"thumbnail\": thumbnail,\n \"hash\": None,\n }\n\n\n@json_response\n# pylint: disable=unused-argument\ndef pages(request, region_slug, language_slug):\n \"\"\"\n Function to iterate through all non-archived pages of a region and return them as JSON.\n\n :param request: Django request\n :type request: ~django.http.HttpRequest\n :param region_slug: slug of a region\n :type region_slug: str\n :param language_slug: language slug\n :type language_slug: str\n\n :return: JSON object according to APIv3 pages endpoint definition\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n result = []\n for page in region.get_pages():\n page_translation = page.get_public_translation(language_slug)\n if page_translation:\n result.append(transform_page(page_translation))\n return JsonResponse(\n result, safe=False\n ) # Turn off Safe-Mode to allow serializing arrays\n\n\ndef get_single_page(request, language_slug):\n \"\"\"\n Helper function returning the desired page or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :raises RuntimeError: If neither the id nor the url parameter is given\n\n :return: the requested page\n :rtype: ~integreat_cms.cms.models.pages.page.Page\n \"\"\"\n region = Region.get_current_region(request)\n\n if request.GET.get(\"id\"):\n page = get_object_or_404(region.pages, id=request.GET.get(\"id\"))\n\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n # The last path component of the url is the page translation slug\n page_translation_slug = url.split(\"/\")[-1]\n # Get page by filtering for translation slug and translation language slug\n filtered_pages = region.pages.filter(\n 
translations__slug=page_translation_slug,\n translations__language__slug=language_slug,\n ).distinct()\n\n if len(filtered_pages) != 1:\n raise Http404(\"No matching page translation found for url.\")\n page = filtered_pages[0]\n\n else:\n raise RuntimeError(\"Either the id or the url parameter is required.\")\n\n return page\n\n\n@json_response\n# pylint: disable=unused-argument\ndef single_page(request, region_slug, language_slug):\n \"\"\"\n View function returning the desired page as a JSON or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: JSON with the requested page and a HTTP status 200.\n :rtype: ~django.http.JsonResponse\n \"\"\"\n try:\n page = get_single_page(request, language_slug)\n except RuntimeError as e:\n return JsonResponse({\"error\": str(e)}, status=400)\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_slug)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n raise Http404(\"No Page matches the given url or id.\")\n\n\n@json_response\n# pylint: disable=unused-argument\ndef children(request, region_slug, language_slug):\n \"\"\"\n Retrieves all children for a single page\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: JSON with the requested page descendants\n :rtype: ~django.http.JsonResponse\n \"\"\"\n depth = int(request.GET.get(\"depth\", 1))\n try:\n # try to get a single ancestor page based on the requests query string\n root_pages = [get_single_page(request, language_slug)]\n except RuntimeError:\n # if neither id nor url is set then get all root pages\n root_pages = Page.get_root_pages(region_slug)\n # simulate a virtual root node for WP compatibility\n # so that depth = 1 returns only those pages without parents (immediate children of this virtual root page)\n # like in wordpress depth = 0 will return no results in this case\n depth = depth - 1\n result = []\n for root in root_pages:\n descendants = root.get_descendants_max_depth(True, depth)\n for descendant in descendants:\n public_translation = descendant.get_public_translation(language_slug)\n if public_translation:\n result.append(transform_page(public_translation))\n return JsonResponse(result, safe=False)\n\n\n@json_response\n# pylint: disable=unused-argument\ndef parents(request, region_slug, language_slug):\n \"\"\"\n Retrieves all ancestors (parent and all nodes up to the root node) of a page\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_slug: Code to identify the desired language\n :type language_slug: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is 
malformed or no page with the given id or url exists.\n\n :return: JSON with the requested page ancestors\n :rtype: ~django.http.JsonResponse\n \"\"\"\n current_page = get_single_page(request, language_slug)\n result = []\n for ancestor in current_page.get_ancestors(include_self=False):\n public_translation = ancestor.get_public_translation(language_slug)\n if not public_translation:\n raise Http404(\"No Page matches the given url or id.\")\n result.append(transform_page(public_translation))\n return JsonResponse(result, safe=False)\n", "path": "integreat_cms/api/v3/pages.py"}, {"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.conf.urls import include, url\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, parents, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions, pushnew\nfrom .v3.offers import offers\n\ncontent_api_urlpatterns = [\n url(r\"^pages/?$\", pages, name=\"api_pages\"),\n url(r\"^locations/?$\", locations, name=\"api_locations\"),\n url(r\"^events/?$\", events, name=\"api_events\"),\n url(\n r\"^(?:page|post)/?$\",\n single_page,\n name=\"api_single_page\",\n ),\n url(r\"^children/?$\", children, name=\"api_children\"),\n url(r\"^parents/?$\", parents, name=\"api_parents\"),\n url(\n r\"^pdf/?$\",\n pdf_export,\n name=\"api_pdf_export\",\n ),\n url(\n r\"^sent_push_notifications/?$\",\n sent_push_notifications,\n name=\"api_sent_push_notifications\",\n ),\n url(\n r\"^(?:imprint|disclaimer)/?$\",\n imprint,\n name=\"api_imprint\",\n ),\n url(r\"^(?:offers|extras)/?$\", offers, name=\"api_offers\"),\n url(\n r\"^feedback/\",\n include(\n [\n url(\n r\"^$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"api_legacy_feedback_endpoint\",\n ),\n url(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"api_region_feedback\",\n ),\n url(\n r\"^page/?$\",\n page_feedback.page_feedback,\n name=\"api_page_feedback\",\n ),\n url(\n r\"^poi/?$\",\n poi_feedback.poi_feedback,\n name=\"api_poi_feedback\",\n ),\n url(\n r\"^event/?$\",\n event_feedback.event_feedback,\n name=\"api_event_feedback\",\n ),\n url(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n name=\"api_event_list_feedback\",\n ),\n url(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"api_imprint_page_feedbacks\",\n ),\n url(\n r\"^map/?$\",\n map_feedback.map_feedback,\n name=\"api_map_feedback\",\n ),\n url(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"api_search_result_feedback\",\n ),\n url(\n r\"^(?:extras|offers)/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"api_offer_list_feedback\",\n ),\n url(\n r\"^(?:extra|offer)/?$\",\n offer_feedback.offer_feedback,\n name=\"api_offer_feedback\",\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n url(r\"^$\", regions, name=\"api_regions\"),\n url(r\"^live/?$\", liveregions, name=\"api_regions_live\"),\n url(r\"^hidden/?$\", hiddenregions, name=\"api_regions_hidden\"),\n url(r\"^pushnew/?$\", pushnew, 
name=\"api_pushnew\"),\n]\n\n\nurlpatterns = [\n url(r\"^api/regions/\", include(region_api_urlpatterns)),\n url(r\"^wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n url(\n r\"^api/(?P<region_slug>[-\\w]+)/\",\n include(\n [\n url(r\"^languages/?$\", languages, name=\"api_languages\"),\n url(r\"^(?:offers|extras)/?$\", offers, name=\"api_offers\"),\n url(r\"^(?P<language_slug>[-\\w]+)/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n url(\n r\"^(?P<region_slug>[-\\w]+)/\",\n include(\n [\n url(\n r\"^de/wp-json/extensions/v3/languages/?$\",\n languages,\n name=\"api_languages\",\n ),\n url(\n r\"^(?P<language_slug>[-\\w]+)/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}]} | 3,777 | 582 |
gh_patches_debug_23502 | rasdani/github-patches | git_diff | uccser__cs-unplugged-255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create example test for templatetags
Create example test for templatetags
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/general/templatetags/render_html_field.py`
Content:
```
1 """Module for the custom render_html_field template tag."""
2
3 from django import template
4 from django.template import Template, Variable, TemplateSyntaxError
5
6
7 class RenderHTMLFieldNode(template.Node):
8 """Class used for the custom render_html_field template tag."""
9
10 def __init__(self, item_to_be_rendered):
11 """Create the RenderHTMLFieldNode object."""
12 self.item_to_be_rendered = Variable(item_to_be_rendered)
13
14 def render(self, context):
15 """Render the text with the static template tag.
16
17 Returns:
18 Rendered string of text, or an empty string if the render
19 fails to convert.
20 """
21 try:
22 actual_item = '{% load static %}\n' + self.item_to_be_rendered.resolve(context)
23 return Template(actual_item).render(context)
24 except template.VariableDoesNotExist:
25 return ''
26
27
28 def render_html_field(parser, token):
29 """Run when the render_html_field template tag is used.
30
31 Returns:
32 Rendered string of text, or an empty string if the render
33 fails to convert.
34 """
35 bits = token.split_contents()
36 if len(bits) != 2:
37 raise TemplateSyntaxError("'%s' takes only one argument"
38 " (a variable representing a template to render)" % bits[0])
39 return RenderHTMLFieldNode(bits[1])
40
41
42 register = template.Library()
43 render_html_field = register.tag(render_html_field)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/general/templatetags/render_html_field.py b/csunplugged/general/templatetags/render_html_field.py
--- a/csunplugged/general/templatetags/render_html_field.py
+++ b/csunplugged/general/templatetags/render_html_field.py
@@ -3,6 +3,12 @@
from django import template
from django.template import Template, Variable, TemplateSyntaxError
+INVALID_ATTRIBUTE_MESSAGE = "The 'render_html_field' tag was given an " \
+ "attribute that could not be converted to a string."
+
+MISSING_ATTRIBUTE_MESSAGE = "The 'render_html_field' tag was given an " \
+ "attribute that does not exist."
+
class RenderHTMLFieldNode(template.Node):
"""Class used for the custom render_html_field template tag."""
@@ -21,8 +27,10 @@
try:
actual_item = '{% load static %}\n' + self.item_to_be_rendered.resolve(context)
return Template(actual_item).render(context)
+ except TypeError:
+ raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)
except template.VariableDoesNotExist:
- return ''
+ raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)
def render_html_field(parser, token):
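
Since the issue asks for an example test, one plausible shape for it is sketched below against the patched tag. Only the tag name `render_html_field` and the `TemplateSyntaxError` behaviour come from the diff above; the test class, method names and template strings are illustrative, and the sketch assumes the `general` app's template tags are loadable under the project's test settings.

```python
from django.template import Context, Template, TemplateSyntaxError
from django.test import SimpleTestCase


class RenderHTMLFieldTagTest(SimpleTestCase):
    def render_template(self, template_string, context=None):
        return Template(template_string).render(Context(context or {}))

    def test_renders_html_content(self):
        rendered = self.render_template(
            "{% load render_html_field %}{% render_html_field content %}",
            {"content": "<p>Example</p>"},
        )
        self.assertIn("<p>Example</p>", rendered)

    def test_missing_attribute_raises_template_syntax_error(self):
        with self.assertRaises(TemplateSyntaxError):
            self.render_template(
                "{% load render_html_field %}{% render_html_field missing %}"
            )

    def test_non_string_attribute_raises_template_syntax_error(self):
        with self.assertRaises(TemplateSyntaxError):
            self.render_template(
                "{% load render_html_field %}{% render_html_field number %}",
                {"number": 42},
            )
```
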
| {"golden_diff": "diff --git a/csunplugged/general/templatetags/render_html_field.py b/csunplugged/general/templatetags/render_html_field.py\n--- a/csunplugged/general/templatetags/render_html_field.py\n+++ b/csunplugged/general/templatetags/render_html_field.py\n@@ -3,6 +3,12 @@\n from django import template\n from django.template import Template, Variable, TemplateSyntaxError\n \n+INVALID_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n+ \"attribute that could not be converted to a string.\"\n+\n+MISSING_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n+ \"attribute that does not exist.\"\n+\n \n class RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n@@ -21,8 +27,10 @@\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n+ except TypeError:\n+ raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)\n except template.VariableDoesNotExist:\n- return ''\n+ raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)\n \n \n def render_html_field(parser, token):\n", "issue": "Create example test for templatetags\n\nCreate example test for templatetags\n\n", "before_files": [{"content": "\"\"\"Module for the custom render_html_field template tag.\"\"\"\n\nfrom django import template\nfrom django.template import Template, Variable, TemplateSyntaxError\n\n\nclass RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n\n def __init__(self, item_to_be_rendered):\n \"\"\"Create the RenderHTMLFieldNode object.\"\"\"\n self.item_to_be_rendered = Variable(item_to_be_rendered)\n\n def render(self, context):\n \"\"\"Render the text with the static template tag.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n except template.VariableDoesNotExist:\n return ''\n\n\ndef render_html_field(parser, token):\n \"\"\"Run when the render_html_field template tag is used.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes only one argument\"\n \" (a variable representing a template to render)\" % bits[0])\n return RenderHTMLFieldNode(bits[1])\n\n\nregister = template.Library()\nrender_html_field = register.tag(render_html_field)\n", "path": "csunplugged/general/templatetags/render_html_field.py"}], "after_files": [{"content": "\"\"\"Module for the custom render_html_field template tag.\"\"\"\n\nfrom django import template\nfrom django.template import Template, Variable, TemplateSyntaxError\n\nINVALID_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n \"attribute that could not be converted to a string.\"\n\nMISSING_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n \"attribute that does not exist.\"\n\n\nclass RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n\n def __init__(self, item_to_be_rendered):\n \"\"\"Create the RenderHTMLFieldNode object.\"\"\"\n self.item_to_be_rendered = Variable(item_to_be_rendered)\n\n def render(self, context):\n \"\"\"Render the text with the static template tag.\n\n Returns:\n Rendered string of text, or an empty string if 
the render\n fails to convert.\n \"\"\"\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n except TypeError:\n raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)\n except template.VariableDoesNotExist:\n raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)\n\n\ndef render_html_field(parser, token):\n \"\"\"Run when the render_html_field template tag is used.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes only one argument\"\n \" (a variable representing a template to render)\" % bits[0])\n return RenderHTMLFieldNode(bits[1])\n\n\nregister = template.Library()\nrender_html_field = register.tag(render_html_field)\n", "path": "csunplugged/general/templatetags/render_html_field.py"}]} | 667 | 274 |
gh_patches_debug_10305 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add pyhf.writexml and pyhf.readxml to public API docs
# Description
As first brought up in PR #1125, `pyhf.writexml` and `pyhf.readxml` are not currently documented in the public Python API docs. This should get fixed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
11 'xmlio': [
12 'uproot3~=3.14',
13 'uproot~=4.0',
14 ], # uproot3 required until writing to ROOT supported in uproot4
15 'minuit': ['iminuit~=2.1'],
16 }
17 extras_require['backends'] = sorted(
18 set(
19 extras_require['tensorflow']
20 + extras_require['torch']
21 + extras_require['jax']
22 + extras_require['minuit']
23 )
24 )
25 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
26 extras_require['lint'] = sorted({'flake8', 'black'})
27
28 extras_require['test'] = sorted(
29 set(
30 extras_require['backends']
31 + extras_require['xmlio']
32 + extras_require['contrib']
33 + extras_require['shellcomplete']
34 + [
35 'pytest~=6.0',
36 'pytest-cov>=2.5.1',
37 'pytest-mock',
38 'pytest-benchmark[histogram]',
39 'pytest-console-scripts',
40 'pytest-mpl',
41 'pydocstyle',
42 'coverage>=4.0', # coveralls
43 'papermill~=2.0',
44 'nteract-scrapbook~=0.2',
45 'jupyter',
46 'graphviz',
47 'jsonpatch',
48 ]
49 )
50 )
51 extras_require['docs'] = sorted(
52 {
53 'sphinx>=3.1.2',
54 'sphinxcontrib-bibtex~=1.0',
55 'sphinx-click',
56 'sphinx_rtd_theme',
57 'nbsphinx',
58 'ipywidgets',
59 'sphinx-issues',
60 'sphinx-copybutton>0.2.9',
61 }
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + [
69 'nbdime',
70 'bump2version',
71 'ipython',
72 'pre-commit',
73 'check-manifest',
74 'codemetapy>=0.3.4',
75 'twine',
76 ]
77 )
78 )
79 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
80
81
82 setup(
83 extras_require=extras_require,
84 use_scm_version=lambda: {'local_scheme': lambda version: ''},
85 )
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,16 +49,19 @@
)
)
extras_require['docs'] = sorted(
- {
- 'sphinx>=3.1.2',
- 'sphinxcontrib-bibtex~=1.0',
- 'sphinx-click',
- 'sphinx_rtd_theme',
- 'nbsphinx',
- 'ipywidgets',
- 'sphinx-issues',
- 'sphinx-copybutton>0.2.9',
- }
+ set(
+ extras_require['xmlio']
+ + [
+ 'sphinx>=3.1.2',
+ 'sphinxcontrib-bibtex~=1.0',
+ 'sphinx-click',
+ 'sphinx_rtd_theme',
+ 'nbsphinx',
+ 'ipywidgets',
+ 'sphinx-issues',
+ 'sphinx-copybutton>0.2.9',
+ ]
+ )
)
extras_require['develop'] = sorted(
set(
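
A quick, hedged way to sanity-check the effect of this change: once `pyhf[docs]` has been installed from the patched `setup.py`, the xmlio dependencies should be present and the two modules named in the issue should import cleanly, which is what Sphinx autodoc needs when building the public API reference.

```python
# Assumes `python -m pip install "pyhf[docs]"` was run against the patched setup.py.
import pyhf.readxml
import pyhf.writexml

print("pyhf.readxml and pyhf.writexml import cleanly; autodoc can document them")
```
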
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,16 +49,19 @@\n )\n )\n extras_require['docs'] = sorted(\n- {\n- 'sphinx>=3.1.2',\n- 'sphinxcontrib-bibtex~=1.0',\n- 'sphinx-click',\n- 'sphinx_rtd_theme',\n- 'nbsphinx',\n- 'ipywidgets',\n- 'sphinx-issues',\n- 'sphinx-copybutton>0.2.9',\n- }\n+ set(\n+ extras_require['xmlio']\n+ + [\n+ 'sphinx>=3.1.2',\n+ 'sphinxcontrib-bibtex~=1.0',\n+ 'sphinx-click',\n+ 'sphinx_rtd_theme',\n+ 'nbsphinx',\n+ 'ipywidgets',\n+ 'sphinx-issues',\n+ 'sphinx-copybutton>0.2.9',\n+ ]\n+ )\n )\n extras_require['develop'] = sorted(\n set(\n", "issue": "Add pyhf.writexml and pyhf.readxml to public API docs\n# Description\r\n\r\nAs first brought up in PR #1125, `pyhf.writexml` and `pyhf.readxml` are not currently documented in the public Python API docs. This should get fixed.\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': [\n 'uproot3~=3.14',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit~=2.1'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': [\n 'uproot3~=3.14',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit~=2.1'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n 
extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,068 | 239 |
gh_patches_debug_11516 | rasdani/github-patches | git_diff | mars-project__mars-2104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Ray storage case failed for Ray 1.3.0
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Ray storage case failed for Ray 1.3.0 .
Error message:
```
__________________________ test_base_operations[ray] ___________________________
storage_context = <mars.storage.ray.RayStorage object at 0x7fe06827c280>
@pytest.mark.asyncio
@require_lib
async def test_base_operations(storage_context):
storage = storage_context
data1 = np.random.rand(10, 10)
put_info1 = await storage.put(data1)
get_data1 = await storage.get(put_info1.object_id)
np.testing.assert_array_equal(data1, get_data1)
info1 = await storage.object_info(put_info1.object_id)
assert info1.size == put_info1.size
data2 = pd.DataFrame({'col1': np.arange(10),
'col2': [f'str{i}' for i in range(10)],
'col3': np.random.rand(10)},)
put_info2 = await storage.put(data2)
get_data2 = await storage.get(put_info2.object_id)
pd.testing.assert_frame_equal(data2, get_data2)
info2 = await storage.object_info(put_info2.object_id)
assert info2.size == put_info2.size
# FIXME: remove when list functionality is ready for vineyard.
if not isinstance(storage, (VineyardStorage, SharedMemoryStorage, RayStorage)):
num = len(await storage.list())
assert num == 2
await storage.delete(info2.object_id)
# test SparseMatrix
s1 = sps.csr_matrix([[1, 0, 1], [0, 0, 1]])
s = SparseNDArray(s1)
put_info3 = await storage.put(s)
get_data3 = await storage.get(put_info3.object_id)
assert isinstance(get_data3, SparseMatrix)
np.testing.assert_array_equal(get_data3.toarray(), s1.A)
np.testing.assert_array_equal(get_data3.todense(), s1.A)
# test writer and reader
t = np.random.random(10)
b = dataserializer.dumps(t)
async with await storage.open_writer(size=len(b)) as writer:
split = len(b) // 2
await writer.write(b[:split])
> await writer.write(b[split:])
mars/storage/tests/test_libs.py:174:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mars/lib/aio/file.py:56: in __aexit__
await self.close()
mars/lib/aio/base.py:25: in method
return await self._loop.run_in_executor(self._executor, func)
../../../miniconda/envs/test/lib/python3.8/concurrent/futures/thread.py:57: in run
result = self.fn(*self.args, **self.kwargs)
mars/storage/core.py:110: in close
self._write_close()
mars/storage/ray.py:76: in _write_close
worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E TypeError: put_file_like_object() takes exactly 5 positional arguments (4 given)
python/ray/_raylet.pyx:1032: TypeError
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/storage/ray.py`
Content:
```
1 from typing import Any, Dict, List, Tuple
2 from ..lib import sparse
3 from ..utils import lazy_import, implements, register_ray_serializer
4 from .base import StorageBackend, StorageLevel, ObjectInfo, register_storage_backend
5 from .core import BufferWrappedFileObject, StorageFileObject
6
7 ray = lazy_import("ray")
8
9
10 # TODO(fyrestone): make the SparseMatrix pickleable.
11
12 def _mars_sparse_matrix_serializer(value):
13 return [value.shape, value.spmatrix]
14
15
16 def _mars_sparse_matrix_deserializer(obj) -> sparse.SparseNDArray:
17 shape, spmatrix = obj
18 return sparse.matrix.SparseMatrix(spmatrix, shape=shape)
19
20
21 def _register_sparse_matrix_serializer():
22 # register a custom serializer for Mars SparseMatrix
23 register_ray_serializer(sparse.matrix.SparseMatrix,
24 serializer=_mars_sparse_matrix_serializer,
25 deserializer=_mars_sparse_matrix_deserializer)
26
27
28 class RayFileLikeObject:
29 def __init__(self):
30 self._buffers = []
31 self._size = 0
32
33 def write(self, content: bytes):
34 self._buffers.append(content)
35 self._size += len(content)
36
37 def readinto(self, buffer):
38 read_bytes = 0
39 for b in self._buffers:
40 read_pos = read_bytes + len(b)
41 buffer[read_bytes:read_pos] = b
42 read_bytes = read_pos
43 return read_bytes
44
45 def close(self):
46 self._buffers.clear()
47 self._size = 0
48
49 def tell(self):
50 return self._size
51
52
53 class RayFileObject(BufferWrappedFileObject):
54 def __init__(self, object_id: Any, mode: str):
55 self._object_id = object_id
56 super().__init__(mode, size=0)
57
58 def _write_init(self):
59 self._buffer = RayFileLikeObject()
60
61 def _read_init(self):
62 self._buffer = ray.get(self._object_id)
63 self._mv = memoryview(self._buffer)
64 self._size = len(self._buffer)
65
66 def write(self, content: bytes):
67 if not self._initialized:
68 self._write_init()
69 self._initialized = True
70
71 return self._buffer.write(content)
72
73 def _write_close(self):
74 worker = ray.worker.global_worker
75 metadata = ray.ray_constants.OBJECT_METADATA_TYPE_RAW
76 worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,
77 self._object_id)
78
79 def _read_close(self):
80 pass
81
82
83 @register_storage_backend
84 class RayStorage(StorageBackend):
85 name = 'ray'
86
87 def __init__(self, *args, **kwargs):
88 pass
89
90 @classmethod
91 @implements(StorageBackend.setup)
92 async def setup(cls, **kwargs) -> Tuple[Dict, Dict]:
93 ray.init(ignore_reinit_error=True)
94 _register_sparse_matrix_serializer()
95 return dict(), dict()
96
97 @staticmethod
98 @implements(StorageBackend.teardown)
99 async def teardown(**kwargs):
100 pass
101
102 @property
103 @implements(StorageBackend.level)
104 def level(self) -> StorageLevel:
105 # TODO(fyrestone): return StorageLevel.MEMORY & StorageLevel.DISK
106 # if object spilling is available.
107 return StorageLevel.MEMORY | StorageLevel.REMOTE
108
109 @implements(StorageBackend.get)
110 async def get(self, object_id, **kwargs) -> object:
111 if kwargs: # pragma: no cover
112 raise NotImplementedError(f'Got unsupported args: {",".join(kwargs)}')
113 return await object_id
114
115 @implements(StorageBackend.put)
116 async def put(self, obj, importance=0) -> ObjectInfo:
117 object_id = ray.put(obj)
118 # We can't get the serialized bytes length from ray.put
119 return ObjectInfo(object_id=object_id)
120
121 @implements(StorageBackend.delete)
122 async def delete(self, object_id):
123 ray.internal.free(object_id)
124
125 @implements(StorageBackend.object_info)
126 async def object_info(self, object_id) -> ObjectInfo:
127 # The performance of obtaining the object size is poor.
128 return ObjectInfo(object_id=object_id)
129
130 @implements(StorageBackend.open_writer)
131 async def open_writer(self, size=None) -> StorageFileObject:
132 new_id = ray.ObjectRef.from_random()
133 ray_writer = RayFileObject(new_id, mode='w')
134 return StorageFileObject(ray_writer, object_id=new_id)
135
136 @implements(StorageBackend.open_reader)
137 async def open_reader(self, object_id) -> StorageFileObject:
138 ray_reader = RayFileObject(object_id, mode='r')
139 return StorageFileObject(ray_reader, object_id=object_id)
140
141 @implements(StorageBackend.list)
142 async def list(self) -> List:
143 raise NotImplementedError("Ray storage does not support list")
144
145 @implements(StorageBackend.prefetch)
146 async def prefetch(self, object_id):
147 pass
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mars/storage/ray.py b/mars/storage/ray.py
--- a/mars/storage/ray.py
+++ b/mars/storage/ray.py
@@ -73,8 +73,12 @@
def _write_close(self):
worker = ray.worker.global_worker
metadata = ray.ray_constants.OBJECT_METADATA_TYPE_RAW
- worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,
- self._object_id)
+ args = [metadata, self._buffer.tell(), self._buffer, self._object_id]
+ try:
+ worker.core_worker.put_file_like_object(*args)
+ except TypeError:
+ args.append(None) # owner_address for ray >= 1.3.0
+ worker.core_worker.put_file_like_object(*args)
def _read_close(self):
pass
| {"golden_diff": "diff --git a/mars/storage/ray.py b/mars/storage/ray.py\n--- a/mars/storage/ray.py\n+++ b/mars/storage/ray.py\n@@ -73,8 +73,12 @@\n def _write_close(self):\n worker = ray.worker.global_worker\n metadata = ray.ray_constants.OBJECT_METADATA_TYPE_RAW\n- worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,\n- self._object_id)\n+ args = [metadata, self._buffer.tell(), self._buffer, self._object_id]\n+ try:\n+ worker.core_worker.put_file_like_object(*args)\n+ except TypeError:\n+ args.append(None) # owner_address for ray >= 1.3.0\n+ worker.core_worker.put_file_like_object(*args)\n \n def _read_close(self):\n pass\n", "issue": "[BUG] Ray storage case failed for Ray 1.3.0\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\n\r\nRay storage case failed for Ray 1.3.0 .\r\n\r\nError message:\r\n\r\n```\r\n__________________________ test_base_operations[ray] ___________________________\r\n\r\nstorage_context = <mars.storage.ray.RayStorage object at 0x7fe06827c280>\r\n\r\n @pytest.mark.asyncio\r\n @require_lib\r\n async def test_base_operations(storage_context):\r\n storage = storage_context\r\n \r\n data1 = np.random.rand(10, 10)\r\n put_info1 = await storage.put(data1)\r\n get_data1 = await storage.get(put_info1.object_id)\r\n np.testing.assert_array_equal(data1, get_data1)\r\n \r\n info1 = await storage.object_info(put_info1.object_id)\r\n assert info1.size == put_info1.size\r\n \r\n data2 = pd.DataFrame({'col1': np.arange(10),\r\n 'col2': [f'str{i}' for i in range(10)],\r\n 'col3': np.random.rand(10)},)\r\n put_info2 = await storage.put(data2)\r\n get_data2 = await storage.get(put_info2.object_id)\r\n pd.testing.assert_frame_equal(data2, get_data2)\r\n \r\n info2 = await storage.object_info(put_info2.object_id)\r\n assert info2.size == put_info2.size\r\n \r\n # FIXME: remove when list functionality is ready for vineyard.\r\n if not isinstance(storage, (VineyardStorage, SharedMemoryStorage, RayStorage)):\r\n num = len(await storage.list())\r\n assert num == 2\r\n await storage.delete(info2.object_id)\r\n \r\n # test SparseMatrix\r\n s1 = sps.csr_matrix([[1, 0, 1], [0, 0, 1]])\r\n s = SparseNDArray(s1)\r\n put_info3 = await storage.put(s)\r\n get_data3 = await storage.get(put_info3.object_id)\r\n assert isinstance(get_data3, SparseMatrix)\r\n np.testing.assert_array_equal(get_data3.toarray(), s1.A)\r\n np.testing.assert_array_equal(get_data3.todense(), s1.A)\r\n \r\n # test writer and reader\r\n t = np.random.random(10)\r\n b = dataserializer.dumps(t)\r\n async with await storage.open_writer(size=len(b)) as writer:\r\n split = len(b) // 2\r\n await writer.write(b[:split])\r\n> await writer.write(b[split:])\r\n\r\nmars/storage/tests/test_libs.py:174: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nmars/lib/aio/file.py:56: in __aexit__\r\n await self.close()\r\nmars/lib/aio/base.py:25: in method\r\n return await self._loop.run_in_executor(self._executor, func)\r\n../../../miniconda/envs/test/lib/python3.8/concurrent/futures/thread.py:57: in run\r\n result = self.fn(*self.args, **self.kwargs)\r\nmars/storage/core.py:110: in close\r\n self._write_close()\r\nmars/storage/ray.py:76: in _write_close\r\n worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> 
???\r\nE TypeError: put_file_like_object() takes exactly 5 positional arguments (4 given)\r\n\r\npython/ray/_raylet.pyx:1032: TypeError\r\n```\n", "before_files": [{"content": "from typing import Any, Dict, List, Tuple\nfrom ..lib import sparse\nfrom ..utils import lazy_import, implements, register_ray_serializer\nfrom .base import StorageBackend, StorageLevel, ObjectInfo, register_storage_backend\nfrom .core import BufferWrappedFileObject, StorageFileObject\n\nray = lazy_import(\"ray\")\n\n\n# TODO(fyrestone): make the SparseMatrix pickleable.\n\ndef _mars_sparse_matrix_serializer(value):\n return [value.shape, value.spmatrix]\n\n\ndef _mars_sparse_matrix_deserializer(obj) -> sparse.SparseNDArray:\n shape, spmatrix = obj\n return sparse.matrix.SparseMatrix(spmatrix, shape=shape)\n\n\ndef _register_sparse_matrix_serializer():\n # register a custom serializer for Mars SparseMatrix\n register_ray_serializer(sparse.matrix.SparseMatrix,\n serializer=_mars_sparse_matrix_serializer,\n deserializer=_mars_sparse_matrix_deserializer)\n\n\nclass RayFileLikeObject:\n def __init__(self):\n self._buffers = []\n self._size = 0\n\n def write(self, content: bytes):\n self._buffers.append(content)\n self._size += len(content)\n\n def readinto(self, buffer):\n read_bytes = 0\n for b in self._buffers:\n read_pos = read_bytes + len(b)\n buffer[read_bytes:read_pos] = b\n read_bytes = read_pos\n return read_bytes\n\n def close(self):\n self._buffers.clear()\n self._size = 0\n\n def tell(self):\n return self._size\n\n\nclass RayFileObject(BufferWrappedFileObject):\n def __init__(self, object_id: Any, mode: str):\n self._object_id = object_id\n super().__init__(mode, size=0)\n\n def _write_init(self):\n self._buffer = RayFileLikeObject()\n\n def _read_init(self):\n self._buffer = ray.get(self._object_id)\n self._mv = memoryview(self._buffer)\n self._size = len(self._buffer)\n\n def write(self, content: bytes):\n if not self._initialized:\n self._write_init()\n self._initialized = True\n\n return self._buffer.write(content)\n\n def _write_close(self):\n worker = ray.worker.global_worker\n metadata = ray.ray_constants.OBJECT_METADATA_TYPE_RAW\n worker.core_worker.put_file_like_object(metadata, self._buffer.tell(), self._buffer,\n self._object_id)\n\n def _read_close(self):\n pass\n\n\n@register_storage_backend\nclass RayStorage(StorageBackend):\n name = 'ray'\n\n def __init__(self, *args, **kwargs):\n pass\n\n @classmethod\n @implements(StorageBackend.setup)\n async def setup(cls, **kwargs) -> Tuple[Dict, Dict]:\n ray.init(ignore_reinit_error=True)\n _register_sparse_matrix_serializer()\n return dict(), dict()\n\n @staticmethod\n @implements(StorageBackend.teardown)\n async def teardown(**kwargs):\n pass\n\n @property\n @implements(StorageBackend.level)\n def level(self) -> StorageLevel:\n # TODO(fyrestone): return StorageLevel.MEMORY & StorageLevel.DISK\n # if object spilling is available.\n return StorageLevel.MEMORY | StorageLevel.REMOTE\n\n @implements(StorageBackend.get)\n async def get(self, object_id, **kwargs) -> object:\n if kwargs: # pragma: no cover\n raise NotImplementedError(f'Got unsupported args: {\",\".join(kwargs)}')\n return await object_id\n\n @implements(StorageBackend.put)\n async def put(self, obj, importance=0) -> ObjectInfo:\n object_id = ray.put(obj)\n # We can't get the serialized bytes length from ray.put\n return ObjectInfo(object_id=object_id)\n\n @implements(StorageBackend.delete)\n async def delete(self, object_id):\n ray.internal.free(object_id)\n\n 
@implements(StorageBackend.object_info)\n async def object_info(self, object_id) -> ObjectInfo:\n # The performance of obtaining the object size is poor.\n return ObjectInfo(object_id=object_id)\n\n @implements(StorageBackend.open_writer)\n async def open_writer(self, size=None) -> StorageFileObject:\n new_id = ray.ObjectRef.from_random()\n ray_writer = RayFileObject(new_id, mode='w')\n return StorageFileObject(ray_writer, object_id=new_id)\n\n @implements(StorageBackend.open_reader)\n async def open_reader(self, object_id) -> StorageFileObject:\n ray_reader = RayFileObject(object_id, mode='r')\n return StorageFileObject(ray_reader, object_id=object_id)\n\n @implements(StorageBackend.list)\n async def list(self) -> List:\n raise NotImplementedError(\"Ray storage does not support list\")\n\n @implements(StorageBackend.prefetch)\n async def prefetch(self, object_id):\n pass\n", "path": "mars/storage/ray.py"}], "after_files": [{"content": "from typing import Any, Dict, List, Tuple\nfrom ..lib import sparse\nfrom ..utils import lazy_import, implements, register_ray_serializer\nfrom .base import StorageBackend, StorageLevel, ObjectInfo, register_storage_backend\nfrom .core import BufferWrappedFileObject, StorageFileObject\n\nray = lazy_import(\"ray\")\n\n\n# TODO(fyrestone): make the SparseMatrix pickleable.\n\ndef _mars_sparse_matrix_serializer(value):\n return [value.shape, value.spmatrix]\n\n\ndef _mars_sparse_matrix_deserializer(obj) -> sparse.SparseNDArray:\n shape, spmatrix = obj\n return sparse.matrix.SparseMatrix(spmatrix, shape=shape)\n\n\ndef _register_sparse_matrix_serializer():\n # register a custom serializer for Mars SparseMatrix\n register_ray_serializer(sparse.matrix.SparseMatrix,\n serializer=_mars_sparse_matrix_serializer,\n deserializer=_mars_sparse_matrix_deserializer)\n\n\nclass RayFileLikeObject:\n def __init__(self):\n self._buffers = []\n self._size = 0\n\n def write(self, content: bytes):\n self._buffers.append(content)\n self._size += len(content)\n\n def readinto(self, buffer):\n read_bytes = 0\n for b in self._buffers:\n read_pos = read_bytes + len(b)\n buffer[read_bytes:read_pos] = b\n read_bytes = read_pos\n return read_bytes\n\n def close(self):\n self._buffers.clear()\n self._size = 0\n\n def tell(self):\n return self._size\n\n\nclass RayFileObject(BufferWrappedFileObject):\n def __init__(self, object_id: Any, mode: str):\n self._object_id = object_id\n super().__init__(mode, size=0)\n\n def _write_init(self):\n self._buffer = RayFileLikeObject()\n\n def _read_init(self):\n self._buffer = ray.get(self._object_id)\n self._mv = memoryview(self._buffer)\n self._size = len(self._buffer)\n\n def write(self, content: bytes):\n if not self._initialized:\n self._write_init()\n self._initialized = True\n\n return self._buffer.write(content)\n\n def _write_close(self):\n worker = ray.worker.global_worker\n metadata = ray.ray_constants.OBJECT_METADATA_TYPE_RAW\n args = [metadata, self._buffer.tell(), self._buffer, self._object_id]\n try:\n worker.core_worker.put_file_like_object(*args)\n except TypeError:\n args.append(None) # owner_address for ray >= 1.3.0\n worker.core_worker.put_file_like_object(*args)\n\n def _read_close(self):\n pass\n\n\n@register_storage_backend\nclass RayStorage(StorageBackend):\n name = 'ray'\n\n def __init__(self, *args, **kwargs):\n pass\n\n @classmethod\n @implements(StorageBackend.setup)\n async def setup(cls, **kwargs) -> Tuple[Dict, Dict]:\n ray.init(ignore_reinit_error=True)\n _register_sparse_matrix_serializer()\n return dict(), dict()\n\n 
@staticmethod\n @implements(StorageBackend.teardown)\n async def teardown(**kwargs):\n pass\n\n @property\n @implements(StorageBackend.level)\n def level(self) -> StorageLevel:\n # TODO(fyrestone): return StorageLevel.MEMORY & StorageLevel.DISK\n # if object spilling is available.\n return StorageLevel.MEMORY | StorageLevel.REMOTE\n\n @implements(StorageBackend.get)\n async def get(self, object_id, **kwargs) -> object:\n if kwargs: # pragma: no cover\n raise NotImplementedError(f'Got unsupported args: {\",\".join(kwargs)}')\n return await object_id\n\n @implements(StorageBackend.put)\n async def put(self, obj, importance=0) -> ObjectInfo:\n object_id = ray.put(obj)\n # We can't get the serialized bytes length from ray.put\n return ObjectInfo(object_id=object_id)\n\n @implements(StorageBackend.delete)\n async def delete(self, object_id):\n ray.internal.free(object_id)\n\n @implements(StorageBackend.object_info)\n async def object_info(self, object_id) -> ObjectInfo:\n # The performance of obtaining the object size is poor.\n return ObjectInfo(object_id=object_id)\n\n @implements(StorageBackend.open_writer)\n async def open_writer(self, size=None) -> StorageFileObject:\n new_id = ray.ObjectRef.from_random()\n ray_writer = RayFileObject(new_id, mode='w')\n return StorageFileObject(ray_writer, object_id=new_id)\n\n @implements(StorageBackend.open_reader)\n async def open_reader(self, object_id) -> StorageFileObject:\n ray_reader = RayFileObject(object_id, mode='r')\n return StorageFileObject(ray_reader, object_id=object_id)\n\n @implements(StorageBackend.list)\n async def list(self) -> List:\n raise NotImplementedError(\"Ray storage does not support list\")\n\n @implements(StorageBackend.prefetch)\n async def prefetch(self, object_id):\n pass\n", "path": "mars/storage/ray.py"}]} | 2,520 | 189 |
gh_patches_debug_38716 | rasdani/github-patches | git_diff | psychopy__psychopy-1542 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no documentation for trialHandler.data.addDataType
The TrialHandler.py demo uses the function addDataType:
```
trials.data.addDataType('choice') # this will help store things with the stimuli
trials.data.addDataType('RT') # add as many types as you like
```
But I haven't been able to find any documentation for addDataType on the psychopy.data page (http://www.psychopy.org/api/data.html). (I'd like to know what it does.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/demos/coder/experiment control/TrialHandler.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 Demo of TrialHandler
6 """
7
8 from __future__ import division
9 from __future__ import print_function
10
11 from builtins import range
12 from random import random
13 from psychopy import data
14
15 # create your list of stimuli
16 # NB as of version 1.62 you could simply import an excel spreadsheet with this
17 # using data.importConditions('someFile.xlsx')
18 stimList = []
19 for ori in range(90, 180, 30):
20 for sf in [0.5, 1.0, 2.0]:
21 # append a python 'dictionary' to the list
22 stimList.append({'sf':sf, 'ori':ori})
23
24 # organize them with the trial handler
25 trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':"Nobody", 'session':1})
26 trials.data.addDataType('choice') # this will help store things with the stimuli
27 trials.data.addDataType('RT') # add as many types as you like
28
29 # run the experiment
30 nDone = 0
31 for thisTrial in trials: # handler can act like a for loop
32 # simulate some data
33 thisReactionTime = random() + float(thisTrial['sf']) / 2.0
34 thisChoice = round(random())
35 trials.data.add('RT', thisReactionTime) # add the data to our set
36 trials.data.add('choice', thisChoice)
37 nDone += 1 # just for a quick reference
38
39 msg = 'trial %i had position %s in the list (sf=%.1f)'
40 print(msg % (nDone, trials.thisIndex, thisTrial['sf']))
41
42 # after the experiment
43 print('\n')
44 trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen
45 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
46 trials.saveAsText(fileName='testData', # also write summary data to a text file
47 stimOut=['sf', 'ori'],
48 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
49 trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)
50 sheetName = 'rawData',
51 stimOut=['sf', 'ori'],
52 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
53 trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object
54 df = trials.saveAsWideText("testDataWide.txt") # wide is useful for analysis with R or SPSS. Also returns dataframe df
55
56 # The contents of this file are in the public domain.
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/demos/coder/experiment control/TrialHandler.py b/psychopy/demos/coder/experiment control/TrialHandler.py
--- a/psychopy/demos/coder/experiment control/TrialHandler.py
+++ b/psychopy/demos/coder/experiment control/TrialHandler.py
@@ -3,6 +3,9 @@
"""
Demo of TrialHandler
+
+The contents of this file are in the public domain.
+
"""
from __future__ import division
@@ -10,6 +13,7 @@
from builtins import range
from random import random
+
from psychopy import data
# create your list of stimuli
@@ -19,12 +23,11 @@
for ori in range(90, 180, 30):
for sf in [0.5, 1.0, 2.0]:
# append a python 'dictionary' to the list
- stimList.append({'sf':sf, 'ori':ori})
+ stimList.append({'sf': sf, 'ori': ori})
# organize them with the trial handler
-trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':"Nobody", 'session':1})
-trials.data.addDataType('choice') # this will help store things with the stimuli
-trials.data.addDataType('RT') # add as many types as you like
+trials = data.TrialHandler(stimList, 10,
+ extraInfo={'participant': "Nobody", 'session': 1})
# run the experiment
nDone = 0
@@ -39,18 +42,27 @@
msg = 'trial %i had position %s in the list (sf=%.1f)'
print(msg % (nDone, trials.thisIndex, thisTrial['sf']))
-# after the experiment
+# After the experiment, print a new line
print('\n')
-trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen
- dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsText(fileName='testData', # also write summary data to a text file
- stimOut=['sf', 'ori'],
- dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)
- sheetName = 'rawData',
+
+# Write summary data to screen
+trials.printAsText(stimOut=['sf', 'ori'],
+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])
+
+# Write summary data to a text file ...
+trials.saveAsText(fileName='testData',
stimOut=['sf', 'ori'],
dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object
-df = trials.saveAsWideText("testDataWide.txt") # wide is useful for analysis with R or SPSS. Also returns dataframe df
-# The contents of this file are in the public domain.
+# ... or an xlsx file (which supports sheets)
+trials.saveAsExcel(fileName='testData',
+ sheetName='rawData',
+ stimOut=['sf', 'ori'],
+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])
+
+# Save a copy of the whole TrialHandler object, which can be reloaded later to
+# re-create the experiment.
+trials.saveAsPickle(fileName='testData')
+
+# Wide format is useful for analysis with R or SPSS.
+df = trials.saveAsWideText('testDataWide.txt')
| {"golden_diff": "diff --git a/psychopy/demos/coder/experiment control/TrialHandler.py b/psychopy/demos/coder/experiment control/TrialHandler.py\n--- a/psychopy/demos/coder/experiment control/TrialHandler.py\t\n+++ b/psychopy/demos/coder/experiment control/TrialHandler.py\t\n@@ -3,6 +3,9 @@\n \n \"\"\"\n Demo of TrialHandler\n+\n+The contents of this file are in the public domain.\n+\n \"\"\"\n \n from __future__ import division\n@@ -10,6 +13,7 @@\n \n from builtins import range\n from random import random\n+\n from psychopy import data\n \n # create your list of stimuli\n@@ -19,12 +23,11 @@\n for ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n- stimList.append({'sf':sf, 'ori':ori})\n+ stimList.append({'sf': sf, 'ori': ori})\n \n # organize them with the trial handler\n-trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':\"Nobody\", 'session':1})\n-trials.data.addDataType('choice') # this will help store things with the stimuli\n-trials.data.addDataType('RT') # add as many types as you like\n+trials = data.TrialHandler(stimList, 10,\n+ extraInfo={'participant': \"Nobody\", 'session': 1})\n \n # run the experiment\n nDone = 0\n@@ -39,18 +42,27 @@\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n \n-# after the experiment\n+# After the experiment, print a new line\n print('\\n')\n-trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen\n- dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsText(fileName='testData', # also write summary data to a text file\n- stimOut=['sf', 'ori'],\n- dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)\n- sheetName = 'rawData',\n+\n+# Write summary data to screen\n+trials.printAsText(stimOut=['sf', 'ori'],\n+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n+\n+# Write summary data to a text file ...\n+trials.saveAsText(fileName='testData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object\n-df = trials.saveAsWideText(\"testDataWide.txt\") # wide is useful for analysis with R or SPSS. Also returns dataframe df\n \n-# The contents of this file are in the public domain.\n+# ... or an xlsx file (which supports sheets)\n+trials.saveAsExcel(fileName='testData',\n+ sheetName='rawData',\n+ stimOut=['sf', 'ori'],\n+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n+\n+# Save a copy of the whole TrialHandler object, which can be reloaded later to\n+# re-create the experiment.\n+trials.saveAsPickle(fileName='testData')\n+\n+# Wide format is useful for analysis with R or SPSS.\n+df = trials.saveAsWideText('testDataWide.txt')\n", "issue": "no documentation for trialHandler.data.addDataType\nThe TrialHandler.py demo uses the function addDataType:\r\n\r\n```\r\ntrials.data.addDataType('choice') # this will help store things with the stimuli\r\ntrials.data.addDataType('RT') # add as many types as you like\r\n```\r\n\r\nBut I haven't been able to find any documentation for addDataType in the psychopy.data page (http://www.psychopy.org/api/data.html). 
(I'd like to know what it does)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nDemo of TrialHandler\n\"\"\"\n\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom builtins import range\nfrom random import random\nfrom psychopy import data\n\n# create your list of stimuli\n# NB as of version 1.62 you could simply import an excel spreadsheet with this\n# using data.importConditions('someFile.xlsx')\nstimList = []\nfor ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n stimList.append({'sf':sf, 'ori':ori})\n\n# organize them with the trial handler\ntrials = data.TrialHandler(stimList, 10, extraInfo= {'participant':\"Nobody\", 'session':1})\ntrials.data.addDataType('choice') # this will help store things with the stimuli\ntrials.data.addDataType('RT') # add as many types as you like\n\n# run the experiment\nnDone = 0\nfor thisTrial in trials: # handler can act like a for loop\n # simulate some data\n thisReactionTime = random() + float(thisTrial['sf']) / 2.0\n thisChoice = round(random())\n trials.data.add('RT', thisReactionTime) # add the data to our set\n trials.data.add('choice', thisChoice)\n nDone += 1 # just for a quick reference\n\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n\n# after the experiment\nprint('\\n')\ntrials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsText(fileName='testData', # also write summary data to a text file\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)\n sheetName = 'rawData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object\ndf = trials.saveAsWideText(\"testDataWide.txt\") # wide is useful for analysis with R or SPSS. 
Also returns dataframe df\n\n# The contents of this file are in the public domain.\n", "path": "psychopy/demos/coder/experiment control/TrialHandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nDemo of TrialHandler\n\nThe contents of this file are in the public domain.\n\n\"\"\"\n\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom builtins import range\nfrom random import random\n\nfrom psychopy import data\n\n# create your list of stimuli\n# NB as of version 1.62 you could simply import an excel spreadsheet with this\n# using data.importConditions('someFile.xlsx')\nstimList = []\nfor ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n stimList.append({'sf': sf, 'ori': ori})\n\n# organize them with the trial handler\ntrials = data.TrialHandler(stimList, 10,\n extraInfo={'participant': \"Nobody\", 'session': 1})\n\n# run the experiment\nnDone = 0\nfor thisTrial in trials: # handler can act like a for loop\n # simulate some data\n thisReactionTime = random() + float(thisTrial['sf']) / 2.0\n thisChoice = round(random())\n trials.data.add('RT', thisReactionTime) # add the data to our set\n trials.data.add('choice', thisChoice)\n nDone += 1 # just for a quick reference\n\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n\n# After the experiment, print a new line\nprint('\\n')\n\n# Write summary data to screen\ntrials.printAsText(stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# Write summary data to a text file ...\ntrials.saveAsText(fileName='testData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# ... or an xlsx file (which supports sheets)\ntrials.saveAsExcel(fileName='testData',\n sheetName='rawData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# Save a copy of the whole TrialHandler object, which can be reloaded later to\n# re-create the experiment.\ntrials.saveAsPickle(fileName='testData')\n\n# Wide format is useful for analysis with R or SPSS.\ndf = trials.saveAsWideText('testDataWide.txt')\n", "path": "psychopy/demos/coder/experiment control/TrialHandler.py"}]} | 1,060 | 820 |
gh_patches_debug_35190 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Attempt to log possible log writing errors
## Current behavior
Currently, if the logger encounters an error, the reason for the logging error isn't immediately surfaced to Cloud users.
## Proposed behavior
Something along the lines of (but not limited to):
```python
try:
write_some_logs()
except Exception as exc:
sleep(1)
write_this_error_log(exc)
except Exception as exc2:
raise exc # raise exception as it normally would
```
If writing the logs fails, the exception is passed along as an attempted log; if logging that exception also fails, it is raised as it normally would be. This would give some nice insight into "missing" logs.
## Example
Possible issues such as non-serializable bytes, large payloads, etc. could be surfaced
```
self.logger.info(b"bad bytes string")
CRITICAL - CloudHandler | Failed to write log with error: Object of type bytes is not JSON serializable
```
```
self.logger.info("LARGE STRING...........")
CRITICAL - CloudHandler | Failed to write log with error: 413 Client Error: Request Entity Too Large for url: https://api.prefect.io/graphql/alpha
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/utilities/logging.py`
Content:
```
1 """
2 Utility functions for interacting with and configuring logging. The main entrypoint for retrieving loggers for
3 customization is the `get_logger` utility.
4
5 Note that Prefect Tasks come equipped with their own loggers. These can be accessed via:
6 - `self.logger` if implementing a Task class
7 - `prefect.context.get("logger")` if using the `task` decorator
8
9 When running locally, log levels and message formatting are set via your Prefect configuration file.
10 """
11 import atexit
12 import json
13 import logging
14 import sys
15 import threading
16 import time
17 from queue import Queue, Empty
18 from typing import Any
19
20 import pendulum
21
22 import prefect
23 from prefect.utilities.context import context
24
25
26 class CloudHandler(logging.StreamHandler):
27 def __init__(self) -> None:
28 super().__init__(sys.stdout)
29 self.client = None
30 self.logger = logging.getLogger("CloudHandler")
31 handler = logging.StreamHandler(sys.stdout)
32 formatter = logging.Formatter(context.config.logging.format)
33 formatter.converter = time.gmtime # type: ignore
34 handler.setFormatter(formatter)
35 self.logger.addHandler(handler)
36 self.logger.setLevel(context.config.logging.level)
37
38 @property
39 def queue(self) -> Queue:
40 if not hasattr(self, "_queue"):
41 self._queue = Queue() # type: Queue
42 self._flush = False
43 self.start()
44 return self._queue
45
46 def flush(self) -> None:
47 self._flush = True
48 if self.client is not None:
49 self.batch_upload()
50 self._thread.join()
51
52 def batch_upload(self) -> None:
53 logs = []
54 try:
55 while True:
56 log = self.queue.get(False)
57 logs.append(log)
58 except Empty:
59 pass
60
61 if logs:
62 try:
63 assert self.client is not None
64 self.client.write_run_logs(logs)
65 except Exception as exc:
66 self.logger.critical(
67 "Failed to write log with error: {}".format(str(exc))
68 )
69
70 def _monitor(self) -> None:
71 while not self._flush:
72 self.batch_upload()
73 time.sleep(self.heartbeat)
74
75 def __del__(self) -> None:
76 if hasattr(self, "_thread"):
77 self.flush()
78 atexit.unregister(self.flush)
79
80 def start(self) -> None:
81 if not hasattr(self, "_thread"):
82 self.heartbeat = context.config.cloud.logging_heartbeat
83 self._thread = t = threading.Thread(
84 target=self._monitor, name="PrefectCloudLoggingThread"
85 )
86 t.daemon = True
87 t.start()
88 atexit.register(self.flush)
89
90 def put(self, log: dict) -> None:
91 try:
92 json.dumps(log) # make sure the payload is serializable
93 self.queue.put(log)
94 except TypeError as exc:
95 self.logger.critical("Failed to write log with error: {}".format(str(exc)))
96
97 def emit(self, record) -> None: # type: ignore
98 # if we shouldn't log to cloud, don't emit
99 if not prefect.context.config.logging.log_to_cloud:
100 return
101
102 try:
103 from prefect.client import Client
104
105 if self.client is None:
106 self.client = Client() # type: ignore
107
108 assert isinstance(self.client, Client) # mypy assert
109
110 record_dict = record.__dict__.copy()
111 log = dict()
112 log["flowRunId"] = prefect.context.get("flow_run_id", None)
113 log["taskRunId"] = prefect.context.get("task_run_id", None)
114 log["timestamp"] = pendulum.from_timestamp(
115 record_dict.pop("created", time.time())
116 ).isoformat()
117 log["name"] = record_dict.pop("name", None)
118 log["message"] = record_dict.pop("message", None)
119 log["level"] = record_dict.pop("levelname", None)
120
121 if record_dict.get("exc_text") is not None:
122 log["message"] += "\n" + record_dict.pop("exc_text", "")
123 record_dict.pop("exc_info", None)
124
125 log["info"] = record_dict
126 self.put(log)
127 except Exception as exc:
128 self.logger.critical("Failed to write log with error: {}".format(str(exc)))
129
130
131 def configure_logging(testing: bool = False) -> logging.Logger:
132 """
133 Creates a "prefect" root logger with a `StreamHandler` that has level and formatting
134 set from `prefect.config`.
135
136 Args:
137 - testing (bool, optional): a boolean specifying whether this configuration
138 is for testing purposes only; this helps us isolate any global state during testing
139 by configuring a "prefect-test-logger" instead of the standard "prefect" logger
140
141 Returns:
142 - logging.Logger: a configured logging object
143 """
144 name = "prefect-test-logger" if testing else "prefect"
145 logger = logging.getLogger(name)
146 handler = logging.StreamHandler(sys.stdout)
147 formatter = logging.Formatter(context.config.logging.format)
148 formatter.converter = time.gmtime # type: ignore
149 handler.setFormatter(formatter)
150 logger.addHandler(handler)
151 logger.setLevel(context.config.logging.level)
152
153 cloud_handler = CloudHandler()
154 cloud_handler.setLevel("DEBUG")
155 logger.addHandler(cloud_handler)
156 return logger
157
158
159 prefect_logger = configure_logging()
160
161
162 def get_logger(name: str = None) -> logging.Logger:
163 """
164 Returns a "prefect" logger.
165
166 Args:
167 - name (str): if `None`, the root Prefect logger is returned. If provided, a child
168 logger of the name `"prefect.{name}"` is returned. The child logger inherits
169 the root logger's settings.
170
171 Returns:
172 - logging.Logger: a configured logging object with the appropriate name
173 """
174 if name is None:
175 return prefect_logger
176 else:
177 return prefect_logger.getChild(name)
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py
--- a/src/prefect/utilities/logging.py
+++ b/src/prefect/utilities/logging.py
@@ -63,9 +63,15 @@
assert self.client is not None
self.client.write_run_logs(logs)
except Exception as exc:
- self.logger.critical(
- "Failed to write log with error: {}".format(str(exc))
- )
+ message = "Failed to write log with error: {}".format(str(exc))
+ self.logger.critical(message)
+
+ # Attempt to write batch error log otherwise log invalid cloud communication
+ try:
+ assert self.client is not None
+ self.client.write_run_logs([self._make_error_log(message)])
+ except Exception as exc:
+ self.logger.critical("Unable to write logs to Prefect Cloud")
def _monitor(self) -> None:
while not self._flush:
@@ -92,7 +98,10 @@
json.dumps(log) # make sure the payload is serializable
self.queue.put(log)
except TypeError as exc:
- self.logger.critical("Failed to write log with error: {}".format(str(exc)))
+ message = "Failed to write log with error: {}".format(str(exc))
+ self.logger.critical(message)
+
+ self.queue.put(self._make_error_log(message))
def emit(self, record) -> None: # type: ignore
# if we shouldn't log to cloud, don't emit
@@ -125,7 +134,21 @@
log["info"] = record_dict
self.put(log)
except Exception as exc:
- self.logger.critical("Failed to write log with error: {}".format(str(exc)))
+ message = "Failed to write log with error: {}".format(str(exc))
+ self.logger.critical(message)
+
+ self.put(self._make_error_log(message))
+
+ def _make_error_log(self, message: str) -> dict:
+ log = dict()
+ log["flowRunId"] = prefect.context.get("flow_run_id", None)
+ log["timestamp"] = pendulum.from_timestamp(time.time()).isoformat()
+ log["name"] = self.logger.name
+ log["message"] = message
+ log["level"] = "CRITICAL"
+ log["info"] = {}
+
+ return log
def configure_logging(testing: bool = False) -> logging.Logger:
| {"golden_diff": "diff --git a/src/prefect/utilities/logging.py b/src/prefect/utilities/logging.py\n--- a/src/prefect/utilities/logging.py\n+++ b/src/prefect/utilities/logging.py\n@@ -63,9 +63,15 @@\n assert self.client is not None\n self.client.write_run_logs(logs)\n except Exception as exc:\n- self.logger.critical(\n- \"Failed to write log with error: {}\".format(str(exc))\n- )\n+ message = \"Failed to write log with error: {}\".format(str(exc))\n+ self.logger.critical(message)\n+\n+ # Attempt to write batch error log otherwise log invalid cloud communication\n+ try:\n+ assert self.client is not None\n+ self.client.write_run_logs([self._make_error_log(message)])\n+ except Exception as exc:\n+ self.logger.critical(\"Unable to write logs to Prefect Cloud\")\n \n def _monitor(self) -> None:\n while not self._flush:\n@@ -92,7 +98,10 @@\n json.dumps(log) # make sure the payload is serializable\n self.queue.put(log)\n except TypeError as exc:\n- self.logger.critical(\"Failed to write log with error: {}\".format(str(exc)))\n+ message = \"Failed to write log with error: {}\".format(str(exc))\n+ self.logger.critical(message)\n+\n+ self.queue.put(self._make_error_log(message))\n \n def emit(self, record) -> None: # type: ignore\n # if we shouldn't log to cloud, don't emit\n@@ -125,7 +134,21 @@\n log[\"info\"] = record_dict\n self.put(log)\n except Exception as exc:\n- self.logger.critical(\"Failed to write log with error: {}\".format(str(exc)))\n+ message = \"Failed to write log with error: {}\".format(str(exc))\n+ self.logger.critical(message)\n+\n+ self.put(self._make_error_log(message))\n+\n+ def _make_error_log(self, message: str) -> dict:\n+ log = dict()\n+ log[\"flowRunId\"] = prefect.context.get(\"flow_run_id\", None)\n+ log[\"timestamp\"] = pendulum.from_timestamp(time.time()).isoformat()\n+ log[\"name\"] = self.logger.name\n+ log[\"message\"] = message\n+ log[\"level\"] = \"CRITICAL\"\n+ log[\"info\"] = {}\n+\n+ return log\n \n \n def configure_logging(testing: bool = False) -> logging.Logger:\n", "issue": "Attempt to log possible log writing errors\n## Current behavior\r\nCurrently if the logger encounters an error it isn't immediately surfaced to Cloud users what the reason is for the logging error.\r\n\r\n\r\n\r\n## Proposed behavior\r\nSomething along the lines of (but not limited to):\r\n```python\r\ntry:\r\n write_some_logs()\r\nexcept Exception as exc:\r\n sleep(1)\r\n write_this_error_log(exc)\r\nexcept Exception as exc2:\r\n raise exc # raise exception as it normally would\r\n```\r\n\r\nWhere if the writing of the logs fails then the exception is passed as an attempted log. Otherwise if the exception logging fails then raise as it normally would. This would give some nice insight into \"missing\" logs.\r\n\r\n## Example\r\nPossible issues such as non serializeable bytes, large payloads, etc. could be surfaced\r\n\r\n```\r\nself.logger.info(b\"bad bytes string\")\r\nCRITICAL - CloudHandler | Failed to write log with error: Object of type bytes is not JSON serializable\r\n```\r\n\r\n```\r\nself.logger.info(\"LARGE STRING...........\")\r\nCRITICAL - CloudHandler | Failed to write log with error: 413 Client Error: Request Entity Too Large for url: https://api.prefect.io/graphql/alpha\r\n```\n", "before_files": [{"content": "\"\"\"\nUtility functions for interacting with and configuring logging. The main entrypoint for retrieving loggers for\ncustomization is the `get_logger` utility.\n\nNote that Prefect Tasks come equipped with their own loggers. 
These can be accessed via:\n - `self.logger` if implementing a Task class\n - `prefect.context.get(\"logger\")` if using the `task` decorator\n\nWhen running locally, log levels and message formatting are set via your Prefect configuration file.\n\"\"\"\nimport atexit\nimport json\nimport logging\nimport sys\nimport threading\nimport time\nfrom queue import Queue, Empty\nfrom typing import Any\n\nimport pendulum\n\nimport prefect\nfrom prefect.utilities.context import context\n\n\nclass CloudHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__(sys.stdout)\n self.client = None\n self.logger = logging.getLogger(\"CloudHandler\")\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(context.config.logging.format)\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n self.logger.addHandler(handler)\n self.logger.setLevel(context.config.logging.level)\n\n @property\n def queue(self) -> Queue:\n if not hasattr(self, \"_queue\"):\n self._queue = Queue() # type: Queue\n self._flush = False\n self.start()\n return self._queue\n\n def flush(self) -> None:\n self._flush = True\n if self.client is not None:\n self.batch_upload()\n self._thread.join()\n\n def batch_upload(self) -> None:\n logs = []\n try:\n while True:\n log = self.queue.get(False)\n logs.append(log)\n except Empty:\n pass\n\n if logs:\n try:\n assert self.client is not None\n self.client.write_run_logs(logs)\n except Exception as exc:\n self.logger.critical(\n \"Failed to write log with error: {}\".format(str(exc))\n )\n\n def _monitor(self) -> None:\n while not self._flush:\n self.batch_upload()\n time.sleep(self.heartbeat)\n\n def __del__(self) -> None:\n if hasattr(self, \"_thread\"):\n self.flush()\n atexit.unregister(self.flush)\n\n def start(self) -> None:\n if not hasattr(self, \"_thread\"):\n self.heartbeat = context.config.cloud.logging_heartbeat\n self._thread = t = threading.Thread(\n target=self._monitor, name=\"PrefectCloudLoggingThread\"\n )\n t.daemon = True\n t.start()\n atexit.register(self.flush)\n\n def put(self, log: dict) -> None:\n try:\n json.dumps(log) # make sure the payload is serializable\n self.queue.put(log)\n except TypeError as exc:\n self.logger.critical(\"Failed to write log with error: {}\".format(str(exc)))\n\n def emit(self, record) -> None: # type: ignore\n # if we shouldn't log to cloud, don't emit\n if not prefect.context.config.logging.log_to_cloud:\n return\n\n try:\n from prefect.client import Client\n\n if self.client is None:\n self.client = Client() # type: ignore\n\n assert isinstance(self.client, Client) # mypy assert\n\n record_dict = record.__dict__.copy()\n log = dict()\n log[\"flowRunId\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"taskRunId\"] = prefect.context.get(\"task_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(\n record_dict.pop(\"created\", time.time())\n ).isoformat()\n log[\"name\"] = record_dict.pop(\"name\", None)\n log[\"message\"] = record_dict.pop(\"message\", None)\n log[\"level\"] = record_dict.pop(\"levelname\", None)\n\n if record_dict.get(\"exc_text\") is not None:\n log[\"message\"] += \"\\n\" + record_dict.pop(\"exc_text\", \"\")\n record_dict.pop(\"exc_info\", None)\n\n log[\"info\"] = record_dict\n self.put(log)\n except Exception as exc:\n self.logger.critical(\"Failed to write log with error: {}\".format(str(exc)))\n\n\ndef configure_logging(testing: bool = False) -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a 
`StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - testing (bool, optional): a boolean specifying whether this configuration\n is for testing purposes only; this helps us isolate any global state during testing\n by configuring a \"prefect-test-logger\" instead of the standard \"prefect\" logger\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n name = \"prefect-test-logger\" if testing else \"prefect\"\n logger = logging.getLogger(name)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(context.config.logging.format)\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(context.config.logging.level)\n\n cloud_handler = CloudHandler()\n cloud_handler.setLevel(\"DEBUG\")\n logger.addHandler(cloud_handler)\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if `None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n - logging.Logger: a configured logging object with the appropriate name\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}], "after_files": [{"content": "\"\"\"\nUtility functions for interacting with and configuring logging. The main entrypoint for retrieving loggers for\ncustomization is the `get_logger` utility.\n\nNote that Prefect Tasks come equipped with their own loggers. These can be accessed via:\n - `self.logger` if implementing a Task class\n - `prefect.context.get(\"logger\")` if using the `task` decorator\n\nWhen running locally, log levels and message formatting are set via your Prefect configuration file.\n\"\"\"\nimport atexit\nimport json\nimport logging\nimport sys\nimport threading\nimport time\nfrom queue import Queue, Empty\nfrom typing import Any\n\nimport pendulum\n\nimport prefect\nfrom prefect.utilities.context import context\n\n\nclass CloudHandler(logging.StreamHandler):\n def __init__(self) -> None:\n super().__init__(sys.stdout)\n self.client = None\n self.logger = logging.getLogger(\"CloudHandler\")\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(context.config.logging.format)\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n self.logger.addHandler(handler)\n self.logger.setLevel(context.config.logging.level)\n\n @property\n def queue(self) -> Queue:\n if not hasattr(self, \"_queue\"):\n self._queue = Queue() # type: Queue\n self._flush = False\n self.start()\n return self._queue\n\n def flush(self) -> None:\n self._flush = True\n if self.client is not None:\n self.batch_upload()\n self._thread.join()\n\n def batch_upload(self) -> None:\n logs = []\n try:\n while True:\n log = self.queue.get(False)\n logs.append(log)\n except Empty:\n pass\n\n if logs:\n try:\n assert self.client is not None\n self.client.write_run_logs(logs)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n # Attempt to write batch error log otherwise log invalid cloud communication\n try:\n assert self.client is not None\n self.client.write_run_logs([self._make_error_log(message)])\n except Exception as exc:\n 
self.logger.critical(\"Unable to write logs to Prefect Cloud\")\n\n def _monitor(self) -> None:\n while not self._flush:\n self.batch_upload()\n time.sleep(self.heartbeat)\n\n def __del__(self) -> None:\n if hasattr(self, \"_thread\"):\n self.flush()\n atexit.unregister(self.flush)\n\n def start(self) -> None:\n if not hasattr(self, \"_thread\"):\n self.heartbeat = context.config.cloud.logging_heartbeat\n self._thread = t = threading.Thread(\n target=self._monitor, name=\"PrefectCloudLoggingThread\"\n )\n t.daemon = True\n t.start()\n atexit.register(self.flush)\n\n def put(self, log: dict) -> None:\n try:\n json.dumps(log) # make sure the payload is serializable\n self.queue.put(log)\n except TypeError as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.queue.put(self._make_error_log(message))\n\n def emit(self, record) -> None: # type: ignore\n # if we shouldn't log to cloud, don't emit\n if not prefect.context.config.logging.log_to_cloud:\n return\n\n try:\n from prefect.client import Client\n\n if self.client is None:\n self.client = Client() # type: ignore\n\n assert isinstance(self.client, Client) # mypy assert\n\n record_dict = record.__dict__.copy()\n log = dict()\n log[\"flowRunId\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"taskRunId\"] = prefect.context.get(\"task_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(\n record_dict.pop(\"created\", time.time())\n ).isoformat()\n log[\"name\"] = record_dict.pop(\"name\", None)\n log[\"message\"] = record_dict.pop(\"message\", None)\n log[\"level\"] = record_dict.pop(\"levelname\", None)\n\n if record_dict.get(\"exc_text\") is not None:\n log[\"message\"] += \"\\n\" + record_dict.pop(\"exc_text\", \"\")\n record_dict.pop(\"exc_info\", None)\n\n log[\"info\"] = record_dict\n self.put(log)\n except Exception as exc:\n message = \"Failed to write log with error: {}\".format(str(exc))\n self.logger.critical(message)\n\n self.put(self._make_error_log(message))\n\n def _make_error_log(self, message: str) -> dict:\n log = dict()\n log[\"flowRunId\"] = prefect.context.get(\"flow_run_id\", None)\n log[\"timestamp\"] = pendulum.from_timestamp(time.time()).isoformat()\n log[\"name\"] = self.logger.name\n log[\"message\"] = message\n log[\"level\"] = \"CRITICAL\"\n log[\"info\"] = {}\n\n return log\n\n\ndef configure_logging(testing: bool = False) -> logging.Logger:\n \"\"\"\n Creates a \"prefect\" root logger with a `StreamHandler` that has level and formatting\n set from `prefect.config`.\n\n Args:\n - testing (bool, optional): a boolean specifying whether this configuration\n is for testing purposes only; this helps us isolate any global state during testing\n by configuring a \"prefect-test-logger\" instead of the standard \"prefect\" logger\n\n Returns:\n - logging.Logger: a configured logging object\n \"\"\"\n name = \"prefect-test-logger\" if testing else \"prefect\"\n logger = logging.getLogger(name)\n handler = logging.StreamHandler(sys.stdout)\n formatter = logging.Formatter(context.config.logging.format)\n formatter.converter = time.gmtime # type: ignore\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n logger.setLevel(context.config.logging.level)\n\n cloud_handler = CloudHandler()\n cloud_handler.setLevel(\"DEBUG\")\n logger.addHandler(cloud_handler)\n return logger\n\n\nprefect_logger = configure_logging()\n\n\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Returns a \"prefect\" logger.\n\n Args:\n - name (str): if 
`None`, the root Prefect logger is returned. If provided, a child\n logger of the name `\"prefect.{name}\"` is returned. The child logger inherits\n the root logger's settings.\n\n Returns:\n - logging.Logger: a configured logging object with the appropriate name\n \"\"\"\n if name is None:\n return prefect_logger\n else:\n return prefect_logger.getChild(name)\n", "path": "src/prefect/utilities/logging.py"}]} | 2,201 | 552 |
gh_patches_debug_2575 | rasdani/github-patches | git_diff | coala__coala-3608 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove call_without_output from Shell.py L7
This line was used by the requirement classes; it isn't used anymore as they use sarge, so it should be removed.
difficulty/newcomer
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/misc/Shell.py`
Content:
```
1 from contextlib import contextmanager
2 import functools
3 import shlex
4 from subprocess import PIPE, Popen, call, DEVNULL
5
6
7 call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)
8 """
9 Uses subprocess.call to execute a command, but suppresses the output and
10 the errors.
11 """
12
13
14 @contextmanager
15 def run_interactive_shell_command(command, **kwargs):
16 """
17 Runs a single command in shell and provides stdout, stderr and stdin
18 streams.
19
20 This function creates a context manager that sets up the process (using
21 ``subprocess.Popen()``), returns to caller and waits for process to exit on
22 leaving.
23
24 By default the process is opened in ``universal_newlines`` mode and creates
25 pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``
26 special value. These pipes are closed automatically, so if you want to get
27 the contents of the streams you should retrieve them before the context
28 manager exits.
29
30 >>> with run_interactive_shell_command(["echo", "TEXT"]) as p:
31 ... stdout = p.stdout
32 ... stdout_text = stdout.read()
33 >>> stdout_text
34 'TEXT\\n'
35 >>> stdout.closed
36 True
37
38 Custom streams provided are not closed except of ``subprocess.PIPE``.
39
40 >>> from tempfile import TemporaryFile
41 >>> stream = TemporaryFile()
42 >>> with run_interactive_shell_command(["echo", "TEXT"],
43 ... stdout=stream) as p:
44 ... stderr = p.stderr
45 >>> stderr.closed
46 True
47 >>> stream.closed
48 False
49
50 :param command: The command to run on shell. This parameter can either
51 be a sequence of arguments that are directly passed to
52 the process or a string. A string gets splitted beforehand
53 using ``shlex.split()``. If providing ``shell=True`` as a
54 keyword-argument, no ``shlex.split()`` is performed and the
55 command string goes directly to ``subprocess.Popen()``.
56 :param kwargs: Additional keyword arguments to pass to
57 ``subprocess.Popen`` that are used to spawn the process.
58 :return: A context manager yielding the process started from the
59 command.
60 """
61 if not kwargs.get('shell', False) and isinstance(command, str):
62 command = shlex.split(command)
63
64 args = {'stdout': PIPE,
65 'stderr': PIPE,
66 'stdin': PIPE,
67 'universal_newlines': True}
68 args.update(kwargs)
69
70 process = Popen(command, **args)
71 try:
72 yield process
73 finally:
74 if args['stdout'] is PIPE:
75 process.stdout.close()
76 if args['stderr'] is PIPE:
77 process.stderr.close()
78 if args['stdin'] is PIPE:
79 process.stdin.close()
80
81 process.wait()
82
83
84 def run_shell_command(command, stdin=None, **kwargs):
85 """
86 Runs a single command in shell and returns the read stdout and stderr data.
87
88 This function waits for the process (created using ``subprocess.Popen()``)
89 to exit. Effectively it wraps ``run_interactive_shell_command()`` and uses
90 ``communicate()`` on the process.
91
92 See also ``run_interactive_shell_command()``.
93
94 :param command: The command to run on shell. This parameter can either
95 be a sequence of arguments that are directly passed to
96 the process or a string. A string gets splitted beforehand
97 using ``shlex.split()``.
98 :param stdin: Initial input to send to the process.
99 :param kwargs: Additional keyword arguments to pass to
100 ``subprocess.Popen`` that is used to spawn the process.
101 :return: A tuple with ``(stdoutstring, stderrstring)``.
102 """
103 with run_interactive_shell_command(command, **kwargs) as p:
104 ret = p.communicate(stdin)
105 return ret
106
107
108 def get_shell_type(): # pragma: no cover
109 """
110 Finds the current shell type based on the outputs of common pre-defined
111 variables in them. This is useful to identify which sort of escaping
112 is required for strings.
113
114 :return: The shell type. This can be either "powershell" if Windows
115 Powershell is detected, "cmd" if command prompt is been
116 detected or "sh" if it's neither of these.
117 """
118 out = run_shell_command('echo $host.name', shell=True)[0]
119 if out.strip() == 'ConsoleHost':
120 return 'powershell'
121 out = run_shell_command('echo $0', shell=True)[0]
122 if out.strip() == '$0':
123 return 'cmd'
124 return 'sh'
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/misc/Shell.py b/coalib/misc/Shell.py
--- a/coalib/misc/Shell.py
+++ b/coalib/misc/Shell.py
@@ -1,14 +1,6 @@
from contextlib import contextmanager
-import functools
import shlex
-from subprocess import PIPE, Popen, call, DEVNULL
-
-
-call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)
-"""
-Uses subprocess.call to execute a command, but suppresses the output and
-the errors.
-"""
+from subprocess import PIPE, Popen
@contextmanager
| {"golden_diff": "diff --git a/coalib/misc/Shell.py b/coalib/misc/Shell.py\n--- a/coalib/misc/Shell.py\n+++ b/coalib/misc/Shell.py\n@@ -1,14 +1,6 @@\n from contextlib import contextmanager\n-import functools\n import shlex\n-from subprocess import PIPE, Popen, call, DEVNULL\n-\n-\n-call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)\n-\"\"\"\n-Uses subprocess.call to execute a command, but suppresses the output and\n-the errors.\n-\"\"\"\n+from subprocess import PIPE, Popen\n \n \n @contextmanager\n", "issue": "Remove call_without_output from Shell.py L7\nThis line was used by the requirement classes, it isnt used anymore as they use sarge, so it should be removed.\r\ndifficulty/newcomer\n", "before_files": [{"content": "from contextlib import contextmanager\nimport functools\nimport shlex\nfrom subprocess import PIPE, Popen, call, DEVNULL\n\n\ncall_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)\n\"\"\"\nUses subprocess.call to execute a command, but suppresses the output and\nthe errors.\n\"\"\"\n\n\n@contextmanager\ndef run_interactive_shell_command(command, **kwargs):\n \"\"\"\n Runs a single command in shell and provides stdout, stderr and stdin\n streams.\n\n This function creates a context manager that sets up the process (using\n ``subprocess.Popen()``), returns to caller and waits for process to exit on\n leaving.\n\n By default the process is opened in ``universal_newlines`` mode and creates\n pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``\n special value. These pipes are closed automatically, so if you want to get\n the contents of the streams you should retrieve them before the context\n manager exits.\n\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"]) as p:\n ... stdout = p.stdout\n ... stdout_text = stdout.read()\n >>> stdout_text\n 'TEXT\\\\n'\n >>> stdout.closed\n True\n\n Custom streams provided are not closed except of ``subprocess.PIPE``.\n\n >>> from tempfile import TemporaryFile\n >>> stream = TemporaryFile()\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"],\n ... stdout=stream) as p:\n ... stderr = p.stderr\n >>> stderr.closed\n True\n >>> stream.closed\n False\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``. If providing ``shell=True`` as a\n keyword-argument, no ``shlex.split()`` is performed and the\n command string goes directly to ``subprocess.Popen()``.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that are used to spawn the process.\n :return: A context manager yielding the process started from the\n command.\n \"\"\"\n if not kwargs.get('shell', False) and isinstance(command, str):\n command = shlex.split(command)\n\n args = {'stdout': PIPE,\n 'stderr': PIPE,\n 'stdin': PIPE,\n 'universal_newlines': True}\n args.update(kwargs)\n\n process = Popen(command, **args)\n try:\n yield process\n finally:\n if args['stdout'] is PIPE:\n process.stdout.close()\n if args['stderr'] is PIPE:\n process.stderr.close()\n if args['stdin'] is PIPE:\n process.stdin.close()\n\n process.wait()\n\n\ndef run_shell_command(command, stdin=None, **kwargs):\n \"\"\"\n Runs a single command in shell and returns the read stdout and stderr data.\n\n This function waits for the process (created using ``subprocess.Popen()``)\n to exit. 
Effectively it wraps ``run_interactive_shell_command()`` and uses\n ``communicate()`` on the process.\n\n See also ``run_interactive_shell_command()``.\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``.\n :param stdin: Initial input to send to the process.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that is used to spawn the process.\n :return: A tuple with ``(stdoutstring, stderrstring)``.\n \"\"\"\n with run_interactive_shell_command(command, **kwargs) as p:\n ret = p.communicate(stdin)\n return ret\n\n\ndef get_shell_type(): # pragma: no cover\n \"\"\"\n Finds the current shell type based on the outputs of common pre-defined\n variables in them. This is useful to identify which sort of escaping\n is required for strings.\n\n :return: The shell type. This can be either \"powershell\" if Windows\n Powershell is detected, \"cmd\" if command prompt is been\n detected or \"sh\" if it's neither of these.\n \"\"\"\n out = run_shell_command('echo $host.name', shell=True)[0]\n if out.strip() == 'ConsoleHost':\n return 'powershell'\n out = run_shell_command('echo $0', shell=True)[0]\n if out.strip() == '$0':\n return 'cmd'\n return 'sh'\n", "path": "coalib/misc/Shell.py"}], "after_files": [{"content": "from contextlib import contextmanager\nimport shlex\nfrom subprocess import PIPE, Popen\n\n\n@contextmanager\ndef run_interactive_shell_command(command, **kwargs):\n \"\"\"\n Runs a single command in shell and provides stdout, stderr and stdin\n streams.\n\n This function creates a context manager that sets up the process (using\n ``subprocess.Popen()``), returns to caller and waits for process to exit on\n leaving.\n\n By default the process is opened in ``universal_newlines`` mode and creates\n pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``\n special value. These pipes are closed automatically, so if you want to get\n the contents of the streams you should retrieve them before the context\n manager exits.\n\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"]) as p:\n ... stdout = p.stdout\n ... stdout_text = stdout.read()\n >>> stdout_text\n 'TEXT\\\\n'\n >>> stdout.closed\n True\n\n Custom streams provided are not closed except of ``subprocess.PIPE``.\n\n >>> from tempfile import TemporaryFile\n >>> stream = TemporaryFile()\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"],\n ... stdout=stream) as p:\n ... stderr = p.stderr\n >>> stderr.closed\n True\n >>> stream.closed\n False\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``. 
If providing ``shell=True`` as a\n keyword-argument, no ``shlex.split()`` is performed and the\n command string goes directly to ``subprocess.Popen()``.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that are used to spawn the process.\n :return: A context manager yielding the process started from the\n command.\n \"\"\"\n if not kwargs.get('shell', False) and isinstance(command, str):\n command = shlex.split(command)\n\n args = {'stdout': PIPE,\n 'stderr': PIPE,\n 'stdin': PIPE,\n 'universal_newlines': True}\n args.update(kwargs)\n\n process = Popen(command, **args)\n try:\n yield process\n finally:\n if args['stdout'] is PIPE:\n process.stdout.close()\n if args['stderr'] is PIPE:\n process.stderr.close()\n if args['stdin'] is PIPE:\n process.stdin.close()\n\n process.wait()\n\n\ndef run_shell_command(command, stdin=None, **kwargs):\n \"\"\"\n Runs a single command in shell and returns the read stdout and stderr data.\n\n This function waits for the process (created using ``subprocess.Popen()``)\n to exit. Effectively it wraps ``run_interactive_shell_command()`` and uses\n ``communicate()`` on the process.\n\n See also ``run_interactive_shell_command()``.\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``.\n :param stdin: Initial input to send to the process.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that is used to spawn the process.\n :return: A tuple with ``(stdoutstring, stderrstring)``.\n \"\"\"\n with run_interactive_shell_command(command, **kwargs) as p:\n ret = p.communicate(stdin)\n return ret\n\n\ndef get_shell_type(): # pragma: no cover\n \"\"\"\n Finds the current shell type based on the outputs of common pre-defined\n variables in them. This is useful to identify which sort of escaping\n is required for strings.\n\n :return: The shell type. This can be either \"powershell\" if Windows\n Powershell is detected, \"cmd\" if command prompt is been\n detected or \"sh\" if it's neither of these.\n \"\"\"\n out = run_shell_command('echo $host.name', shell=True)[0]\n if out.strip() == 'ConsoleHost':\n return 'powershell'\n out = run_shell_command('echo $0', shell=True)[0]\n if out.strip() == '$0':\n return 'cmd'\n return 'sh'\n", "path": "coalib/misc/Shell.py"}]} | 1,579 | 135 |
gh_patches_debug_52259 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-6127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`view.flows.add` command does not exist but the examples reference it
#### Problem Description
The `view.flows.add` command does not exist but the example `duplicate-modify-replay.py` shows this command being used.
`replay.client` seems to perform both the "add to view" and "replay" function.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/addons/duplicate-modify-replay.py`
Content:
```
1 """Take incoming HTTP requests and replay them with modified parameters."""
2 from mitmproxy import ctx
3
4
5 def request(flow):
6 # Avoid an infinite loop by not replaying already replayed requests
7 if flow.is_replay == "request":
8 return
9 flow = flow.copy()
10 # Only interactive tools have a view. If we have one, add a duplicate entry
11 # for our flow.
12 if "view" in ctx.master.addons:
13 ctx.master.commands.call("view.flows.add", [flow])
14 flow.request.path = "/changed"
15 ctx.master.commands.call("replay.client", [flow])
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/addons/duplicate-modify-replay.py b/examples/addons/duplicate-modify-replay.py
--- a/examples/addons/duplicate-modify-replay.py
+++ b/examples/addons/duplicate-modify-replay.py
@@ -10,6 +10,6 @@
# Only interactive tools have a view. If we have one, add a duplicate entry
# for our flow.
if "view" in ctx.master.addons:
- ctx.master.commands.call("view.flows.add", [flow])
+ ctx.master.commands.call("view.flows.duplicate", [flow])
flow.request.path = "/changed"
ctx.master.commands.call("replay.client", [flow])
| {"golden_diff": "diff --git a/examples/addons/duplicate-modify-replay.py b/examples/addons/duplicate-modify-replay.py\n--- a/examples/addons/duplicate-modify-replay.py\n+++ b/examples/addons/duplicate-modify-replay.py\n@@ -10,6 +10,6 @@\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n- ctx.master.commands.call(\"view.flows.add\", [flow])\n+ ctx.master.commands.call(\"view.flows.duplicate\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "issue": "`view.flows.add` command does not exist but the examples reference it\n#### Problem Description\r\n\r\nThe `view.flows.add` command does not exist but the example `duplicate-modify-replay.py` shows this command being used.\r\n\r\n`replay.client` seems to perform both the \"add to view\" and \"replay\" function.\n", "before_files": [{"content": "\"\"\"Take incoming HTTP requests and replay them with modified parameters.\"\"\"\nfrom mitmproxy import ctx\n\n\ndef request(flow):\n # Avoid an infinite loop by not replaying already replayed requests\n if flow.is_replay == \"request\":\n return\n flow = flow.copy()\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n ctx.master.commands.call(\"view.flows.add\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "path": "examples/addons/duplicate-modify-replay.py"}], "after_files": [{"content": "\"\"\"Take incoming HTTP requests and replay them with modified parameters.\"\"\"\nfrom mitmproxy import ctx\n\n\ndef request(flow):\n # Avoid an infinite loop by not replaying already replayed requests\n if flow.is_replay == \"request\":\n return\n flow = flow.copy()\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n ctx.master.commands.call(\"view.flows.duplicate\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "path": "examples/addons/duplicate-modify-replay.py"}]} | 487 | 152 |
gh_patches_debug_39141 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider removing exporters from MeterProvider __init__ parameters
From [this comment](https://github.com/open-telemetry/opentelemetry-python/pull/2296#discussion_r765002669).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from atexit import register, unregister
16 from logging import getLogger
17 from threading import Lock
18 from typing import Optional, Sequence
19
20 from opentelemetry._metrics import Meter as APIMeter
21 from opentelemetry._metrics import MeterProvider as APIMeterProvider
22 from opentelemetry._metrics import _DefaultMeter
23 from opentelemetry._metrics.instrument import Counter as APICounter
24 from opentelemetry._metrics.instrument import Histogram as APIHistogram
25 from opentelemetry._metrics.instrument import (
26 ObservableCounter as APIObservableCounter,
27 )
28 from opentelemetry._metrics.instrument import (
29 ObservableGauge as APIObservableGauge,
30 )
31 from opentelemetry._metrics.instrument import (
32 ObservableUpDownCounter as APIObservableUpDownCounter,
33 )
34 from opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter
35 from opentelemetry.sdk._metrics.export import MetricExporter
36 from opentelemetry.sdk._metrics.instrument import (
37 Counter,
38 Histogram,
39 ObservableCounter,
40 ObservableGauge,
41 ObservableUpDownCounter,
42 UpDownCounter,
43 )
44 from opentelemetry.sdk._metrics.metric_reader import MetricReader
45 from opentelemetry.sdk.resources import Resource
46 from opentelemetry.sdk.util.instrumentation import InstrumentationInfo
47
48 _logger = getLogger(__name__)
49
50
51 class Meter(APIMeter):
52 def __init__(
53 self,
54 instrumentation_info: InstrumentationInfo,
55 meter_provider: APIMeterProvider,
56 ):
57 super().__init__(instrumentation_info)
58 self._instrumentation_info = instrumentation_info
59 self._meter_provider = meter_provider
60
61 def create_counter(self, name, unit=None, description=None) -> APICounter:
62 return Counter(self._instrumentation_info, name, unit, description)
63
64 def create_up_down_counter(
65 self, name, unit=None, description=None
66 ) -> APIUpDownCounter:
67 return UpDownCounter(
68 self._instrumentation_info, name, unit, description
69 )
70
71 def create_observable_counter(
72 self, name, callback, unit=None, description=None
73 ) -> APIObservableCounter:
74 return ObservableCounter(
75 self._instrumentation_info, name, callback, unit, description
76 )
77
78 def create_histogram(
79 self, name, unit=None, description=None
80 ) -> APIHistogram:
81 return Histogram(self._instrumentation_info, name, unit, description)
82
83 def create_observable_gauge(
84 self, name, callback, unit=None, description=None
85 ) -> APIObservableGauge:
86 return ObservableGauge(
87 self._instrumentation_info, name, callback, unit, description
88 )
89
90 def create_observable_up_down_counter(
91 self, name, callback, unit=None, description=None
92 ) -> APIObservableUpDownCounter:
93 return ObservableUpDownCounter(
94 self._instrumentation_info, name, callback, unit, description
95 )
96
97
98 class MeterProvider(APIMeterProvider):
99 """See `opentelemetry._metrics.MeterProvider`."""
100
101 def __init__(
102 self,
103 metric_exporters: Sequence[MetricExporter] = (),
104 metric_readers: Sequence[MetricReader] = (),
105 resource: Resource = Resource.create({}),
106 shutdown_on_exit: bool = True,
107 ):
108 self._lock = Lock()
109 self._atexit_handler = None
110
111 if shutdown_on_exit:
112 self._atexit_handler = register(self.shutdown)
113
114 self._metric_readers = metric_readers
115
116 for metric_reader in self._metric_readers:
117 metric_reader._register_meter_provider(self)
118
119 self._metric_exporters = metric_exporters
120
121 self._resource = resource
122 self._shutdown = False
123
124 def force_flush(self) -> bool:
125
126 # FIXME implement a timeout
127
128 metric_reader_result = True
129 metric_exporter_result = True
130
131 for metric_reader in self._metric_readers:
132 metric_reader_result = (
133 metric_reader_result and metric_reader.force_flush()
134 )
135
136 if not metric_reader_result:
137 _logger.warning("Unable to force flush all metric readers")
138
139 for metric_exporter in self._metric_exporters:
140 metric_exporter_result = (
141 metric_exporter_result and metric_exporter.force_flush()
142 )
143
144 if not metric_exporter_result:
145 _logger.warning("Unable to force flush all metric exporters")
146
147 return metric_reader_result and metric_exporter_result
148
149 def shutdown(self):
150 # FIXME implement a timeout
151
152 if self._shutdown:
153 _logger.warning("shutdown can only be called once")
154 return False
155
156 result = True
157
158 for metric_reader in self._metric_readers:
159 result = result and metric_reader.shutdown()
160
161 if not result:
162 _logger.warning("A MetricReader failed to shutdown")
163
164 for metric_exporter in self._metric_exporters:
165 result = result and metric_exporter.shutdown()
166
167 if not result:
168 _logger.warning("A MetricExporter failed to shutdown")
169
170 self._shutdown = True
171
172 if self._atexit_handler is not None:
173 unregister(self._atexit_handler)
174 self._atexit_handler = None
175
176 return result
177
178 def get_meter(
179 self,
180 name: str,
181 version: Optional[str] = None,
182 schema_url: Optional[str] = None,
183 ) -> Meter:
184
185 if self._shutdown:
186 _logger.warning(
187 "A shutdown `MeterProvider` can not provide a `Meter`"
188 )
189 return _DefaultMeter(name, version=version, schema_url=schema_url)
190
191 return Meter(InstrumentationInfo(name, version, schema_url), self)
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py
@@ -32,7 +32,6 @@
ObservableUpDownCounter as APIObservableUpDownCounter,
)
from opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter
-from opentelemetry.sdk._metrics.export import MetricExporter
from opentelemetry.sdk._metrics.instrument import (
Counter,
Histogram,
@@ -100,7 +99,6 @@
def __init__(
self,
- metric_exporters: Sequence[MetricExporter] = (),
metric_readers: Sequence[MetricReader] = (),
resource: Resource = Resource.create({}),
shutdown_on_exit: bool = True,
@@ -116,8 +114,6 @@
for metric_reader in self._metric_readers:
metric_reader._register_meter_provider(self)
- self._metric_exporters = metric_exporters
-
self._resource = resource
self._shutdown = False
@@ -126,7 +122,6 @@
# FIXME implement a timeout
metric_reader_result = True
- metric_exporter_result = True
for metric_reader in self._metric_readers:
metric_reader_result = (
@@ -136,15 +131,7 @@
if not metric_reader_result:
_logger.warning("Unable to force flush all metric readers")
- for metric_exporter in self._metric_exporters:
- metric_exporter_result = (
- metric_exporter_result and metric_exporter.force_flush()
- )
-
- if not metric_exporter_result:
- _logger.warning("Unable to force flush all metric exporters")
-
- return metric_reader_result and metric_exporter_result
+ return metric_reader_result
def shutdown(self):
# FIXME implement a timeout
@@ -161,12 +148,6 @@
if not result:
_logger.warning("A MetricReader failed to shutdown")
- for metric_exporter in self._metric_exporters:
- result = result and metric_exporter.shutdown()
-
- if not result:
- _logger.warning("A MetricExporter failed to shutdown")
-
self._shutdown = True
if self._atexit_handler is not None:
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py\n@@ -32,7 +32,6 @@\n ObservableUpDownCounter as APIObservableUpDownCounter,\n )\n from opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\n-from opentelemetry.sdk._metrics.export import MetricExporter\n from opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n@@ -100,7 +99,6 @@\n \n def __init__(\n self,\n- metric_exporters: Sequence[MetricExporter] = (),\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n@@ -116,8 +114,6 @@\n for metric_reader in self._metric_readers:\n metric_reader._register_meter_provider(self)\n \n- self._metric_exporters = metric_exporters\n-\n self._resource = resource\n self._shutdown = False\n \n@@ -126,7 +122,6 @@\n # FIXME implement a timeout\n \n metric_reader_result = True\n- metric_exporter_result = True\n \n for metric_reader in self._metric_readers:\n metric_reader_result = (\n@@ -136,15 +131,7 @@\n if not metric_reader_result:\n _logger.warning(\"Unable to force flush all metric readers\")\n \n- for metric_exporter in self._metric_exporters:\n- metric_exporter_result = (\n- metric_exporter_result and metric_exporter.force_flush()\n- )\n-\n- if not metric_exporter_result:\n- _logger.warning(\"Unable to force flush all metric exporters\")\n-\n- return metric_reader_result and metric_exporter_result\n+ return metric_reader_result\n \n def shutdown(self):\n # FIXME implement a timeout\n@@ -161,12 +148,6 @@\n if not result:\n _logger.warning(\"A MetricReader failed to shutdown\")\n \n- for metric_exporter in self._metric_exporters:\n- result = result and metric_exporter.shutdown()\n-\n- if not result:\n- _logger.warning(\"A MetricExporter failed to shutdown\")\n-\n self._shutdown = True\n \n if self._atexit_handler is not None:\n", "issue": "Consider removing exporters from MeterProvider __init__ parameters\nFrom [this comment](https://github.com/open-telemetry/opentelemetry-python/pull/2296#discussion_r765002669).\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom atexit import register, unregister\nfrom logging import getLogger\nfrom threading import Lock\nfrom typing import Optional, Sequence\n\nfrom opentelemetry._metrics import Meter as APIMeter\nfrom opentelemetry._metrics import MeterProvider as APIMeterProvider\nfrom opentelemetry._metrics import _DefaultMeter\nfrom opentelemetry._metrics.instrument import Counter as APICounter\nfrom opentelemetry._metrics.instrument import Histogram as APIHistogram\nfrom opentelemetry._metrics.instrument import (\n ObservableCounter as APIObservableCounter,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableGauge as 
APIObservableGauge,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableUpDownCounter as APIObservableUpDownCounter,\n)\nfrom opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\nfrom opentelemetry.sdk._metrics.export import MetricExporter\nfrom opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationInfo\n\n_logger = getLogger(__name__)\n\n\nclass Meter(APIMeter):\n def __init__(\n self,\n instrumentation_info: InstrumentationInfo,\n meter_provider: APIMeterProvider,\n ):\n super().__init__(instrumentation_info)\n self._instrumentation_info = instrumentation_info\n self._meter_provider = meter_provider\n\n def create_counter(self, name, unit=None, description=None) -> APICounter:\n return Counter(self._instrumentation_info, name, unit, description)\n\n def create_up_down_counter(\n self, name, unit=None, description=None\n ) -> APIUpDownCounter:\n return UpDownCounter(\n self._instrumentation_info, name, unit, description\n )\n\n def create_observable_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableCounter:\n return ObservableCounter(\n self._instrumentation_info, name, callback, unit, description\n )\n\n def create_histogram(\n self, name, unit=None, description=None\n ) -> APIHistogram:\n return Histogram(self._instrumentation_info, name, unit, description)\n\n def create_observable_gauge(\n self, name, callback, unit=None, description=None\n ) -> APIObservableGauge:\n return ObservableGauge(\n self._instrumentation_info, name, callback, unit, description\n )\n\n def create_observable_up_down_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableUpDownCounter:\n return ObservableUpDownCounter(\n self._instrumentation_info, name, callback, unit, description\n )\n\n\nclass MeterProvider(APIMeterProvider):\n \"\"\"See `opentelemetry._metrics.MeterProvider`.\"\"\"\n\n def __init__(\n self,\n metric_exporters: Sequence[MetricExporter] = (),\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n ):\n self._lock = Lock()\n self._atexit_handler = None\n\n if shutdown_on_exit:\n self._atexit_handler = register(self.shutdown)\n\n self._metric_readers = metric_readers\n\n for metric_reader in self._metric_readers:\n metric_reader._register_meter_provider(self)\n\n self._metric_exporters = metric_exporters\n\n self._resource = resource\n self._shutdown = False\n\n def force_flush(self) -> bool:\n\n # FIXME implement a timeout\n\n metric_reader_result = True\n metric_exporter_result = True\n\n for metric_reader in self._metric_readers:\n metric_reader_result = (\n metric_reader_result and metric_reader.force_flush()\n )\n\n if not metric_reader_result:\n _logger.warning(\"Unable to force flush all metric readers\")\n\n for metric_exporter in self._metric_exporters:\n metric_exporter_result = (\n metric_exporter_result and metric_exporter.force_flush()\n )\n\n if not metric_exporter_result:\n _logger.warning(\"Unable to force flush all metric exporters\")\n\n return metric_reader_result and metric_exporter_result\n\n def shutdown(self):\n # FIXME implement a timeout\n\n if self._shutdown:\n _logger.warning(\"shutdown can only be called once\")\n return 
False\n\n result = True\n\n for metric_reader in self._metric_readers:\n result = result and metric_reader.shutdown()\n\n if not result:\n _logger.warning(\"A MetricReader failed to shutdown\")\n\n for metric_exporter in self._metric_exporters:\n result = result and metric_exporter.shutdown()\n\n if not result:\n _logger.warning(\"A MetricExporter failed to shutdown\")\n\n self._shutdown = True\n\n if self._atexit_handler is not None:\n unregister(self._atexit_handler)\n self._atexit_handler = None\n\n return result\n\n def get_meter(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Meter:\n\n if self._shutdown:\n _logger.warning(\n \"A shutdown `MeterProvider` can not provide a `Meter`\"\n )\n return _DefaultMeter(name, version=version, schema_url=schema_url)\n\n return Meter(InstrumentationInfo(name, version, schema_url), self)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom atexit import register, unregister\nfrom logging import getLogger\nfrom threading import Lock\nfrom typing import Optional, Sequence\n\nfrom opentelemetry._metrics import Meter as APIMeter\nfrom opentelemetry._metrics import MeterProvider as APIMeterProvider\nfrom opentelemetry._metrics import _DefaultMeter\nfrom opentelemetry._metrics.instrument import Counter as APICounter\nfrom opentelemetry._metrics.instrument import Histogram as APIHistogram\nfrom opentelemetry._metrics.instrument import (\n ObservableCounter as APIObservableCounter,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableGauge as APIObservableGauge,\n)\nfrom opentelemetry._metrics.instrument import (\n ObservableUpDownCounter as APIObservableUpDownCounter,\n)\nfrom opentelemetry._metrics.instrument import UpDownCounter as APIUpDownCounter\nfrom opentelemetry.sdk._metrics.instrument import (\n Counter,\n Histogram,\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk._metrics.metric_reader import MetricReader\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationInfo\n\n_logger = getLogger(__name__)\n\n\nclass Meter(APIMeter):\n def __init__(\n self,\n instrumentation_info: InstrumentationInfo,\n meter_provider: APIMeterProvider,\n ):\n super().__init__(instrumentation_info)\n self._instrumentation_info = instrumentation_info\n self._meter_provider = meter_provider\n\n def create_counter(self, name, unit=None, description=None) -> APICounter:\n return Counter(self._instrumentation_info, name, unit, description)\n\n def create_up_down_counter(\n self, name, unit=None, description=None\n ) -> APIUpDownCounter:\n return UpDownCounter(\n self._instrumentation_info, name, unit, description\n )\n\n def create_observable_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableCounter:\n return 
ObservableCounter(\n self._instrumentation_info, name, callback, unit, description\n )\n\n def create_histogram(\n self, name, unit=None, description=None\n ) -> APIHistogram:\n return Histogram(self._instrumentation_info, name, unit, description)\n\n def create_observable_gauge(\n self, name, callback, unit=None, description=None\n ) -> APIObservableGauge:\n return ObservableGauge(\n self._instrumentation_info, name, callback, unit, description\n )\n\n def create_observable_up_down_counter(\n self, name, callback, unit=None, description=None\n ) -> APIObservableUpDownCounter:\n return ObservableUpDownCounter(\n self._instrumentation_info, name, callback, unit, description\n )\n\n\nclass MeterProvider(APIMeterProvider):\n \"\"\"See `opentelemetry._metrics.MeterProvider`.\"\"\"\n\n def __init__(\n self,\n metric_readers: Sequence[MetricReader] = (),\n resource: Resource = Resource.create({}),\n shutdown_on_exit: bool = True,\n ):\n self._lock = Lock()\n self._atexit_handler = None\n\n if shutdown_on_exit:\n self._atexit_handler = register(self.shutdown)\n\n self._metric_readers = metric_readers\n\n for metric_reader in self._metric_readers:\n metric_reader._register_meter_provider(self)\n\n self._resource = resource\n self._shutdown = False\n\n def force_flush(self) -> bool:\n\n # FIXME implement a timeout\n\n metric_reader_result = True\n\n for metric_reader in self._metric_readers:\n metric_reader_result = (\n metric_reader_result and metric_reader.force_flush()\n )\n\n if not metric_reader_result:\n _logger.warning(\"Unable to force flush all metric readers\")\n\n return metric_reader_result\n\n def shutdown(self):\n # FIXME implement a timeout\n\n if self._shutdown:\n _logger.warning(\"shutdown can only be called once\")\n return False\n\n result = True\n\n for metric_reader in self._metric_readers:\n result = result and metric_reader.shutdown()\n\n if not result:\n _logger.warning(\"A MetricReader failed to shutdown\")\n\n self._shutdown = True\n\n if self._atexit_handler is not None:\n unregister(self._atexit_handler)\n self._atexit_handler = None\n\n return result\n\n def get_meter(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Meter:\n\n if self._shutdown:\n _logger.warning(\n \"A shutdown `MeterProvider` can not provide a `Meter`\"\n )\n return _DefaultMeter(name, version=version, schema_url=schema_url)\n\n return Meter(InstrumentationInfo(name, version, schema_url), self)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_metrics/__init__.py"}]} | 2,100 | 545 |
gh_patches_debug_17692 | rasdani/github-patches | git_diff | sopel-irc__sopel-1262 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[wikipedia] Queries with e.g. & do not work properly
````
<dgw> ;w Miraculous Ladybug
<Kaede> KeyError: u'extract' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py",
line 87, in mw_snippet)
[…]
<dgw> https://en.wikipedia.org/wiki/Miraculous:_Tales_of_Ladybug_%26_Cat_Noir
<Kaede> KeyError: u'extract' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py",
line 87, in mw_snippet)
````
Basically, if a search returns a page title that contains `&`, e.g. "Miraculous: Tales of Ladybug & Cat Noir", the `wikipedia` module winds up trying to retrieve a snippet for, e.g., "Miraculous: Tales of Ladybug " and failing to do so because the page doesn't exist.
I have a fix for this in testing, but wanted to open the issue so anyone searching for the problem would find out that it is already on the radar.
Small fix for URLs with "+" in them
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/wikipedia.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2013 Elsie Powell - embolalia.com
3 # Licensed under the Eiffel Forum License 2.
4 from __future__ import unicode_literals, absolute_import, print_function, division
5 from sopel import web, tools
6 from sopel.config.types import StaticSection, ValidatedAttribute
7 from sopel.module import NOLIMIT, commands, example, rule
8 import json
9 import re
10
11 import sys
12 if sys.version_info.major < 3:
13 from urlparse import unquote as _unquote
14 unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
15 else:
16 from urllib.parse import unquote
17
18 REDIRECT = re.compile(r'^REDIRECT (.*)')
19
20
21 class WikipediaSection(StaticSection):
22 default_lang = ValidatedAttribute('default_lang', default='en')
23 """The default language to find articles from."""
24 lang_per_channel = ValidatedAttribute('lang_per_channel')
25
26
27 def setup(bot):
28 bot.config.define_section('wikipedia', WikipediaSection)
29
30 regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')
31 if not bot.memory.contains('url_callbacks'):
32 bot.memory['url_callbacks'] = tools.SopelMemory()
33 bot.memory['url_callbacks'][regex] = mw_info
34
35
36 def configure(config):
37 config.define_section('wikipedia', WikipediaSection)
38 config.wikipedia.configure_setting(
39 'default_lang',
40 "Enter the default language to find articles from."
41 )
42
43
44 def mw_search(server, query, num):
45 """
46 Searches the specified MediaWiki server for the given query, and returns
47 the specified number of results.
48 """
49 search_url = ('http://%s/w/api.php?format=json&action=query'
50 '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'
51 '&srsearch=') % (server, num)
52 search_url += query
53 query = json.loads(web.get(search_url))
54 if 'query' in query:
55 query = query['query']['search']
56 return [r['title'] for r in query]
57 else:
58 return None
59
60
61 def say_snippet(bot, server, query, show_url=True):
62 page_name = query.replace('_', ' ')
63 query = query.replace(' ', '_')
64 try:
65 snippet = mw_snippet(server, query)
66 except KeyError:
67 if show_url:
68 bot.say("[WIKIPEDIA] Error fetching snippet for \"{}\".".format(page_name))
69 return
70 msg = '[WIKIPEDIA] {} | "{}"'.format(page_name, snippet)
71 if show_url:
72 msg = msg + ' | https://{}/wiki/{}'.format(server, query)
73 bot.say(msg)
74
75
76 def mw_snippet(server, query):
77 """
78 Retrieves a snippet of the specified length from the given page on the given
79 server.
80 """
81 snippet_url = ('https://' + server + '/w/api.php?format=json'
82 '&action=query&prop=extracts&exintro&explaintext'
83 '&exchars=300&redirects&titles=')
84 snippet_url += query
85 snippet = json.loads(web.get(snippet_url))
86 snippet = snippet['query']['pages']
87
88 # For some reason, the API gives the page *number* as the key, so we just
89 # grab the first page number in the results.
90 snippet = snippet[list(snippet.keys())[0]]
91
92 return snippet['extract']
93
94
95 @rule('.*\/([a-z]+\.wikipedia.org)\/wiki\/((?!File\:)[^ ]+).*')
96 def mw_info(bot, trigger, found_match=None):
97 """
98 Retrieves a snippet of the specified length from the given page on the given
99 server.
100 """
101 match = found_match or trigger
102 say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)
103
104
105 @commands('w', 'wiki', 'wik')
106 @example('.w San Francisco')
107 def wikipedia(bot, trigger):
108 lang = bot.config.wikipedia.default_lang
109
110 # change lang if channel has custom language set
111 if (trigger.sender and not trigger.sender.is_nick() and
112 bot.config.wikipedia.lang_per_channel):
113 customlang = re.search('(' + trigger.sender + '):(\w+)',
114 bot.config.wikipedia.lang_per_channel)
115 if customlang is not None:
116 lang = customlang.group(2)
117
118 if trigger.group(2) is None:
119 bot.reply("What do you want me to look up?")
120 return NOLIMIT
121
122 query = trigger.group(2)
123 args = re.search(r'^-([a-z]{2,12})\s(.*)', query)
124 if args is not None:
125 lang = args.group(1)
126 query = args.group(2)
127
128 if not query:
129 bot.reply('What do you want me to look up?')
130 return NOLIMIT
131 server = lang + '.wikipedia.org'
132 query = mw_search(server, query, 1)
133 if not query:
134 bot.reply("I can't find any results for that.")
135 return NOLIMIT
136 else:
137 query = query[0]
138 say_snippet(bot, server, query)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py
--- a/sopel/modules/wikipedia.py
+++ b/sopel/modules/wikipedia.py
@@ -10,10 +10,12 @@
import sys
if sys.version_info.major < 3:
+ from urllib import quote as _quote
from urlparse import unquote as _unquote
+ quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')
unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
else:
- from urllib.parse import unquote
+ from urllib.parse import quote, unquote
REDIRECT = re.compile(r'^REDIRECT (.*)')
@@ -60,7 +62,7 @@
def say_snippet(bot, server, query, show_url=True):
page_name = query.replace('_', ' ')
- query = query.replace(' ', '_')
+ query = quote(query.replace(' ', '_'))
try:
snippet = mw_snippet(server, query)
except KeyError:
| {"golden_diff": "diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py\n--- a/sopel/modules/wikipedia.py\n+++ b/sopel/modules/wikipedia.py\n@@ -10,10 +10,12 @@\n \n import sys\n if sys.version_info.major < 3:\n+ from urllib import quote as _quote\n from urlparse import unquote as _unquote\n+ quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\n else:\n- from urllib.parse import unquote\n+ from urllib.parse import quote, unquote\n \n REDIRECT = re.compile(r'^REDIRECT (.*)')\n \n@@ -60,7 +62,7 @@\n \n def say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n- query = query.replace(' ', '_')\n+ query = quote(query.replace(' ', '_'))\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n", "issue": "[wikipedia] Queries with e.g. & do not work properly\n````\r\n<dgw> ;w Miraculous Ladybug\r\n<Kaede> KeyError: u'extract' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py\",\r\n line 87, in mw_snippet)\r\n[\u2026]\r\n<dgw> https://en.wikipedia.org/wiki/Miraculous:_Tales_of_Ladybug_%26_Cat_Noir\r\n<Kaede> KeyError: u'extract' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py\",\r\n line 87, in mw_snippet)\r\n````\r\n\r\nBasically, if a search returns a page title that contains `&`, e.g. \"Miraculous: Tales of Ladybug & Cat Noir\", the `wikipedia` module winds up trying to retrieve a snippet for, e.g., \"Miraculous: Tales of Ladybug \" and failing to do so because the page doesn't exist.\r\n\r\nI have a fix for this in testing, but wanted to open the issue so anyone searching for the problem would find out that it is already on the radar.\nSmall fix for URLs with \"+\" in them\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote as _unquote\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n 
query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n if show_url:\n bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n # change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urllib import quote as _quote\n from urlparse import unquote as _unquote\n quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import quote, unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n 
bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = quote(query.replace(' ', '_'))\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n if show_url:\n bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n # change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}]} | 1,965 | 242 |
gh_patches_debug_15528 | rasdani/github-patches | git_diff | vyperlang__vyper-3745 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update sphinx version
we are on https://github.com/vyperlang/vyper/blob/9cf66c9dd12c9a020c6945c100cc1266be262ebe/requirements-docs.txt#L1 but the latest version is v7.2.6 - https://pypi.org/project/Sphinx/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Vyper documentation build configuration file, created by
5 # sphinx-quickstart on Wed Jul 26 11:18:29 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23 from recommonmark.parser import CommonMarkParser
24
25 # TO DO - Create and Implement Vyper Lexer
26 # def setup(sphinx):
27 # sys.path.insert(0, os.path.abspath('./utils'))
28 # from SolidityLexer import SolidityLexer
29 # sphinx.add_lexer('Python', SolidityLexer())
30
31
32 # -- General configuration ------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 "sphinx.ext.autodoc",
43 "sphinx.ext.intersphinx",
44 ]
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # The suffix(es) of source filenames.
50 # You can specify multiple suffixes as a list of strings:
51 #
52 source_suffix = [".rst", ".md"]
53
54 # The master toctree document.
55 master_doc = "toctree"
56
57 # General information about the project.
58 project = "Vyper"
59 copyright = "2017-2020 CC-BY-4.0 Vyper Team"
60 author = "Vyper Team (originally created by Vitalik Buterin)"
61
62 # The version info for the project you're documenting, acts as replacement for
63 # |version| and |release|, also used in various other places throughout the
64 # built documents.
65 #
66 # The short X.Y version.
67 version = ""
68 # The full version, including alpha/beta/rc tags.
69 release = ""
70
71 # The language for content autogenerated by Sphinx. Refer to documentation
72 # for a list of supported languages.
73 #
74 # This is also used if you do content translation via gettext catalogs.
75 # Usually you set "language" from the command line for these cases.
76 language = "python"
77
78 # List of patterns, relative to source directory, that match files and
79 # directories to ignore when looking for source files.
80 # This patterns also effect to html_static_path and html_extra_path
81 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
82
83 # The name of the Pygments (syntax highlighting) style to use.
84 pygments_style = "sphinx"
85
86 # If true, `todo` and `todoList` produce output, else they produce nothing.
87 todo_include_todos = False
88
89
90 # -- Options for HTML output ----------------------------------------------
91
92 # The theme to use for HTML and HTML Help pages. See the documentation for
93 # a list of builtin themes.
94 #
95 html_theme = "sphinx_rtd_theme"
96
97 # Theme options are theme-specific and customize the look and feel of a theme
98 # further. For a list of options available for each theme, see the
99 # documentation.
100 #
101 # html_theme_options = {}
102
103 # Add any paths that contain custom static files (such as style sheets) here,
104 # relative to this directory. They are copied after the builtin static files,
105 # so a file named "default.css" will overwrite the builtin "default.css".
106 html_static_path = ["_static"]
107
108 html_css_files = ["css/toggle.css", "css/dark.css"]
109
110 html_js_files = ["js/toggle.js"]
111
112 html_logo = "vyper-logo-transparent.svg"
113
114 # Custom sidebar templates, must be a dictionary that maps document names
115 # to template names.
116 #
117 # The default sidebars (for documents that don't match any pattern) are
118 # defined by theme itself. Builtin themes are using these templates by
119 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
120 # 'searchbox.html']``.
121 #
122 # html_sidebars = {}
123
124
125 # -- Options for HTMLHelp output ------------------------------------------
126
127 # Output file base name for HTML help builder.
128 htmlhelp_basename = "Vyperdoc"
129
130
131 # -- Options for LaTeX output ---------------------------------------------
132
133 latex_elements: dict = {
134 # The paper size ('letterpaper' or 'a4paper').
135 #
136 # 'papersize': 'letterpaper',
137 # The font size ('10pt', '11pt' or '12pt').
138 #
139 # 'pointsize': '10pt',
140 # Additional stuff for the LaTeX preamble.
141 #
142 # 'preamble': '',
143 # Latex figure (float) alignment
144 #
145 # 'figure_align': 'htbp',
146 }
147
148 # Grouping the document tree into LaTeX files. List of tuples
149 # (source start file, target name, title,
150 # author, documentclass [howto, manual, or own class]).
151 latex_documents = [
152 (
153 master_doc,
154 "Vyper.tex",
155 "Vyper Documentation",
156 "Vyper Team (originally created by Vitalik Buterin)",
157 "manual",
158 ),
159 ]
160
161
162 # -- Options for manual page output ---------------------------------------
163
164 # One entry per manual page. List of tuples
165 # (source start file, name, description, authors, manual section).
166 man_pages = [(master_doc, "vyper", "Vyper Documentation", [author], 1)]
167
168
169 # -- Options for Texinfo output -------------------------------------------
170
171 # Grouping the document tree into Texinfo files. List of tuples
172 # (source start file, target name, title, author,
173 # dir menu entry, description, category)
174 texinfo_documents = [
175 (
176 master_doc,
177 "Vyper",
178 "Vyper Documentation",
179 author,
180 "Vyper",
181 "One line description of project.",
182 "Miscellaneous",
183 ),
184 ]
185
186 source_parsers = {
187 ".md": CommonMarkParser,
188 }
189
190 intersphinx_mapping = {
191 "brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
192 "pytest": ("https://docs.pytest.org/en/latest/", None),
193 "python": ("https://docs.python.org/3.8/", None),
194 }
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -56,7 +56,7 @@
# General information about the project.
project = "Vyper"
-copyright = "2017-2020 CC-BY-4.0 Vyper Team"
+copyright = "2017-2024 CC-BY-4.0 Vyper Team"
author = "Vyper Team (originally created by Vitalik Buterin)"
# The version info for the project you're documenting, acts as replacement for
@@ -190,5 +190,5 @@
intersphinx_mapping = {
"brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
"pytest": ("https://docs.pytest.org/en/latest/", None),
- "python": ("https://docs.python.org/3.8/", None),
+ "python": ("https://docs.python.org/3.10/", None),
}
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -56,7 +56,7 @@\n \n # General information about the project.\n project = \"Vyper\"\n-copyright = \"2017-2020 CC-BY-4.0 Vyper Team\"\n+copyright = \"2017-2024 CC-BY-4.0 Vyper Team\"\n author = \"Vyper Team (originally created by Vitalik Buterin)\"\n \n # The version info for the project you're documenting, acts as replacement for\n@@ -190,5 +190,5 @@\n intersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n- \"python\": (\"https://docs.python.org/3.8/\", None),\n+ \"python\": (\"https://docs.python.org/3.10/\", None),\n }\n", "issue": "update sphinx version\nwe are on https://github.com/vyperlang/vyper/blob/9cf66c9dd12c9a020c6945c100cc1266be262ebe/requirements-docs.txt#L1 but the latest version is v7.2.6 - https://pypi.org/project/Sphinx/\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Vyper documentation build configuration file, created by\n# sphinx-quickstart on Wed Jul 26 11:18:29 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nfrom recommonmark.parser import CommonMarkParser\n\n# TO DO - Create and Implement Vyper Lexer\n# def setup(sphinx):\n# sys.path.insert(0, os.path.abspath('./utils'))\n# from SolidityLexer import SolidityLexer\n# sphinx.add_lexer('Python', SolidityLexer())\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"toctree\"\n\n# General information about the project.\nproject = \"Vyper\"\ncopyright = \"2017-2020 CC-BY-4.0 Vyper Team\"\nauthor = \"Vyper Team (originally created by Vitalik Buterin)\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = [\"css/toggle.css\", \"css/dark.css\"]\n\nhtml_js_files = [\"js/toggle.js\"]\n\nhtml_logo = \"vyper-logo-transparent.svg\"\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Vyperdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements: dict = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Vyper.tex\",\n \"Vyper Documentation\",\n \"Vyper Team (originally created by Vitalik Buterin)\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"vyper\", \"Vyper Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Vyper\",\n \"Vyper Documentation\",\n author,\n \"Vyper\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\nsource_parsers = {\n \".md\": CommonMarkParser,\n}\n\nintersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n \"python\": (\"https://docs.python.org/3.8/\", None),\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Vyper documentation build configuration file, created by\n# sphinx-quickstart on Wed Jul 26 11:18:29 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nfrom recommonmark.parser import CommonMarkParser\n\n# TO DO - Create and Implement Vyper Lexer\n# def setup(sphinx):\n# sys.path.insert(0, os.path.abspath('./utils'))\n# from SolidityLexer import SolidityLexer\n# sphinx.add_lexer('Python', SolidityLexer())\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"toctree\"\n\n# General information about the project.\nproject = \"Vyper\"\ncopyright = \"2017-2024 CC-BY-4.0 Vyper Team\"\nauthor = \"Vyper Team (originally created by Vitalik Buterin)\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = [\"css/toggle.css\", \"css/dark.css\"]\n\nhtml_js_files = [\"js/toggle.js\"]\n\nhtml_logo = \"vyper-logo-transparent.svg\"\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Vyperdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements: dict = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Vyper.tex\",\n \"Vyper Documentation\",\n \"Vyper Team (originally created by Vitalik Buterin)\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"vyper\", \"Vyper Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Vyper\",\n \"Vyper Documentation\",\n author,\n \"Vyper\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\nsource_parsers = {\n \".md\": CommonMarkParser,\n}\n\nintersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n \"python\": (\"https://docs.python.org/3.10/\", None),\n}\n", "path": "docs/conf.py"}]} | 2,268 | 226 |
gh_patches_debug_33382 | rasdani/github-patches | git_diff | conan-io__conan-center-index-438 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] sqlite3/all: Discussion of default options
Having a look at the options of the sqlite3 recipe, you can see that most of them default to false. However, there are some recipes, like qt from bincrafters, that already require this package with specific options enabled:
https://github.com/bincrafters/conan-qt/blob/594e40f036adcbf22d23dc2da228f389d45a76dc/conanfile.py#L256
Moreover, the sqlite3 recipe is provided with a custom *CMakeLists.txt* that defines the CMake options, so we could say there are *no defaults* for these options recommended by the library authors/maintainers.
Additionally, these options are enabled by default in the vcpkg repo:
```
-DSQLITE_ENABLE_RTREE
-DSQLITE_ENABLE_UNLOCK_NOTIFY
-DSQLITE_ENABLE_COLUMN_METADATA
```
This issue is open for discussion regarding the options of this package and its default values.
cc/ @ericLemanissier @SSE4 @uilianries
--- END ISSUE ---
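For reference, a minimal, hypothetical sketch of how vcpkg-aligned defaults might look as recipe option tables is shown below; the `enable_unlock_notify` option does not exist in the current recipe, and the `True` defaults are assumptions derived from the vcpkg flags quoted in the issue rather than agreed-upon values.
```python
# Hypothetical option tables mirroring the vcpkg flags quoted in the issue
# (SQLITE_ENABLE_RTREE, SQLITE_ENABLE_UNLOCK_NOTIFY, SQLITE_ENABLE_COLUMN_METADATA).
options = {
    "shared": [True, False],
    "fPIC": [True, False],
    "threadsafe": [0, 1, 2],
    "enable_column_metadata": [True, False],
    "enable_rtree": [True, False],
    "enable_unlock_notify": [True, False],  # assumed new option, not in the current recipe
    "omit_load_extension": [True, False],
}
default_options = {
    "shared": False,
    "fPIC": True,
    "threadsafe": 1,
    "enable_column_metadata": True,  # vcpkg passes -DSQLITE_ENABLE_COLUMN_METADATA
    "enable_rtree": True,            # vcpkg passes -DSQLITE_ENABLE_RTREE
    "enable_unlock_notify": True,    # vcpkg passes -DSQLITE_ENABLE_UNLOCK_NOTIFY
    "omit_load_extension": False,
}

# Options that would be enabled by default under this assumption:
print([name for name, value in default_options.items() if value is True])
```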
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/sqlite3/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, CMake, tools
3
4
5 class ConanSqlite3(ConanFile):
6 name = "sqlite3"
7 description = "Self-contained, serverless, in-process SQL database engine."
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://www.sqlite.org"
10 topics = ("conan", "sqlite", "database", "sql", "serverless")
11 license = "Public Domain"
12 generators = "cmake"
13 settings = "os", "compiler", "arch", "build_type"
14 exports_sources = ["CMakeLists.txt"]
15 options = {"shared": [True, False],
16 "fPIC": [True, False],
17 "threadsafe": [0, 1, 2],
18 "enable_column_metadata": [True, False],
19 "enable_explain_comments": [True, False],
20 "enable_fts3": [True, False],
21 "enable_fts4": [True, False],
22 "enable_fts5": [True, False],
23 "enable_json1": [True, False],
24 "enable_rtree": [True, False],
25 "omit_load_extension": [True, False]
26 }
27 default_options = {"shared": False,
28 "fPIC": True,
29 "threadsafe": 1,
30 "enable_column_metadata": False,
31 "enable_explain_comments": False,
32 "enable_fts3": False,
33 "enable_fts4": False,
34 "enable_fts5": False,
35 "enable_json1": False,
36 "enable_rtree": False,
37 "omit_load_extension": False
38 }
39 _source_subfolder = "source_subfolder"
40
41 def source(self):
42 tools.get(**self.conan_data["sources"][self.version])
43 url = self.conan_data["sources"][self.version]["url"]
44 archive_name = os.path.basename(url)
45 archive_name = os.path.splitext(archive_name)[0]
46 os.rename(archive_name, self._source_subfolder)
47
48 def config_options(self):
49 if self.settings.os == "Windows":
50 del self.options.fPIC
51
52 def configure(self):
53 del self.settings.compiler.libcxx
54 del self.settings.compiler.cppstd
55
56 def _configure_cmake(self):
57 cmake = CMake(self)
58 cmake.definitions["THREADSAFE"] = self.options.threadsafe
59 cmake.definitions["ENABLE_COLUMN_METADATA"] = self.options.enable_column_metadata
60 cmake.definitions["ENABLE_EXPLAIN_COMMENTS"] = self.options.enable_explain_comments
61 cmake.definitions["ENABLE_FTS3"] = self.options.enable_fts3
62 cmake.definitions["ENABLE_FTS4"] = self.options.enable_fts4
63 cmake.definitions["ENABLE_FTS5"] = self.options.enable_fts5
64 cmake.definitions["ENABLE_JSON1"] = self.options.enable_json1
65 cmake.definitions["ENABLE_RTREE"] = self.options.enable_rtree
66 cmake.definitions["OMIT_LOAD_EXTENSION"] = self.options.omit_load_extension
67 cmake.definitions["HAVE_FDATASYNC"] = True
68 cmake.definitions["HAVE_GMTIME_R"] = True
69 cmake.definitions["HAVE_LOCALTIME_R"] = True
70 cmake.definitions["HAVE_POSIX_FALLOCATE"] = True
71 cmake.definitions["HAVE_STRERROR_R"] = True
72 cmake.definitions["HAVE_USLEEP"] = True
73 if self.settings.os == "Windows":
74 cmake.definitions["HAVE_LOCALTIME_R"] = False
75 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
76 if tools.is_apple_os(self.settings.os):
77 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
78 if self.settings.os == "Android":
79 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
80 cmake.configure()
81 return cmake
82
83 def build(self):
84 cmake = self._configure_cmake()
85 cmake.build()
86
87 def package(self):
88 header = tools.load(os.path.join(self._source_subfolder, "sqlite3.h"))
89 license_content = header[3:header.find("***", 1)]
90 tools.save("LICENSE", license_content)
91
92 self.copy("LICENSE", dst="licenses")
93
94 cmake = self._configure_cmake()
95 cmake.install()
96
97 def package_info(self):
98 self.cpp_info.libs = tools.collect_libs(self)
99 if self.settings.os == "Linux":
100 if self.options.threadsafe:
101 self.cpp_info.libs.append("pthread")
102 if self.options.omit_load_extension == "False":
103 self.cpp_info.libs.append("dl")
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/sqlite3/all/conanfile.py b/recipes/sqlite3/all/conanfile.py
--- a/recipes/sqlite3/all/conanfile.py
+++ b/recipes/sqlite3/all/conanfile.py
@@ -22,19 +22,21 @@
"enable_fts5": [True, False],
"enable_json1": [True, False],
"enable_rtree": [True, False],
- "omit_load_extension": [True, False]
+ "omit_load_extension": [True, False],
+ "enable_unlock_notify": [True, False]
}
default_options = {"shared": False,
"fPIC": True,
"threadsafe": 1,
- "enable_column_metadata": False,
+ "enable_column_metadata": True,
"enable_explain_comments": False,
"enable_fts3": False,
"enable_fts4": False,
"enable_fts5": False,
"enable_json1": False,
- "enable_rtree": False,
- "omit_load_extension": False
+ "enable_rtree": True,
+ "omit_load_extension": False,
+ "enable_unlock_notify": True
}
_source_subfolder = "source_subfolder"
@@ -64,6 +66,7 @@
cmake.definitions["ENABLE_JSON1"] = self.options.enable_json1
cmake.definitions["ENABLE_RTREE"] = self.options.enable_rtree
cmake.definitions["OMIT_LOAD_EXTENSION"] = self.options.omit_load_extension
+ cmake.definitions["SQLITE_ENABLE_UNLOCK_NOTIFY"] = self.options.enable_unlock_notify
cmake.definitions["HAVE_FDATASYNC"] = True
cmake.definitions["HAVE_GMTIME_R"] = True
cmake.definitions["HAVE_LOCALTIME_R"] = True
| {"golden_diff": "diff --git a/recipes/sqlite3/all/conanfile.py b/recipes/sqlite3/all/conanfile.py\n--- a/recipes/sqlite3/all/conanfile.py\n+++ b/recipes/sqlite3/all/conanfile.py\n@@ -22,19 +22,21 @@\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n- \"omit_load_extension\": [True, False]\n+ \"omit_load_extension\": [True, False],\n+ \"enable_unlock_notify\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n- \"enable_column_metadata\": False,\n+ \"enable_column_metadata\": True,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n- \"enable_rtree\": False,\n- \"omit_load_extension\": False\n+ \"enable_rtree\": True,\n+ \"omit_load_extension\": False,\n+ \"enable_unlock_notify\": True\n }\n _source_subfolder = \"source_subfolder\"\n \n@@ -64,6 +66,7 @@\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n+ cmake.definitions[\"SQLITE_ENABLE_UNLOCK_NOTIFY\"] = self.options.enable_unlock_notify \n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n", "issue": "[package] sqlite3/all: Discussion of default options\nHaving a look at the options of the sqlite3 recipe you see most of them defaulted to false. However, there are some recipes like qt from bincrafters that already require this package with specific option enabled:\r\n\r\nhttps://github.com/bincrafters/conan-qt/blob/594e40f036adcbf22d23dc2da228f389d45a76dc/conanfile.py#L256\r\n\r\nMoreover, the sqlite3 is provided with a custom *CMakeLists.txt* with the CMake options, so we could say there are *no defaults* for these options recommended by the library authors/maintainers.\r\n\r\nAdditionally, these options are enabled by default in vcpkg repo:\r\n\r\n```\r\n-DSQLITE_ENABLE_RTREE\r\n-DSQLITE_ENABLE_UNLOCK_NOTIFY\r\n-DSQLITE_ENABLE_COLUMN_METADATA\r\n```\r\n\r\nThis issue is open for discussion regarding the options of this package and its default values.\r\n\r\ncc/ @ericLemanissier @SSE4 @uilianries \n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanSqlite3(ConanFile):\n name = \"sqlite3\"\n description = \"Self-contained, serverless, in-process SQL database engine.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.sqlite.org\"\n topics = (\"conan\", \"sqlite\", \"database\", \"sql\", \"serverless\")\n license = \"Public Domain\"\n generators = \"cmake\"\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n exports_sources = [\"CMakeLists.txt\"]\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [0, 1, 2],\n \"enable_column_metadata\": [True, False],\n \"enable_explain_comments\": [True, False],\n \"enable_fts3\": [True, False],\n \"enable_fts4\": [True, False],\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n \"omit_load_extension\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n \"enable_column_metadata\": False,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n 
\"enable_rtree\": False,\n \"omit_load_extension\": False\n }\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n url = self.conan_data[\"sources\"][self.version][\"url\"]\n archive_name = os.path.basename(url)\n archive_name = os.path.splitext(archive_name)[0]\n os.rename(archive_name, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"THREADSAFE\"] = self.options.threadsafe\n cmake.definitions[\"ENABLE_COLUMN_METADATA\"] = self.options.enable_column_metadata\n cmake.definitions[\"ENABLE_EXPLAIN_COMMENTS\"] = self.options.enable_explain_comments\n cmake.definitions[\"ENABLE_FTS3\"] = self.options.enable_fts3\n cmake.definitions[\"ENABLE_FTS4\"] = self.options.enable_fts4\n cmake.definitions[\"ENABLE_FTS5\"] = self.options.enable_fts5\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = True\n cmake.definitions[\"HAVE_STRERROR_R\"] = True\n cmake.definitions[\"HAVE_USLEEP\"] = True\n if self.settings.os == \"Windows\":\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = False\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if tools.is_apple_os(self.settings.os):\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if self.settings.os == \"Android\":\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n cmake.configure()\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n header = tools.load(os.path.join(self._source_subfolder, \"sqlite3.h\"))\n license_content = header[3:header.find(\"***\", 1)]\n tools.save(\"LICENSE\", license_content)\n\n self.copy(\"LICENSE\", dst=\"licenses\")\n\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n if self.options.threadsafe:\n self.cpp_info.libs.append(\"pthread\")\n if self.options.omit_load_extension == \"False\":\n self.cpp_info.libs.append(\"dl\")\n", "path": "recipes/sqlite3/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanSqlite3(ConanFile):\n name = \"sqlite3\"\n description = \"Self-contained, serverless, in-process SQL database engine.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.sqlite.org\"\n topics = (\"conan\", \"sqlite\", \"database\", \"sql\", \"serverless\")\n license = \"Public Domain\"\n generators = \"cmake\"\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n exports_sources = [\"CMakeLists.txt\"]\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [0, 1, 2],\n \"enable_column_metadata\": [True, False],\n \"enable_explain_comments\": [True, False],\n \"enable_fts3\": [True, False],\n \"enable_fts4\": [True, False],\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n \"omit_load_extension\": [True, 
False],\n \"enable_unlock_notify\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n \"enable_column_metadata\": True,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n \"enable_rtree\": True,\n \"omit_load_extension\": False,\n \"enable_unlock_notify\": True\n }\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n url = self.conan_data[\"sources\"][self.version][\"url\"]\n archive_name = os.path.basename(url)\n archive_name = os.path.splitext(archive_name)[0]\n os.rename(archive_name, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"THREADSAFE\"] = self.options.threadsafe\n cmake.definitions[\"ENABLE_COLUMN_METADATA\"] = self.options.enable_column_metadata\n cmake.definitions[\"ENABLE_EXPLAIN_COMMENTS\"] = self.options.enable_explain_comments\n cmake.definitions[\"ENABLE_FTS3\"] = self.options.enable_fts3\n cmake.definitions[\"ENABLE_FTS4\"] = self.options.enable_fts4\n cmake.definitions[\"ENABLE_FTS5\"] = self.options.enable_fts5\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n cmake.definitions[\"SQLITE_ENABLE_UNLOCK_NOTIFY\"] = self.options.enable_unlock_notify \n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = True\n cmake.definitions[\"HAVE_STRERROR_R\"] = True\n cmake.definitions[\"HAVE_USLEEP\"] = True\n if self.settings.os == \"Windows\":\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = False\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if tools.is_apple_os(self.settings.os):\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if self.settings.os == \"Android\":\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n cmake.configure()\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n header = tools.load(os.path.join(self._source_subfolder, \"sqlite3.h\"))\n license_content = header[3:header.find(\"***\", 1)]\n tools.save(\"LICENSE\", license_content)\n\n self.copy(\"LICENSE\", dst=\"licenses\")\n\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n if self.options.threadsafe:\n self.cpp_info.libs.append(\"pthread\")\n if self.options.omit_load_extension == \"False\":\n self.cpp_info.libs.append(\"dl\")\n", "path": "recipes/sqlite3/all/conanfile.py"}]} | 1,689 | 405 |
gh_patches_debug_44434 | rasdani/github-patches | git_diff | fedora-infra__bodhi-3692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make zchunk updateinfo a per-repo option
When I added support for zchunked updateinfo.xml in #2904, I missed the fact that my patch was creating zchunked updateinfo.xml for _every_ repository. I don't think we want to be creating zchunk metadata for EPEL and Fedora 29 (and, apparently it's breaking some third-party mirroring tools, see https://bugzilla.redhat.com/show_bug.cgi?id=1721359), so we need to add a way of making the zchunked updateinfo.xml a per-repo option.
I'm happy to do the work on this, but I'm not sure where and how it should be implemented.
--- END ISSUE ---
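One possible shape for the per-repo switch discussed above, sketched under the assumption that releases whose clients lack zchunk support (e.g. EPEL) can be recognized by their `id_prefix`; the `LEGACY_REPOS` list and the helper function are illustrative names, not existing Bodhi API.
```python
# Illustrative sketch: decide per release whether zchunked updateinfo should be generated.
LEGACY_REPOS = ["FEDORA-EPEL"]  # assumed set of id_prefixes whose clients cannot read zchunk


def wants_zchunk(release_id_prefix: str) -> bool:
    """Return True when zchunked updateinfo.xml should be produced for this release."""
    return release_id_prefix not in LEGACY_REPOS


# Example: Fedora repos keep zchunk metadata, EPEL repos skip it.
for prefix in ("FEDORA", "FEDORA-EPEL"):
    print(prefix, wants_zchunk(prefix))
```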
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/metadata.py`
Content:
```
1 # Copyright 2007-2019 Red Hat, Inc. and others.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation; either version 2 of the License, or
8 # (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License along
16 # with this program; if not, write to the Free Software Foundation, Inc.,
17 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
18 """Create metadata files when composing repositories."""
19 from datetime import datetime
20 import logging
21 import os
22 import shelve
23 import shutil
24 import tempfile
25
26 import createrepo_c as cr
27
28 from bodhi.server import util
29 from bodhi.server.buildsys import get_session
30 from bodhi.server.config import config
31 from bodhi.server.models import Build, UpdateStatus, UpdateRequest, UpdateSuggestion
32
33
34 __version__ = '2.0'
35 log = logging.getLogger(__name__)
36
37
38 def insert_in_repo(comp_type, repodata, filetype, extension, source):
39 """
40 Inject a file into the repodata with the help of createrepo_c.
41
42 Args:
43 comp_type (int): createrepo_c compression type indication.
44 repodata (str): The path to the repo where the metadata will be inserted.
45 filetype (str): What type of metadata will be inserted by createrepo_c.
46 This does allow any string to be inserted (custom types). There are some
47 types which are used with dnf repos as primary, updateinfo, comps, filelist etc.
48 extension (str): The file extension (xml, sqlite).
49 source (str): A file path. File holds the dump of metadata until
50 copied to the repodata folder.
51 """
52 log.info('Inserting %s.%s into %s', filetype, extension, repodata)
53 target_fname = os.path.join(repodata, '%s.%s' % (filetype, extension))
54 shutil.copyfile(source, target_fname)
55 repomd_xml = os.path.join(repodata, 'repomd.xml')
56 repomd = cr.Repomd(repomd_xml)
57 add_list = [(filetype, comp_type)]
58 if hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:
59 add_list.append((filetype + "_zck", cr.ZCK_COMPRESSION))
60 for (ft, ct) in add_list:
61 # create a new record for our repomd.xml
62 rec = cr.RepomdRecord(ft, target_fname)
63 # compress our metadata file with the comp_type
64 rec_comp = rec.compress_and_fill(cr.SHA256, ct)
65 # add hash to the compressed metadata file
66 rec_comp.rename_file()
67 # set type of metadata
68 rec_comp.type = ft
69 # insert metadata about our metadata in repomd.xml
70 repomd.set_record(rec_comp)
71 with open(repomd_xml, 'w') as repomd_file:
72 repomd_file.write(repomd.xml_dump())
73 os.unlink(target_fname)
74
75
76 def modifyrepo(comp_type, compose_path, filetype, extension, source):
77 """
78 Inject a file into the repodata for each architecture with the help of createrepo_c.
79
80 Args:
81 compose_path (str): The path to the compose where the metadata will be inserted.
82 filetype (str): What type of metadata will be inserted by createrepo_c.
83 This does allow any string to be inserted (custom types). There are some
84 types which are used with dnf repos as primary, updateinfo, comps, filelist etc.
85 extension (str): The file extension (xml, sqlite).
86 source (str): A file path. File holds the dump of metadata until
87 copied to the repodata folder.
88 """
89 repo_path = os.path.join(compose_path, 'compose', 'Everything')
90 for arch in os.listdir(repo_path):
91 if arch == 'source':
92 repodata = os.path.join(repo_path, arch, 'tree', 'repodata')
93 else:
94 repodata = os.path.join(repo_path, arch, 'os', 'repodata')
95 insert_in_repo(comp_type, repodata, filetype, extension, source)
96
97
98 class UpdateInfoMetadata(object):
99 """
100 This class represents the updateinfo.xml yum metadata.
101
102 It is generated during push time by the bodhi composer based on koji tags
103 and is injected into the yum repodata using the `modifyrepo_c` tool,
104 which is included in the `createrepo_c` package.
105 """
106
107 def __init__(self, release, request, db, composedir, close_shelf=True):
108 """
109 Initialize the UpdateInfoMetadata object.
110
111 Args:
112 release (bodhi.server.models.Release): The Release that is being composed.
113 request (bodhi.server.models.UpdateRequest): The Request that is being composed.
114 db (): A database session to be used for queries.
115 composedir (str): A path to the composedir.
116 close_shelf (bool): Whether to close the shelve, which is used to cache updateinfo
117 between composes.
118 """
119 self.request = request
120 if request is UpdateRequest.stable:
121 self.tag = release.stable_tag
122 else:
123 self.tag = release.testing_tag
124
125 self.db = db
126 self.updates = set()
127 self.builds = {}
128 self._from = config.get('bodhi_email')
129 if config.get('cache_dir'):
130 self.shelf = shelve.open(os.path.join(config.get('cache_dir'), '%s.shelve' % self.tag))
131 else:
132 # If we have no cache dir, let's at least cache in-memory.
133 self.shelf = {}
134 close_shelf = False
135 self._fetch_updates()
136
137 self.uinfo = cr.UpdateInfo()
138
139 self.comp_type = cr.XZ
140
141 if release.id_prefix == 'FEDORA-EPEL':
142 # FIXME: I'm not sure which versions of RHEL support xz metadata
143 # compression, so use the lowest common denominator for now.
144 self.comp_type = cr.BZ2
145
146 self.uinfo = cr.UpdateInfo()
147 for update in self.updates:
148 self.add_update(update)
149
150 if close_shelf:
151 self.shelf.close()
152
153 def _fetch_updates(self):
154 """Based on our given koji tag, populate a list of Update objects."""
155 log.debug("Fetching builds tagged with '%s'" % self.tag)
156 kojiBuilds = get_session().listTagged(self.tag, latest=True)
157 nonexistent = []
158 log.debug("%d builds found" % len(kojiBuilds))
159 for build in kojiBuilds:
160 self.builds[build['nvr']] = build
161 build_obj = self.db.query(Build).filter_by(nvr=str(build['nvr'])).first()
162 if build_obj:
163 if build_obj.update:
164 self.updates.add(build_obj.update)
165 else:
166 log.warning('%s does not have a corresponding update' % build['nvr'])
167 else:
168 nonexistent.append(build['nvr'])
169 if nonexistent:
170 log.warning("Couldn't find the following koji builds tagged as "
171 "%s in bodhi: %s" % (self.tag, nonexistent))
172
173 def get_rpms(self, koji, nvr):
174 """
175 Retrieve the given RPM nvr from the cache if available, or from Koji if not available.
176
177 Args:
178 koji (koji.ClientSession): An initialized Koji client.
179 nvr (str): The nvr for which you wish to retrieve Koji data.
180 Returns:
181 list: A list of dictionaries describing all the subpackages that are part of the given
182 nvr.
183 """
184 if str(nvr) in self.shelf:
185 return self.shelf[str(nvr)]
186
187 if nvr in self.builds:
188 buildid = self.builds[nvr]['id']
189 else:
190 buildid = koji.getBuild(nvr)['id']
191
192 rpms = koji.listBuildRPMs(buildid)
193 self.shelf[str(nvr)] = rpms
194 return rpms
195
196 def add_update(self, update):
197 """
198 Generate the extended metadata for a given update, adding it to self.uinfo.
199
200 Args:
201 update (bodhi.server.models.Update): The Update to be added to self.uinfo.
202 """
203 rec = cr.UpdateRecord()
204 rec.version = __version__
205 rec.fromstr = config.get('bodhi_email')
206 rec.status = update.status.value
207 rec.type = update.type.value
208 rec.id = update.alias.encode('utf-8')
209 rec.title = update.title.encode('utf-8')
210 rec.severity = util.severity_updateinfo_str(update.severity.value)
211 rec.summary = ('%s %s update' % (update.get_title(),
212 update.type.value)).encode('utf-8')
213 rec.description = update.notes.encode('utf-8')
214 rec.release = update.release.long_name.encode('utf-8')
215 rec.rights = config.get('updateinfo_rights')
216
217 if update.date_pushed:
218 rec.issued_date = update.date_pushed
219 else:
220 # Sometimes we only set the date_pushed after it's pushed out, however,
221 # it seems that Satellite does not like update entries without issued_date.
222 # Since we know that we are pushing it now, and the next push will get the data
223 # correctly, let's just insert utcnow().
224 rec.issued_date = datetime.utcnow()
225 if update.date_modified:
226 rec.updated_date = update.date_modified
227 else:
228 rec.updated_date = datetime.utcnow()
229
230 col = cr.UpdateCollection()
231 col.name = update.release.long_name.encode('utf-8')
232 col.shortname = update.release.name.encode('utf-8')
233
234 koji = get_session()
235 for build in update.builds:
236 rpms = self.get_rpms(koji, build.nvr)
237 for rpm in rpms:
238 pkg = cr.UpdateCollectionPackage()
239 pkg.name = rpm['name']
240 pkg.version = rpm['version']
241 pkg.release = rpm['release']
242 if rpm['epoch'] is not None:
243 pkg.epoch = str(rpm['epoch'])
244 else:
245 pkg.epoch = '0'
246 pkg.arch = rpm['arch']
247
248 # TODO: how do we handle UpdateSuggestion.logout, etc?
249 pkg.reboot_suggested = update.suggest is UpdateSuggestion.reboot
250
251 filename = '%s.%s.rpm' % (rpm['nvr'], rpm['arch'])
252 pkg.filename = filename
253
254 # Build the URL
255 if rpm['arch'] == 'src':
256 arch = 'SRPMS'
257 elif rpm['arch'] in ('noarch', 'i686'):
258 arch = 'i386'
259 else:
260 arch = rpm['arch']
261
262 pkg.src = os.path.join(
263 config.get('file_url'),
264 update.status is UpdateStatus.testing and 'testing' or '',
265 str(update.release.version), arch, filename[0], filename)
266
267 col.append(pkg)
268
269 rec.append_collection(col)
270
271 # Create references for each bug
272 for bug in update.bugs:
273 ref = cr.UpdateReference()
274 ref.type = 'bugzilla'
275 ref.id = str(bug.bug_id).encode('utf-8')
276 ref.href = bug.url.encode('utf-8')
277 ref.title = bug.title.encode('utf-8') if bug.title else ''
278 rec.append_reference(ref)
279
280 self.uinfo.append(rec)
281
282 def insert_updateinfo(self, compose_path):
283 """
284 Add the updateinfo.xml file to the repository.
285
286 Args:
287 compose_path (str): The path to the compose where the metadata will be inserted.
288 """
289 fd, tmp_file_path = tempfile.mkstemp()
290 os.write(fd, self.uinfo.xml_dump().encode('utf-8'))
291 os.close(fd)
292 modifyrepo(self.comp_type, compose_path, 'updateinfo', 'xml', tmp_file_path)
293 os.unlink(tmp_file_path)
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bodhi/server/metadata.py b/bodhi/server/metadata.py
--- a/bodhi/server/metadata.py
+++ b/bodhi/server/metadata.py
@@ -35,7 +35,7 @@
log = logging.getLogger(__name__)
-def insert_in_repo(comp_type, repodata, filetype, extension, source):
+def insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk):
"""
Inject a file into the repodata with the help of createrepo_c.
@@ -48,6 +48,7 @@
extension (str): The file extension (xml, sqlite).
source (str): A file path. File holds the dump of metadata until
copied to the repodata folder.
+ zchunk (bool): Whether zchunk data is supported for clients of this repo.
"""
log.info('Inserting %s.%s into %s', filetype, extension, repodata)
target_fname = os.path.join(repodata, '%s.%s' % (filetype, extension))
@@ -55,7 +56,7 @@
repomd_xml = os.path.join(repodata, 'repomd.xml')
repomd = cr.Repomd(repomd_xml)
add_list = [(filetype, comp_type)]
- if hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:
+ if zchunk and hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:
add_list.append((filetype + "_zck", cr.ZCK_COMPRESSION))
for (ft, ct) in add_list:
# create a new record for our repomd.xml
@@ -73,7 +74,7 @@
os.unlink(target_fname)
-def modifyrepo(comp_type, compose_path, filetype, extension, source):
+def modifyrepo(comp_type, compose_path, filetype, extension, source, zchunk):
"""
Inject a file into the repodata for each architecture with the help of createrepo_c.
@@ -85,6 +86,7 @@
extension (str): The file extension (xml, sqlite).
source (str): A file path. File holds the dump of metadata until
copied to the repodata folder.
+ zchunk (bool): Whether zchunk data is supported for clients of this repo.
"""
repo_path = os.path.join(compose_path, 'compose', 'Everything')
for arch in os.listdir(repo_path):
@@ -92,7 +94,7 @@
repodata = os.path.join(repo_path, arch, 'tree', 'repodata')
else:
repodata = os.path.join(repo_path, arch, 'os', 'repodata')
- insert_in_repo(comp_type, repodata, filetype, extension, source)
+ insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk)
class UpdateInfoMetadata(object):
@@ -138,11 +140,22 @@
self.comp_type = cr.XZ
- if release.id_prefix == 'FEDORA-EPEL':
+ # Some repos such as FEDORA-EPEL, are primarily targeted at
+ # distributions that use the yum client, which does not support zchunk metadata
+ self.legacy_repos = ['FEDORA-EPEL']
+ self.zchunk = True
+
+ if release.id_prefix in self.legacy_repos:
# FIXME: I'm not sure which versions of RHEL support xz metadata
# compression, so use the lowest common denominator for now.
self.comp_type = cr.BZ2
+ log.warning(
+ 'Zchunk data is disabled for repo {release.id_prefix} until it moves to a client'
+ ' with Zchunk support'
+ )
+ self.zchunk = False
+
self.uinfo = cr.UpdateInfo()
for update in self.updates:
self.add_update(update)
@@ -289,5 +302,10 @@
fd, tmp_file_path = tempfile.mkstemp()
os.write(fd, self.uinfo.xml_dump().encode('utf-8'))
os.close(fd)
- modifyrepo(self.comp_type, compose_path, 'updateinfo', 'xml', tmp_file_path)
+ modifyrepo(self.comp_type,
+ compose_path,
+ 'updateinfo',
+ 'xml',
+ tmp_file_path,
+ self.zchunk)
os.unlink(tmp_file_path)
| {"golden_diff": "diff --git a/bodhi/server/metadata.py b/bodhi/server/metadata.py\n--- a/bodhi/server/metadata.py\n+++ b/bodhi/server/metadata.py\n@@ -35,7 +35,7 @@\n log = logging.getLogger(__name__)\n \n \n-def insert_in_repo(comp_type, repodata, filetype, extension, source):\n+def insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk):\n \"\"\"\n Inject a file into the repodata with the help of createrepo_c.\n \n@@ -48,6 +48,7 @@\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. File holds the dump of metadata until\n copied to the repodata folder.\n+ zchunk (bool): Whether zchunk data is supported for clients of this repo.\n \"\"\"\n log.info('Inserting %s.%s into %s', filetype, extension, repodata)\n target_fname = os.path.join(repodata, '%s.%s' % (filetype, extension))\n@@ -55,7 +56,7 @@\n repomd_xml = os.path.join(repodata, 'repomd.xml')\n repomd = cr.Repomd(repomd_xml)\n add_list = [(filetype, comp_type)]\n- if hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:\n+ if zchunk and hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:\n add_list.append((filetype + \"_zck\", cr.ZCK_COMPRESSION))\n for (ft, ct) in add_list:\n # create a new record for our repomd.xml\n@@ -73,7 +74,7 @@\n os.unlink(target_fname)\n \n \n-def modifyrepo(comp_type, compose_path, filetype, extension, source):\n+def modifyrepo(comp_type, compose_path, filetype, extension, source, zchunk):\n \"\"\"\n Inject a file into the repodata for each architecture with the help of createrepo_c.\n \n@@ -85,6 +86,7 @@\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. File holds the dump of metadata until\n copied to the repodata folder.\n+ zchunk (bool): Whether zchunk data is supported for clients of this repo.\n \"\"\"\n repo_path = os.path.join(compose_path, 'compose', 'Everything')\n for arch in os.listdir(repo_path):\n@@ -92,7 +94,7 @@\n repodata = os.path.join(repo_path, arch, 'tree', 'repodata')\n else:\n repodata = os.path.join(repo_path, arch, 'os', 'repodata')\n- insert_in_repo(comp_type, repodata, filetype, extension, source)\n+ insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk)\n \n \n class UpdateInfoMetadata(object):\n@@ -138,11 +140,22 @@\n \n self.comp_type = cr.XZ\n \n- if release.id_prefix == 'FEDORA-EPEL':\n+ # Some repos such as FEDORA-EPEL, are primarily targeted at\n+ # distributions that use the yum client, which does not support zchunk metadata\n+ self.legacy_repos = ['FEDORA-EPEL']\n+ self.zchunk = True\n+\n+ if release.id_prefix in self.legacy_repos:\n # FIXME: I'm not sure which versions of RHEL support xz metadata\n # compression, so use the lowest common denominator for now.\n self.comp_type = cr.BZ2\n \n+ log.warning(\n+ 'Zchunk data is disabled for repo {release.id_prefix} until it moves to a client'\n+ ' with Zchunk support'\n+ )\n+ self.zchunk = False\n+\n self.uinfo = cr.UpdateInfo()\n for update in self.updates:\n self.add_update(update)\n@@ -289,5 +302,10 @@\n fd, tmp_file_path = tempfile.mkstemp()\n os.write(fd, self.uinfo.xml_dump().encode('utf-8'))\n os.close(fd)\n- modifyrepo(self.comp_type, compose_path, 'updateinfo', 'xml', tmp_file_path)\n+ modifyrepo(self.comp_type,\n+ compose_path,\n+ 'updateinfo',\n+ 'xml',\n+ tmp_file_path,\n+ self.zchunk)\n os.unlink(tmp_file_path)\n", "issue": "Make zchunk updateinfo a per-repo option\nWhen I added support for zchunked updateinfo.xml in #2904, I missed the fact that my patch was creating zchunked 
updateinfo.xml for _every_ repository. I don't think we want to be creating zchunk metadata for EPEL and Fedora 29 (and, apparently it's breaking some third-party mirroring tools, see https://bugzilla.redhat.com/show_bug.cgi?id=1721359), so we need to add a way of making the zchunked updateinfo.xml a per-repo option.\r\n\r\nI'm happy to do the work on this, but I'm not sure where and how it should be implemented.\r\n\n", "before_files": [{"content": "# Copyright 2007-2019 Red Hat, Inc. and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\"\"\"Create metadata files when composing repositories.\"\"\"\nfrom datetime import datetime\nimport logging\nimport os\nimport shelve\nimport shutil\nimport tempfile\n\nimport createrepo_c as cr\n\nfrom bodhi.server import util\nfrom bodhi.server.buildsys import get_session\nfrom bodhi.server.config import config\nfrom bodhi.server.models import Build, UpdateStatus, UpdateRequest, UpdateSuggestion\n\n\n__version__ = '2.0'\nlog = logging.getLogger(__name__)\n\n\ndef insert_in_repo(comp_type, repodata, filetype, extension, source):\n \"\"\"\n Inject a file into the repodata with the help of createrepo_c.\n\n Args:\n comp_type (int): createrepo_c compression type indication.\n repodata (str): The path to the repo where the metadata will be inserted.\n filetype (str): What type of metadata will be inserted by createrepo_c.\n This does allow any string to be inserted (custom types). There are some\n types which are used with dnf repos as primary, updateinfo, comps, filelist etc.\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. 
File holds the dump of metadata until\n copied to the repodata folder.\n \"\"\"\n log.info('Inserting %s.%s into %s', filetype, extension, repodata)\n target_fname = os.path.join(repodata, '%s.%s' % (filetype, extension))\n shutil.copyfile(source, target_fname)\n repomd_xml = os.path.join(repodata, 'repomd.xml')\n repomd = cr.Repomd(repomd_xml)\n add_list = [(filetype, comp_type)]\n if hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:\n add_list.append((filetype + \"_zck\", cr.ZCK_COMPRESSION))\n for (ft, ct) in add_list:\n # create a new record for our repomd.xml\n rec = cr.RepomdRecord(ft, target_fname)\n # compress our metadata file with the comp_type\n rec_comp = rec.compress_and_fill(cr.SHA256, ct)\n # add hash to the compressed metadata file\n rec_comp.rename_file()\n # set type of metadata\n rec_comp.type = ft\n # insert metadata about our metadata in repomd.xml\n repomd.set_record(rec_comp)\n with open(repomd_xml, 'w') as repomd_file:\n repomd_file.write(repomd.xml_dump())\n os.unlink(target_fname)\n\n\ndef modifyrepo(comp_type, compose_path, filetype, extension, source):\n \"\"\"\n Inject a file into the repodata for each architecture with the help of createrepo_c.\n\n Args:\n compose_path (str): The path to the compose where the metadata will be inserted.\n filetype (str): What type of metadata will be inserted by createrepo_c.\n This does allow any string to be inserted (custom types). There are some\n types which are used with dnf repos as primary, updateinfo, comps, filelist etc.\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. File holds the dump of metadata until\n copied to the repodata folder.\n \"\"\"\n repo_path = os.path.join(compose_path, 'compose', 'Everything')\n for arch in os.listdir(repo_path):\n if arch == 'source':\n repodata = os.path.join(repo_path, arch, 'tree', 'repodata')\n else:\n repodata = os.path.join(repo_path, arch, 'os', 'repodata')\n insert_in_repo(comp_type, repodata, filetype, extension, source)\n\n\nclass UpdateInfoMetadata(object):\n \"\"\"\n This class represents the updateinfo.xml yum metadata.\n\n It is generated during push time by the bodhi composer based on koji tags\n and is injected into the yum repodata using the `modifyrepo_c` tool,\n which is included in the `createrepo_c` package.\n \"\"\"\n\n def __init__(self, release, request, db, composedir, close_shelf=True):\n \"\"\"\n Initialize the UpdateInfoMetadata object.\n\n Args:\n release (bodhi.server.models.Release): The Release that is being composed.\n request (bodhi.server.models.UpdateRequest): The Request that is being composed.\n db (): A database session to be used for queries.\n composedir (str): A path to the composedir.\n close_shelf (bool): Whether to close the shelve, which is used to cache updateinfo\n between composes.\n \"\"\"\n self.request = request\n if request is UpdateRequest.stable:\n self.tag = release.stable_tag\n else:\n self.tag = release.testing_tag\n\n self.db = db\n self.updates = set()\n self.builds = {}\n self._from = config.get('bodhi_email')\n if config.get('cache_dir'):\n self.shelf = shelve.open(os.path.join(config.get('cache_dir'), '%s.shelve' % self.tag))\n else:\n # If we have no cache dir, let's at least cache in-memory.\n self.shelf = {}\n close_shelf = False\n self._fetch_updates()\n\n self.uinfo = cr.UpdateInfo()\n\n self.comp_type = cr.XZ\n\n if release.id_prefix == 'FEDORA-EPEL':\n # FIXME: I'm not sure which versions of RHEL support xz metadata\n # compression, so use the lowest common 
denominator for now.\n self.comp_type = cr.BZ2\n\n self.uinfo = cr.UpdateInfo()\n for update in self.updates:\n self.add_update(update)\n\n if close_shelf:\n self.shelf.close()\n\n def _fetch_updates(self):\n \"\"\"Based on our given koji tag, populate a list of Update objects.\"\"\"\n log.debug(\"Fetching builds tagged with '%s'\" % self.tag)\n kojiBuilds = get_session().listTagged(self.tag, latest=True)\n nonexistent = []\n log.debug(\"%d builds found\" % len(kojiBuilds))\n for build in kojiBuilds:\n self.builds[build['nvr']] = build\n build_obj = self.db.query(Build).filter_by(nvr=str(build['nvr'])).first()\n if build_obj:\n if build_obj.update:\n self.updates.add(build_obj.update)\n else:\n log.warning('%s does not have a corresponding update' % build['nvr'])\n else:\n nonexistent.append(build['nvr'])\n if nonexistent:\n log.warning(\"Couldn't find the following koji builds tagged as \"\n \"%s in bodhi: %s\" % (self.tag, nonexistent))\n\n def get_rpms(self, koji, nvr):\n \"\"\"\n Retrieve the given RPM nvr from the cache if available, or from Koji if not available.\n\n Args:\n koji (koji.ClientSession): An initialized Koji client.\n nvr (str): The nvr for which you wish to retrieve Koji data.\n Returns:\n list: A list of dictionaries describing all the subpackages that are part of the given\n nvr.\n \"\"\"\n if str(nvr) in self.shelf:\n return self.shelf[str(nvr)]\n\n if nvr in self.builds:\n buildid = self.builds[nvr]['id']\n else:\n buildid = koji.getBuild(nvr)['id']\n\n rpms = koji.listBuildRPMs(buildid)\n self.shelf[str(nvr)] = rpms\n return rpms\n\n def add_update(self, update):\n \"\"\"\n Generate the extended metadata for a given update, adding it to self.uinfo.\n\n Args:\n update (bodhi.server.models.Update): The Update to be added to self.uinfo.\n \"\"\"\n rec = cr.UpdateRecord()\n rec.version = __version__\n rec.fromstr = config.get('bodhi_email')\n rec.status = update.status.value\n rec.type = update.type.value\n rec.id = update.alias.encode('utf-8')\n rec.title = update.title.encode('utf-8')\n rec.severity = util.severity_updateinfo_str(update.severity.value)\n rec.summary = ('%s %s update' % (update.get_title(),\n update.type.value)).encode('utf-8')\n rec.description = update.notes.encode('utf-8')\n rec.release = update.release.long_name.encode('utf-8')\n rec.rights = config.get('updateinfo_rights')\n\n if update.date_pushed:\n rec.issued_date = update.date_pushed\n else:\n # Sometimes we only set the date_pushed after it's pushed out, however,\n # it seems that Satellite does not like update entries without issued_date.\n # Since we know that we are pushing it now, and the next push will get the data\n # correctly, let's just insert utcnow().\n rec.issued_date = datetime.utcnow()\n if update.date_modified:\n rec.updated_date = update.date_modified\n else:\n rec.updated_date = datetime.utcnow()\n\n col = cr.UpdateCollection()\n col.name = update.release.long_name.encode('utf-8')\n col.shortname = update.release.name.encode('utf-8')\n\n koji = get_session()\n for build in update.builds:\n rpms = self.get_rpms(koji, build.nvr)\n for rpm in rpms:\n pkg = cr.UpdateCollectionPackage()\n pkg.name = rpm['name']\n pkg.version = rpm['version']\n pkg.release = rpm['release']\n if rpm['epoch'] is not None:\n pkg.epoch = str(rpm['epoch'])\n else:\n pkg.epoch = '0'\n pkg.arch = rpm['arch']\n\n # TODO: how do we handle UpdateSuggestion.logout, etc?\n pkg.reboot_suggested = update.suggest is UpdateSuggestion.reboot\n\n filename = '%s.%s.rpm' % (rpm['nvr'], rpm['arch'])\n pkg.filename = 
filename\n\n # Build the URL\n if rpm['arch'] == 'src':\n arch = 'SRPMS'\n elif rpm['arch'] in ('noarch', 'i686'):\n arch = 'i386'\n else:\n arch = rpm['arch']\n\n pkg.src = os.path.join(\n config.get('file_url'),\n update.status is UpdateStatus.testing and 'testing' or '',\n str(update.release.version), arch, filename[0], filename)\n\n col.append(pkg)\n\n rec.append_collection(col)\n\n # Create references for each bug\n for bug in update.bugs:\n ref = cr.UpdateReference()\n ref.type = 'bugzilla'\n ref.id = str(bug.bug_id).encode('utf-8')\n ref.href = bug.url.encode('utf-8')\n ref.title = bug.title.encode('utf-8') if bug.title else ''\n rec.append_reference(ref)\n\n self.uinfo.append(rec)\n\n def insert_updateinfo(self, compose_path):\n \"\"\"\n Add the updateinfo.xml file to the repository.\n\n Args:\n compose_path (str): The path to the compose where the metadata will be inserted.\n \"\"\"\n fd, tmp_file_path = tempfile.mkstemp()\n os.write(fd, self.uinfo.xml_dump().encode('utf-8'))\n os.close(fd)\n modifyrepo(self.comp_type, compose_path, 'updateinfo', 'xml', tmp_file_path)\n os.unlink(tmp_file_path)\n", "path": "bodhi/server/metadata.py"}], "after_files": [{"content": "# Copyright 2007-2019 Red Hat, Inc. and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\"\"\"Create metadata files when composing repositories.\"\"\"\nfrom datetime import datetime\nimport logging\nimport os\nimport shelve\nimport shutil\nimport tempfile\n\nimport createrepo_c as cr\n\nfrom bodhi.server import util\nfrom bodhi.server.buildsys import get_session\nfrom bodhi.server.config import config\nfrom bodhi.server.models import Build, UpdateStatus, UpdateRequest, UpdateSuggestion\n\n\n__version__ = '2.0'\nlog = logging.getLogger(__name__)\n\n\ndef insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk):\n \"\"\"\n Inject a file into the repodata with the help of createrepo_c.\n\n Args:\n comp_type (int): createrepo_c compression type indication.\n repodata (str): The path to the repo where the metadata will be inserted.\n filetype (str): What type of metadata will be inserted by createrepo_c.\n This does allow any string to be inserted (custom types). There are some\n types which are used with dnf repos as primary, updateinfo, comps, filelist etc.\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. 
File holds the dump of metadata until\n copied to the repodata folder.\n zchunk (bool): Whether zchunk data is supported for clients of this repo.\n \"\"\"\n log.info('Inserting %s.%s into %s', filetype, extension, repodata)\n target_fname = os.path.join(repodata, '%s.%s' % (filetype, extension))\n shutil.copyfile(source, target_fname)\n repomd_xml = os.path.join(repodata, 'repomd.xml')\n repomd = cr.Repomd(repomd_xml)\n add_list = [(filetype, comp_type)]\n if zchunk and hasattr(cr, 'ZCK_COMPRESSION') and comp_type != cr.ZCK_COMPRESSION:\n add_list.append((filetype + \"_zck\", cr.ZCK_COMPRESSION))\n for (ft, ct) in add_list:\n # create a new record for our repomd.xml\n rec = cr.RepomdRecord(ft, target_fname)\n # compress our metadata file with the comp_type\n rec_comp = rec.compress_and_fill(cr.SHA256, ct)\n # add hash to the compressed metadata file\n rec_comp.rename_file()\n # set type of metadata\n rec_comp.type = ft\n # insert metadata about our metadata in repomd.xml\n repomd.set_record(rec_comp)\n with open(repomd_xml, 'w') as repomd_file:\n repomd_file.write(repomd.xml_dump())\n os.unlink(target_fname)\n\n\ndef modifyrepo(comp_type, compose_path, filetype, extension, source, zchunk):\n \"\"\"\n Inject a file into the repodata for each architecture with the help of createrepo_c.\n\n Args:\n compose_path (str): The path to the compose where the metadata will be inserted.\n filetype (str): What type of metadata will be inserted by createrepo_c.\n This does allow any string to be inserted (custom types). There are some\n types which are used with dnf repos as primary, updateinfo, comps, filelist etc.\n extension (str): The file extension (xml, sqlite).\n source (str): A file path. File holds the dump of metadata until\n copied to the repodata folder.\n zchunk (bool): Whether zchunk data is supported for clients of this repo.\n \"\"\"\n repo_path = os.path.join(compose_path, 'compose', 'Everything')\n for arch in os.listdir(repo_path):\n if arch == 'source':\n repodata = os.path.join(repo_path, arch, 'tree', 'repodata')\n else:\n repodata = os.path.join(repo_path, arch, 'os', 'repodata')\n insert_in_repo(comp_type, repodata, filetype, extension, source, zchunk)\n\n\nclass UpdateInfoMetadata(object):\n \"\"\"\n This class represents the updateinfo.xml yum metadata.\n\n It is generated during push time by the bodhi composer based on koji tags\n and is injected into the yum repodata using the `modifyrepo_c` tool,\n which is included in the `createrepo_c` package.\n \"\"\"\n\n def __init__(self, release, request, db, composedir, close_shelf=True):\n \"\"\"\n Initialize the UpdateInfoMetadata object.\n\n Args:\n release (bodhi.server.models.Release): The Release that is being composed.\n request (bodhi.server.models.UpdateRequest): The Request that is being composed.\n db (): A database session to be used for queries.\n composedir (str): A path to the composedir.\n close_shelf (bool): Whether to close the shelve, which is used to cache updateinfo\n between composes.\n \"\"\"\n self.request = request\n if request is UpdateRequest.stable:\n self.tag = release.stable_tag\n else:\n self.tag = release.testing_tag\n\n self.db = db\n self.updates = set()\n self.builds = {}\n self._from = config.get('bodhi_email')\n if config.get('cache_dir'):\n self.shelf = shelve.open(os.path.join(config.get('cache_dir'), '%s.shelve' % self.tag))\n else:\n # If we have no cache dir, let's at least cache in-memory.\n self.shelf = {}\n close_shelf = False\n self._fetch_updates()\n\n self.uinfo = cr.UpdateInfo()\n\n 
self.comp_type = cr.XZ\n\n # Some repos such as FEDORA-EPEL, are primarily targeted at\n # distributions that use the yum client, which does not support zchunk metadata\n self.legacy_repos = ['FEDORA-EPEL']\n self.zchunk = True\n\n if release.id_prefix in self.legacy_repos:\n # FIXME: I'm not sure which versions of RHEL support xz metadata\n # compression, so use the lowest common denominator for now.\n self.comp_type = cr.BZ2\n\n log.warning(\n 'Zchunk data is disabled for repo {release.id_prefix} until it moves to a client'\n ' with Zchunk support'\n )\n self.zchunk = False\n\n self.uinfo = cr.UpdateInfo()\n for update in self.updates:\n self.add_update(update)\n\n if close_shelf:\n self.shelf.close()\n\n def _fetch_updates(self):\n \"\"\"Based on our given koji tag, populate a list of Update objects.\"\"\"\n log.debug(\"Fetching builds tagged with '%s'\" % self.tag)\n kojiBuilds = get_session().listTagged(self.tag, latest=True)\n nonexistent = []\n log.debug(\"%d builds found\" % len(kojiBuilds))\n for build in kojiBuilds:\n self.builds[build['nvr']] = build\n build_obj = self.db.query(Build).filter_by(nvr=str(build['nvr'])).first()\n if build_obj:\n if build_obj.update:\n self.updates.add(build_obj.update)\n else:\n log.warning('%s does not have a corresponding update' % build['nvr'])\n else:\n nonexistent.append(build['nvr'])\n if nonexistent:\n log.warning(\"Couldn't find the following koji builds tagged as \"\n \"%s in bodhi: %s\" % (self.tag, nonexistent))\n\n def get_rpms(self, koji, nvr):\n \"\"\"\n Retrieve the given RPM nvr from the cache if available, or from Koji if not available.\n\n Args:\n koji (koji.ClientSession): An initialized Koji client.\n nvr (str): The nvr for which you wish to retrieve Koji data.\n Returns:\n list: A list of dictionaries describing all the subpackages that are part of the given\n nvr.\n \"\"\"\n if str(nvr) in self.shelf:\n return self.shelf[str(nvr)]\n\n if nvr in self.builds:\n buildid = self.builds[nvr]['id']\n else:\n buildid = koji.getBuild(nvr)['id']\n\n rpms = koji.listBuildRPMs(buildid)\n self.shelf[str(nvr)] = rpms\n return rpms\n\n def add_update(self, update):\n \"\"\"\n Generate the extended metadata for a given update, adding it to self.uinfo.\n\n Args:\n update (bodhi.server.models.Update): The Update to be added to self.uinfo.\n \"\"\"\n rec = cr.UpdateRecord()\n rec.version = __version__\n rec.fromstr = config.get('bodhi_email')\n rec.status = update.status.value\n rec.type = update.type.value\n rec.id = update.alias.encode('utf-8')\n rec.title = update.title.encode('utf-8')\n rec.severity = util.severity_updateinfo_str(update.severity.value)\n rec.summary = ('%s %s update' % (update.get_title(),\n update.type.value)).encode('utf-8')\n rec.description = update.notes.encode('utf-8')\n rec.release = update.release.long_name.encode('utf-8')\n rec.rights = config.get('updateinfo_rights')\n\n if update.date_pushed:\n rec.issued_date = update.date_pushed\n else:\n # Sometimes we only set the date_pushed after it's pushed out, however,\n # it seems that Satellite does not like update entries without issued_date.\n # Since we know that we are pushing it now, and the next push will get the data\n # correctly, let's just insert utcnow().\n rec.issued_date = datetime.utcnow()\n if update.date_modified:\n rec.updated_date = update.date_modified\n else:\n rec.updated_date = datetime.utcnow()\n\n col = cr.UpdateCollection()\n col.name = update.release.long_name.encode('utf-8')\n col.shortname = update.release.name.encode('utf-8')\n\n koji = 
get_session()\n for build in update.builds:\n rpms = self.get_rpms(koji, build.nvr)\n for rpm in rpms:\n pkg = cr.UpdateCollectionPackage()\n pkg.name = rpm['name']\n pkg.version = rpm['version']\n pkg.release = rpm['release']\n if rpm['epoch'] is not None:\n pkg.epoch = str(rpm['epoch'])\n else:\n pkg.epoch = '0'\n pkg.arch = rpm['arch']\n\n # TODO: how do we handle UpdateSuggestion.logout, etc?\n pkg.reboot_suggested = update.suggest is UpdateSuggestion.reboot\n\n filename = '%s.%s.rpm' % (rpm['nvr'], rpm['arch'])\n pkg.filename = filename\n\n # Build the URL\n if rpm['arch'] == 'src':\n arch = 'SRPMS'\n elif rpm['arch'] in ('noarch', 'i686'):\n arch = 'i386'\n else:\n arch = rpm['arch']\n\n pkg.src = os.path.join(\n config.get('file_url'),\n update.status is UpdateStatus.testing and 'testing' or '',\n str(update.release.version), arch, filename[0], filename)\n\n col.append(pkg)\n\n rec.append_collection(col)\n\n # Create references for each bug\n for bug in update.bugs:\n ref = cr.UpdateReference()\n ref.type = 'bugzilla'\n ref.id = str(bug.bug_id).encode('utf-8')\n ref.href = bug.url.encode('utf-8')\n ref.title = bug.title.encode('utf-8') if bug.title else ''\n rec.append_reference(ref)\n\n self.uinfo.append(rec)\n\n def insert_updateinfo(self, compose_path):\n \"\"\"\n Add the updateinfo.xml file to the repository.\n\n Args:\n compose_path (str): The path to the compose where the metadata will be inserted.\n \"\"\"\n fd, tmp_file_path = tempfile.mkstemp()\n os.write(fd, self.uinfo.xml_dump().encode('utf-8'))\n os.close(fd)\n modifyrepo(self.comp_type,\n compose_path,\n 'updateinfo',\n 'xml',\n tmp_file_path,\n self.zchunk)\n os.unlink(tmp_file_path)\n", "path": "bodhi/server/metadata.py"}]} | 3,884 | 988 |
gh_patches_debug_34319 | rasdani/github-patches | git_diff | ephios-dev__ephios-257 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tests for consequences
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/user_management/consequences.py`
Content:
```
1 import functools
2 import operator
3 from datetime import datetime
4
5 import django.dispatch
6 from django.contrib.auth import get_user_model
7 from django.contrib.auth.models import Group
8 from django.db.models import OuterRef, Q, Subquery
9 from django.db.models.fields.json import KeyTransform
10 from django.utils.formats import date_format
11 from django.utils.translation import gettext_lazy as _
12 from guardian.shortcuts import get_objects_for_user
13
14 from ephios.event_management.models import Event, Shift
15 from ephios.user_management.models import (
16 Consequence,
17 Qualification,
18 QualificationGrant,
19 UserProfile,
20 WorkingHours,
21 )
22
23 register_consequence_handlers = django.dispatch.Signal()
24
25
26 def all_consequence_handlers():
27 for _, handlers in register_consequence_handlers.send(None):
28 yield from (h() for h in handlers)
29
30
31 def consequence_handler_from_slug(slug):
32 for handler in all_consequence_handlers():
33 if handler.slug == slug:
34 return handler
35 raise ValueError(_("Consequence Handler '{slug}' was not found.").format(slug=slug))
36
37
38 def editable_consequences(user):
39 handlers = list(all_consequence_handlers())
40 qs = Consequence.objects.filter(
41 functools.reduce(
42 operator.or_,
43 (handler.editable_by_filter(user) for handler in handlers),
44 Q(),
45 )
46 ).distinct()
47 for handler in handlers:
48 qs = handler.annotate_queryset(qs)
49 return qs
50
51
52 class ConsequenceError(Exception):
53 pass
54
55
56 class BaseConsequenceHandler:
57 @property
58 def slug(self):
59 raise NotImplementedError
60
61 @classmethod
62 def execute(cls, consequence):
63 """
64 Gets a consequence and tries to execute whatever it is the consequence wants to happen.
65 """
66 raise NotImplementedError
67
68 @classmethod
69 def render(cls, consequence):
70 """
71 Return html describing the action to be done as a consequence of what.
72 Return None if you cannot handle this consequence.
73 """
74 raise NotImplementedError
75
76 @classmethod
77 def editable_by_filter(cls, user: UserProfile):
78 """
79 Return a Q object to filter consequence objects of this type that can be confirmed by the given user.
80 """
81 raise NotImplementedError
82
83 @classmethod
84 def annotate_queryset(cls, qs):
85 """
86 Annotate a queryset of heterogeneous consequences to avoid needing additional queries for rendering a consequence.
87 Does no annotations by default.
88 """
89 return qs
90
91
92 class WorkingHoursConsequenceHandler(BaseConsequenceHandler):
93 slug = "ephios.grant_working_hours"
94
95 @classmethod
96 def create(
97 cls,
98 user: UserProfile,
99 when: datetime,
100 hours: float,
101 reason: str,
102 ):
103 return Consequence.objects.create(
104 slug=cls.slug,
105 user=user,
106 data=dict(hours=hours, date=when, reason=reason),
107 )
108
109 @classmethod
110 def execute(cls, consequence):
111 WorkingHours.objects.create(
112 user=consequence.user,
113 date=consequence.data["date"],
114 hours=consequence.data["hours"],
115 reason=consequence.data.get("reason"),
116 )
117
118 @classmethod
119 def render(cls, consequence):
120 return _("{user} logs {hours:.1f} hours on {date} for {reason}").format(
121 user=consequence.user.get_full_name(),
122 hours=consequence.data.get("hours"),
123 reason=consequence.data.get("reason"),
124 date=date_format(consequence.data.get("date")),
125 )
126
127 @classmethod
128 def editable_by_filter(cls, user):
129 return Q(
130 slug=cls.slug,
131 user__groups__in=get_objects_for_user(
132 user, "decide_workinghours_for_group", klass=Group
133 ),
134 )
135
136
137 class QualificationConsequenceHandler(BaseConsequenceHandler):
138 slug = "ephios.grant_qualification"
139
140 @classmethod
141 def create(
142 cls,
143 user: UserProfile,
144 qualification: Qualification,
145 expires: datetime = None,
146 shift: Shift = None,
147 ):
148 return Consequence.objects.create(
149 slug=cls.slug,
150 user=user,
151 data=dict(
152 qualification_id=qualification.id,
153 event_id=None if shift is None else shift.event_id,
154 expires=None if expires is None else expires.isoformat(),
155 ),
156 )
157
158 @classmethod
159 def execute(cls, consequence):
160 expires_str = consequence.data["expires"]
161 expires = None if not expires_str else datetime.fromisoformat(expires_str)
162 qg, created = QualificationGrant.objects.get_or_create(
163 defaults=dict(
164 expires=expires,
165 ),
166 user=consequence.user,
167 qualification_id=consequence.data["qualification_id"],
168 )
169 if not created:
170 qg.expires = max(qg.expires, expires, key=lambda dt: dt or datetime.max)
171 qg.save()
172
173 @classmethod
174 def render(cls, consequence):
175 # Get all the strings we need from the annotations, or fetch them from DB as backup
176 try: # try the annotation
177 event_title = consequence.event_title
178 except AttributeError:
179 if event_id := consequence.data["event_id"]: # fetch from DB as backup
180 event_title = Event.objects.get(id=event_id).title
181 else: # no event has been associated
182 event_title = None
183
184 try:
185 qualification_title = consequence.qualification_title
186 except AttributeError:
187 qualification_title = Qualification.objects.get(
188 id=consequence.data["qualification_id"]
189 ).title
190
191 if expires_str := consequence.data.get("expires"):
192 expires_str = date_format(datetime.fromisoformat(expires_str))
193
194 user = consequence.user.get_full_name()
195
196 # build string based on available data
197
198 if event_title:
199 s = _("{user} acquires '{qualification}' after participating in {event}.").format(
200 user=user, qualification=qualification_title, event=event_title
201 )
202 else:
203 s = _("{user} acquires '{qualification}'.").format(
204 user=user,
205 qualification=qualification_title,
206 )
207
208 if expires_str:
209 s += " " + _("(valid until {expires_str})").format(expires_str=expires_str)
210 return s
211
212 @classmethod
213 def editable_by_filter(cls, user: UserProfile):
214 # Qualifications can be granted by people who...
215 return Q(slug=cls.slug,) & (
216 Q( # are responsible for the event the consequence originated from, if applicable
217 data__event_id__isnull=False,
218 data__event_id__in=get_objects_for_user(user, perms="change_event", klass=Event),
219 )
220 | Q( # can edit the affected user anyway
221 user__in=get_objects_for_user(
222 user, perms="user_management.change_userprofile", klass=get_user_model()
223 )
224 )
225 )
226
227 @classmethod
228 def annotate_queryset(cls, qs):
229 return qs.annotate(
230 qualification_id=KeyTransform("qualification_id", "data"),
231 event_id=KeyTransform("event_id", "data"),
232 ).annotate(
233 qualification_title=Subquery(
234 Qualification.objects.filter(id=OuterRef("qualification_id")).values("title")[:1]
235 ),
236 event_title=Subquery(Event.objects.filter(id=OuterRef("event_id")).values("title")[:1]),
237 )
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/user_management/consequences.py b/ephios/user_management/consequences.py
--- a/ephios/user_management/consequences.py
+++ b/ephios/user_management/consequences.py
@@ -151,23 +151,23 @@
data=dict(
qualification_id=qualification.id,
event_id=None if shift is None else shift.event_id,
- expires=None if expires is None else expires.isoformat(),
+ expires=expires,
),
)
@classmethod
def execute(cls, consequence):
- expires_str = consequence.data["expires"]
- expires = None if not expires_str else datetime.fromisoformat(expires_str)
qg, created = QualificationGrant.objects.get_or_create(
defaults=dict(
- expires=expires,
+ expires=consequence.data["expires"],
),
user=consequence.user,
qualification_id=consequence.data["qualification_id"],
)
if not created:
- qg.expires = max(qg.expires, expires, key=lambda dt: dt or datetime.max)
+ qg.expires = max(
+ qg.expires, consequence.data["expires"], key=lambda dt: dt or datetime.max
+ )
qg.save()
@classmethod
@@ -188,8 +188,8 @@
id=consequence.data["qualification_id"]
).title
- if expires_str := consequence.data.get("expires"):
- expires_str = date_format(datetime.fromisoformat(expires_str))
+ if expires := consequence.data.get("expires"):
+ expires = date_format(expires)
user = consequence.user.get_full_name()
@@ -205,8 +205,8 @@
qualification=qualification_title,
)
- if expires_str:
- s += " " + _("(valid until {expires_str})").format(expires_str=expires_str)
+ if expires:
+ s += " " + _("(valid until {expires_str})").format(expires_str=expires)
return s
@classmethod
| {"golden_diff": "diff --git a/ephios/user_management/consequences.py b/ephios/user_management/consequences.py\n--- a/ephios/user_management/consequences.py\n+++ b/ephios/user_management/consequences.py\n@@ -151,23 +151,23 @@\n data=dict(\n qualification_id=qualification.id,\n event_id=None if shift is None else shift.event_id,\n- expires=None if expires is None else expires.isoformat(),\n+ expires=expires,\n ),\n )\n \n @classmethod\n def execute(cls, consequence):\n- expires_str = consequence.data[\"expires\"]\n- expires = None if not expires_str else datetime.fromisoformat(expires_str)\n qg, created = QualificationGrant.objects.get_or_create(\n defaults=dict(\n- expires=expires,\n+ expires=consequence.data[\"expires\"],\n ),\n user=consequence.user,\n qualification_id=consequence.data[\"qualification_id\"],\n )\n if not created:\n- qg.expires = max(qg.expires, expires, key=lambda dt: dt or datetime.max)\n+ qg.expires = max(\n+ qg.expires, consequence.data[\"expires\"], key=lambda dt: dt or datetime.max\n+ )\n qg.save()\n \n @classmethod\n@@ -188,8 +188,8 @@\n id=consequence.data[\"qualification_id\"]\n ).title\n \n- if expires_str := consequence.data.get(\"expires\"):\n- expires_str = date_format(datetime.fromisoformat(expires_str))\n+ if expires := consequence.data.get(\"expires\"):\n+ expires = date_format(expires)\n \n user = consequence.user.get_full_name()\n \n@@ -205,8 +205,8 @@\n qualification=qualification_title,\n )\n \n- if expires_str:\n- s += \" \" + _(\"(valid until {expires_str})\").format(expires_str=expires_str)\n+ if expires:\n+ s += \" \" + _(\"(valid until {expires_str})\").format(expires_str=expires)\n return s\n \n @classmethod\n", "issue": "Tests for consequences\n\n", "before_files": [{"content": "import functools\nimport operator\nfrom datetime import datetime\n\nimport django.dispatch\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Group\nfrom django.db.models import OuterRef, Q, Subquery\nfrom django.db.models.fields.json import KeyTransform\nfrom django.utils.formats import date_format\nfrom django.utils.translation import gettext_lazy as _\nfrom guardian.shortcuts import get_objects_for_user\n\nfrom ephios.event_management.models import Event, Shift\nfrom ephios.user_management.models import (\n Consequence,\n Qualification,\n QualificationGrant,\n UserProfile,\n WorkingHours,\n)\n\nregister_consequence_handlers = django.dispatch.Signal()\n\n\ndef all_consequence_handlers():\n for _, handlers in register_consequence_handlers.send(None):\n yield from (h() for h in handlers)\n\n\ndef consequence_handler_from_slug(slug):\n for handler in all_consequence_handlers():\n if handler.slug == slug:\n return handler\n raise ValueError(_(\"Consequence Handler '{slug}' was not found.\").format(slug=slug))\n\n\ndef editable_consequences(user):\n handlers = list(all_consequence_handlers())\n qs = Consequence.objects.filter(\n functools.reduce(\n operator.or_,\n (handler.editable_by_filter(user) for handler in handlers),\n Q(),\n )\n ).distinct()\n for handler in handlers:\n qs = handler.annotate_queryset(qs)\n return qs\n\n\nclass ConsequenceError(Exception):\n pass\n\n\nclass BaseConsequenceHandler:\n @property\n def slug(self):\n raise NotImplementedError\n\n @classmethod\n def execute(cls, consequence):\n \"\"\"\n Gets a consequence and tries to execute whatever it is the consequence wants to happen.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def render(cls, consequence):\n \"\"\"\n Return html describing the action 
to be done as a consequence of what.\n Return None if you cannot handle this consequence.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def editable_by_filter(cls, user: UserProfile):\n \"\"\"\n Return a Q object to filter consequence objects of this type that can be confirmed by the given user.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def annotate_queryset(cls, qs):\n \"\"\"\n Annotate a queryset of heterogeneous consequences to avoid needing additional queries for rendering a consequence.\n Does no annotations by default.\n \"\"\"\n return qs\n\n\nclass WorkingHoursConsequenceHandler(BaseConsequenceHandler):\n slug = \"ephios.grant_working_hours\"\n\n @classmethod\n def create(\n cls,\n user: UserProfile,\n when: datetime,\n hours: float,\n reason: str,\n ):\n return Consequence.objects.create(\n slug=cls.slug,\n user=user,\n data=dict(hours=hours, date=when, reason=reason),\n )\n\n @classmethod\n def execute(cls, consequence):\n WorkingHours.objects.create(\n user=consequence.user,\n date=consequence.data[\"date\"],\n hours=consequence.data[\"hours\"],\n reason=consequence.data.get(\"reason\"),\n )\n\n @classmethod\n def render(cls, consequence):\n return _(\"{user} logs {hours:.1f} hours on {date} for {reason}\").format(\n user=consequence.user.get_full_name(),\n hours=consequence.data.get(\"hours\"),\n reason=consequence.data.get(\"reason\"),\n date=date_format(consequence.data.get(\"date\")),\n )\n\n @classmethod\n def editable_by_filter(cls, user):\n return Q(\n slug=cls.slug,\n user__groups__in=get_objects_for_user(\n user, \"decide_workinghours_for_group\", klass=Group\n ),\n )\n\n\nclass QualificationConsequenceHandler(BaseConsequenceHandler):\n slug = \"ephios.grant_qualification\"\n\n @classmethod\n def create(\n cls,\n user: UserProfile,\n qualification: Qualification,\n expires: datetime = None,\n shift: Shift = None,\n ):\n return Consequence.objects.create(\n slug=cls.slug,\n user=user,\n data=dict(\n qualification_id=qualification.id,\n event_id=None if shift is None else shift.event_id,\n expires=None if expires is None else expires.isoformat(),\n ),\n )\n\n @classmethod\n def execute(cls, consequence):\n expires_str = consequence.data[\"expires\"]\n expires = None if not expires_str else datetime.fromisoformat(expires_str)\n qg, created = QualificationGrant.objects.get_or_create(\n defaults=dict(\n expires=expires,\n ),\n user=consequence.user,\n qualification_id=consequence.data[\"qualification_id\"],\n )\n if not created:\n qg.expires = max(qg.expires, expires, key=lambda dt: dt or datetime.max)\n qg.save()\n\n @classmethod\n def render(cls, consequence):\n # Get all the strings we need from the annotations, or fetch them from DB as backup\n try: # try the annotation\n event_title = consequence.event_title\n except AttributeError:\n if event_id := consequence.data[\"event_id\"]: # fetch from DB as backup\n event_title = Event.objects.get(id=event_id).title\n else: # no event has been associated\n event_title = None\n\n try:\n qualification_title = consequence.qualification_title\n except AttributeError:\n qualification_title = Qualification.objects.get(\n id=consequence.data[\"qualification_id\"]\n ).title\n\n if expires_str := consequence.data.get(\"expires\"):\n expires_str = date_format(datetime.fromisoformat(expires_str))\n\n user = consequence.user.get_full_name()\n\n # build string based on available data\n\n if event_title:\n s = _(\"{user} acquires '{qualification}' after participating in {event}.\").format(\n user=user, 
qualification=qualification_title, event=event_title\n )\n else:\n s = _(\"{user} acquires '{qualification}'.\").format(\n user=user,\n qualification=qualification_title,\n )\n\n if expires_str:\n s += \" \" + _(\"(valid until {expires_str})\").format(expires_str=expires_str)\n return s\n\n @classmethod\n def editable_by_filter(cls, user: UserProfile):\n # Qualifications can be granted by people who...\n return Q(slug=cls.slug,) & (\n Q( # are responsible for the event the consequence originated from, if applicable\n data__event_id__isnull=False,\n data__event_id__in=get_objects_for_user(user, perms=\"change_event\", klass=Event),\n )\n | Q( # can edit the affected user anyway\n user__in=get_objects_for_user(\n user, perms=\"user_management.change_userprofile\", klass=get_user_model()\n )\n )\n )\n\n @classmethod\n def annotate_queryset(cls, qs):\n return qs.annotate(\n qualification_id=KeyTransform(\"qualification_id\", \"data\"),\n event_id=KeyTransform(\"event_id\", \"data\"),\n ).annotate(\n qualification_title=Subquery(\n Qualification.objects.filter(id=OuterRef(\"qualification_id\")).values(\"title\")[:1]\n ),\n event_title=Subquery(Event.objects.filter(id=OuterRef(\"event_id\")).values(\"title\")[:1]),\n )\n", "path": "ephios/user_management/consequences.py"}], "after_files": [{"content": "import functools\nimport operator\nfrom datetime import datetime\n\nimport django.dispatch\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Group\nfrom django.db.models import OuterRef, Q, Subquery\nfrom django.db.models.fields.json import KeyTransform\nfrom django.utils.formats import date_format\nfrom django.utils.translation import gettext_lazy as _\nfrom guardian.shortcuts import get_objects_for_user\n\nfrom ephios.event_management.models import Event, Shift\nfrom ephios.user_management.models import (\n Consequence,\n Qualification,\n QualificationGrant,\n UserProfile,\n WorkingHours,\n)\n\nregister_consequence_handlers = django.dispatch.Signal()\n\n\ndef all_consequence_handlers():\n for _, handlers in register_consequence_handlers.send(None):\n yield from (h() for h in handlers)\n\n\ndef consequence_handler_from_slug(slug):\n for handler in all_consequence_handlers():\n if handler.slug == slug:\n return handler\n raise ValueError(_(\"Consequence Handler '{slug}' was not found.\").format(slug=slug))\n\n\ndef editable_consequences(user):\n handlers = list(all_consequence_handlers())\n qs = Consequence.objects.filter(\n functools.reduce(\n operator.or_,\n (handler.editable_by_filter(user) for handler in handlers),\n Q(),\n )\n ).distinct()\n for handler in handlers:\n qs = handler.annotate_queryset(qs)\n return qs\n\n\nclass ConsequenceError(Exception):\n pass\n\n\nclass BaseConsequenceHandler:\n @property\n def slug(self):\n raise NotImplementedError\n\n @classmethod\n def execute(cls, consequence):\n \"\"\"\n Gets a consequence and tries to execute whatever it is the consequence wants to happen.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def render(cls, consequence):\n \"\"\"\n Return html describing the action to be done as a consequence of what.\n Return None if you cannot handle this consequence.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def editable_by_filter(cls, user: UserProfile):\n \"\"\"\n Return a Q object to filter consequence objects of this type that can be confirmed by the given user.\n \"\"\"\n raise NotImplementedError\n\n @classmethod\n def annotate_queryset(cls, qs):\n \"\"\"\n Annotate a queryset of 
heterogeneous consequences to avoid needing additional queries for rendering a consequence.\n Does no annotations by default.\n \"\"\"\n return qs\n\n\nclass WorkingHoursConsequenceHandler(BaseConsequenceHandler):\n slug = \"ephios.grant_working_hours\"\n\n @classmethod\n def create(\n cls,\n user: UserProfile,\n when: datetime,\n hours: float,\n reason: str,\n ):\n return Consequence.objects.create(\n slug=cls.slug,\n user=user,\n data=dict(hours=hours, date=when, reason=reason),\n )\n\n @classmethod\n def execute(cls, consequence):\n WorkingHours.objects.create(\n user=consequence.user,\n date=consequence.data[\"date\"],\n hours=consequence.data[\"hours\"],\n reason=consequence.data.get(\"reason\"),\n )\n\n @classmethod\n def render(cls, consequence):\n return _(\"{user} logs {hours:.1f} hours on {date} for {reason}\").format(\n user=consequence.user.get_full_name(),\n hours=consequence.data.get(\"hours\"),\n reason=consequence.data.get(\"reason\"),\n date=date_format(consequence.data.get(\"date\")),\n )\n\n @classmethod\n def editable_by_filter(cls, user):\n return Q(\n slug=cls.slug,\n user__groups__in=get_objects_for_user(\n user, \"decide_workinghours_for_group\", klass=Group\n ),\n )\n\n\nclass QualificationConsequenceHandler(BaseConsequenceHandler):\n slug = \"ephios.grant_qualification\"\n\n @classmethod\n def create(\n cls,\n user: UserProfile,\n qualification: Qualification,\n expires: datetime = None,\n shift: Shift = None,\n ):\n return Consequence.objects.create(\n slug=cls.slug,\n user=user,\n data=dict(\n qualification_id=qualification.id,\n event_id=None if shift is None else shift.event_id,\n expires=expires,\n ),\n )\n\n @classmethod\n def execute(cls, consequence):\n qg, created = QualificationGrant.objects.get_or_create(\n defaults=dict(\n expires=consequence.data[\"expires\"],\n ),\n user=consequence.user,\n qualification_id=consequence.data[\"qualification_id\"],\n )\n if not created:\n qg.expires = max(\n qg.expires, consequence.data[\"expires\"], key=lambda dt: dt or datetime.max\n )\n qg.save()\n\n @classmethod\n def render(cls, consequence):\n # Get all the strings we need from the annotations, or fetch them from DB as backup\n try: # try the annotation\n event_title = consequence.event_title\n except AttributeError:\n if event_id := consequence.data[\"event_id\"]: # fetch from DB as backup\n event_title = Event.objects.get(id=event_id).title\n else: # no event has been associated\n event_title = None\n\n try:\n qualification_title = consequence.qualification_title\n except AttributeError:\n qualification_title = Qualification.objects.get(\n id=consequence.data[\"qualification_id\"]\n ).title\n\n if expires := consequence.data.get(\"expires\"):\n expires = date_format(expires)\n\n user = consequence.user.get_full_name()\n\n # build string based on available data\n\n if event_title:\n s = _(\"{user} acquires '{qualification}' after participating in {event}.\").format(\n user=user, qualification=qualification_title, event=event_title\n )\n else:\n s = _(\"{user} acquires '{qualification}'.\").format(\n user=user,\n qualification=qualification_title,\n )\n\n if expires:\n s += \" \" + _(\"(valid until {expires_str})\").format(expires_str=expires)\n return s\n\n @classmethod\n def editable_by_filter(cls, user: UserProfile):\n # Qualifications can be granted by people who...\n return Q(slug=cls.slug,) & (\n Q( # are responsible for the event the consequence originated from, if applicable\n data__event_id__isnull=False,\n data__event_id__in=get_objects_for_user(user, 
perms=\"change_event\", klass=Event),\n )\n | Q( # can edit the affected user anyway\n user__in=get_objects_for_user(\n user, perms=\"user_management.change_userprofile\", klass=get_user_model()\n )\n )\n )\n\n @classmethod\n def annotate_queryset(cls, qs):\n return qs.annotate(\n qualification_id=KeyTransform(\"qualification_id\", \"data\"),\n event_id=KeyTransform(\"event_id\", \"data\"),\n ).annotate(\n qualification_title=Subquery(\n Qualification.objects.filter(id=OuterRef(\"qualification_id\")).values(\"title\")[:1]\n ),\n event_title=Subquery(Event.objects.filter(id=OuterRef(\"event_id\")).values(\"title\")[:1]),\n )\n", "path": "ephios/user_management/consequences.py"}]} | 2,411 | 446 |
gh_patches_debug_31335 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-367 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProgressBar extension doesn't show correct `iters/sec` in ppe==0.5.0
The `ProgressBar` extension shows `iters/sec` as well as the progress bar, but I noticed that in ppe==0.5.0 it always becomes `inf iters/sec`.
### ppe==0.5.0
Minimum reproduction code:
```python
# progress.py (based on https://git.io/JzHP3)
import pytorch_pfn_extras as ppe
import time
print("PPE =", ppe.__version__)
max_epochs = 3
iters_per_epoch = 20
manager = ppe.training.ExtensionsManager({}, {}, max_epochs, iters_per_epoch=iters_per_epoch)
manager.extend(ppe.training.extensions.ProgressBar(training_length=None, update_interval=1, bar_length=40))
for epoch in range(max_epochs):
for _ in range(iters_per_epoch):
with manager.run_iteration():
pass
time.sleep(0.1)
```
Result:
```
$ python progress.py
PPE = 0.5.0
total [####################....................] 51.67%
this epoch [######################..................] 55.00%
31 iter, 1 epoch / 3 epochs
inf iters/sec. Estimated time to finish: 0:00:00.
```
### ppe==0.4.5 (expected behavior)
It shows the correct iters/sec.
```
$ python progress.py
PPE = 0.4.5
total [#################.......................] 43.33%
this epoch [############............................] 30.00%
26 iter, 1 epoch / 3 epochs
10.825 iters/sec. Estimated time to finish: 0:00:03.140780.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_pfn_extras/training/extensions/util.py`
Content:
```
1 import os
2 import sys
3 import queue
4 import time
5 from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
6
7 if TYPE_CHECKING:
8 from pytorch_pfn_extras.training.manager import _BaseExtensionsManager
9
10
11 if os.name == 'nt':
12 import ctypes
13 from ctypes import windll # type: ignore [attr-defined]
14
15 _STD_OUTPUT_HANDLE = -11
16
17 _COORD = ctypes.wintypes._COORD
18
19 class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
20 _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),
21 ('wAttributes', ctypes.c_ushort),
22 ('srWindow', ctypes.wintypes.SMALL_RECT),
23 ('dwMaximumWindowSize', _COORD)]
24
25 def set_console_cursor_position(x: int, y: int) -> None:
26 """Set relative cursor position from current position to (x,y)"""
27
28 whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)
29 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
30 windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))
31 cur_pos = csbi.dwCursorPosition
32 pos = _COORD(cur_pos.X + x, cur_pos.Y + y)
33
34 # Workaround the issue that pyreadline overwrites the argtype
35 setpos = windll.kernel32.SetConsoleCursorPosition
36 argtypes = setpos.argtypes
37 setpos.argtypes = None
38 setpos(whnd, pos)
39 setpos.argtypes = argtypes
40
41 def erase_console(x: int, y: int, mode: int = 0) -> None:
42 """Erase screen.
43
44 Mode=0: From (x,y) position down to the bottom of the screen.
45 Mode=1: From (x,y) position down to the beginning of line.
46 Mode=2: Hole screen
47 """
48
49 whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)
50 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
51 windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))
52 cur_pos = csbi.dwCursorPosition
53 wr = ctypes.c_ulong()
54 if mode == 0:
55 num = csbi.srWindow.Right * (
56 csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X
57 windll.kernel32.FillConsoleOutputCharacterA(
58 whnd, ord(' '), num, cur_pos, ctypes.byref(wr))
59 elif mode == 1:
60 num = cur_pos.X
61 windll.kernel32.FillConsoleOutputCharacterA(
62 whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))
63 elif mode == 2:
64 os.system('cls')
65
66
67 class _QueUnit(NamedTuple):
68 iteration: int
69 epoch_detail: float
70 time: float
71
72
73 class ProgressBar:
74
75 def __init__(self, out: Optional[TextIO] = None) -> None:
76 self._out = sys.stdout if out is None else out
77 self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)
78
79 def update_speed(
80 self,
81 iteration: int,
82 epoch_detail: float
83 ) -> Tuple[float, float]:
84 now = time.time()
85 self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))
86 old = self._recent_timing.get()
87 span = now - old.time
88 if span != 0:
89 speed_t = (iteration - old.iteration) / span
90 speed_e = (epoch_detail - old.epoch_detail) / span
91 else:
92 speed_t = float('inf')
93 speed_e = float('inf')
94 return speed_t, speed_e
95
96 def get_lines(self) -> Sequence[str]:
97 raise NotImplementedError
98
99 def update(
100 self,
101 manager: Optional['_BaseExtensionsManager'] = None
102 ) -> None:
103 self.erase_console()
104
105 lines = self.get_lines()
106 for line in lines:
107 self._out.write(line)
108
109 self.move_cursor_up(len(lines))
110 self.flush()
111
112 def close(self) -> None:
113 self.erase_console()
114 self.flush()
115
116 def erase_console(self) -> None:
117 if os.name == 'nt':
118 erase_console(0, 0)
119 else:
120 self._out.write('\033[J')
121
122 def move_cursor_up(self, n: int) -> None:
123 # move the cursor to the head of the progress bar
124 if os.name == 'nt':
125 set_console_cursor_position(0, - n)
126 else:
127 self._out.write('\033[{:d}A'.format(n))
128
129 def flush(self) -> None:
130 if hasattr(self._out, 'flush'):
131 self._out.flush()
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_pfn_extras/training/extensions/util.py b/pytorch_pfn_extras/training/extensions/util.py
--- a/pytorch_pfn_extras/training/extensions/util.py
+++ b/pytorch_pfn_extras/training/extensions/util.py
@@ -1,8 +1,8 @@
+import collections
import os
import sys
-import queue
import time
-from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
+from typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
if TYPE_CHECKING:
from pytorch_pfn_extras.training.manager import _BaseExtensionsManager
@@ -64,17 +64,12 @@
os.system('cls')
-class _QueUnit(NamedTuple):
- iteration: int
- epoch_detail: float
- time: float
-
-
class ProgressBar:
def __init__(self, out: Optional[TextIO] = None) -> None:
self._out = sys.stdout if out is None else out
- self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)
+ self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(
+ [], maxlen=100)
def update_speed(
self,
@@ -82,12 +77,12 @@
epoch_detail: float
) -> Tuple[float, float]:
now = time.time()
- self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))
- old = self._recent_timing.get()
- span = now - old.time
+ self._recent_timing.append((iteration, epoch_detail, now))
+ old_t, old_e, old_sec = self._recent_timing[0]
+ span = now - old_sec
if span != 0:
- speed_t = (iteration - old.iteration) / span
- speed_e = (epoch_detail - old.epoch_detail) / span
+ speed_t = (iteration - old_t) / span
+ speed_e = (epoch_detail - old_e) / span
else:
speed_t = float('inf')
speed_e = float('inf')
| {"golden_diff": "diff --git a/pytorch_pfn_extras/training/extensions/util.py b/pytorch_pfn_extras/training/extensions/util.py\n--- a/pytorch_pfn_extras/training/extensions/util.py\n+++ b/pytorch_pfn_extras/training/extensions/util.py\n@@ -1,8 +1,8 @@\n+import collections\n import os\n import sys\n-import queue\n import time\n-from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n+from typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n \n if TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n@@ -64,17 +64,12 @@\n os.system('cls')\n \n \n-class _QueUnit(NamedTuple):\n- iteration: int\n- epoch_detail: float\n- time: float\n-\n-\n class ProgressBar:\n \n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n- self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)\n+ self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(\n+ [], maxlen=100)\n \n def update_speed(\n self,\n@@ -82,12 +77,12 @@\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n- self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))\n- old = self._recent_timing.get()\n- span = now - old.time\n+ self._recent_timing.append((iteration, epoch_detail, now))\n+ old_t, old_e, old_sec = self._recent_timing[0]\n+ span = now - old_sec\n if span != 0:\n- speed_t = (iteration - old.iteration) / span\n- speed_e = (epoch_detail - old.epoch_detail) / span\n+ speed_t = (iteration - old_t) / span\n+ speed_e = (epoch_detail - old_e) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n", "issue": "ProgressBar extension doesn't show correct `iters/sec` in ppe==0.5.0\nThe `ProgressBar` extension shows `iters/sec` as well as the progress bar, but I noticed that in ppe==0.5.0 it becomes always `inf iters/sec`.\r\n\r\n### ppe==0.5.0\r\n\r\nMinimum reproduction code:\r\n```python\r\n# progress.py (based on https://git.io/JzHP3)\r\nimport pytorch_pfn_extras as ppe\r\nimport time\r\n\r\nprint(\"PPE =\", ppe.__version__)\r\n\r\nmax_epochs = 3\r\niters_per_epoch = 20\r\nmanager = ppe.training.ExtensionsManager({}, {}, max_epochs, iters_per_epoch=iters_per_epoch)\r\nmanager.extend(ppe.training.extensions.ProgressBar(training_length=None, update_interval=1, bar_length=40))\r\n\r\nfor epoch in range(max_epochs):\r\n for _ in range(iters_per_epoch):\r\n with manager.run_iteration():\r\n pass\r\n time.sleep(0.1)\r\n```\r\n\r\nResult:\r\n```\r\n$ python progress.py\r\nPPE = 0.5.0\r\n total [####################....................] 51.67%\r\nthis epoch [######################..................] 55.00%\r\n 31 iter, 1 epoch / 3 epochs\r\n inf iters/sec. Estimated time to finish: 0:00:00.\r\n```\r\n\r\n### ppe==0.4.5 (expected behavior)\r\nIt shows a correct iters/sec.\r\n```\r\n$ python progress.py\r\nPPE = 0.4.5\r\n total [#################.......................] 43.33%\r\nthis epoch [############............................] 30.00%\r\n 26 iter, 1 epoch / 3 epochs\r\n 10.825 iters/sec. 
Estimated time to finish: 0:00:03.140780.\r\n```\n", "before_files": [{"content": "import os\nimport sys\nimport queue\nimport time\nfrom typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n\n\nif os.name == 'nt':\n import ctypes\n from ctypes import windll # type: ignore [attr-defined]\n\n _STD_OUTPUT_HANDLE = -11\n\n _COORD = ctypes.wintypes._COORD\n\n class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):\n _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),\n ('wAttributes', ctypes.c_ushort),\n ('srWindow', ctypes.wintypes.SMALL_RECT),\n ('dwMaximumWindowSize', _COORD)]\n\n def set_console_cursor_position(x: int, y: int) -> None:\n \"\"\"Set relative cursor position from current position to (x,y)\"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n pos = _COORD(cur_pos.X + x, cur_pos.Y + y)\n\n # Workaround the issue that pyreadline overwrites the argtype\n setpos = windll.kernel32.SetConsoleCursorPosition\n argtypes = setpos.argtypes\n setpos.argtypes = None\n setpos(whnd, pos)\n setpos.argtypes = argtypes\n\n def erase_console(x: int, y: int, mode: int = 0) -> None:\n \"\"\"Erase screen.\n\n Mode=0: From (x,y) position down to the bottom of the screen.\n Mode=1: From (x,y) position down to the beginning of line.\n Mode=2: Hole screen\n \"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n wr = ctypes.c_ulong()\n if mode == 0:\n num = csbi.srWindow.Right * (\n csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, cur_pos, ctypes.byref(wr))\n elif mode == 1:\n num = cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))\n elif mode == 2:\n os.system('cls')\n\n\nclass _QueUnit(NamedTuple):\n iteration: int\n epoch_detail: float\n time: float\n\n\nclass ProgressBar:\n\n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)\n\n def update_speed(\n self,\n iteration: int,\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))\n old = self._recent_timing.get()\n span = now - old.time\n if span != 0:\n speed_t = (iteration - old.iteration) / span\n speed_e = (epoch_detail - old.epoch_detail) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n return speed_t, speed_e\n\n def get_lines(self) -> Sequence[str]:\n raise NotImplementedError\n\n def update(\n self,\n manager: Optional['_BaseExtensionsManager'] = None\n ) -> None:\n self.erase_console()\n\n lines = self.get_lines()\n for line in lines:\n self._out.write(line)\n\n self.move_cursor_up(len(lines))\n self.flush()\n\n def close(self) -> None:\n self.erase_console()\n self.flush()\n\n def erase_console(self) -> None:\n if os.name == 'nt':\n erase_console(0, 0)\n else:\n self._out.write('\\033[J')\n\n def move_cursor_up(self, n: int) -> None:\n # move the cursor to the head of the progress bar\n if os.name == 'nt':\n set_console_cursor_position(0, - n)\n 
else:\n self._out.write('\\033[{:d}A'.format(n))\n\n def flush(self) -> None:\n if hasattr(self._out, 'flush'):\n self._out.flush()\n", "path": "pytorch_pfn_extras/training/extensions/util.py"}], "after_files": [{"content": "import collections\nimport os\nimport sys\nimport time\nfrom typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n\n\nif os.name == 'nt':\n import ctypes\n from ctypes import windll # type: ignore [attr-defined]\n\n _STD_OUTPUT_HANDLE = -11\n\n _COORD = ctypes.wintypes._COORD\n\n class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):\n _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),\n ('wAttributes', ctypes.c_ushort),\n ('srWindow', ctypes.wintypes.SMALL_RECT),\n ('dwMaximumWindowSize', _COORD)]\n\n def set_console_cursor_position(x: int, y: int) -> None:\n \"\"\"Set relative cursor position from current position to (x,y)\"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n pos = _COORD(cur_pos.X + x, cur_pos.Y + y)\n\n # Workaround the issue that pyreadline overwrites the argtype\n setpos = windll.kernel32.SetConsoleCursorPosition\n argtypes = setpos.argtypes\n setpos.argtypes = None\n setpos(whnd, pos)\n setpos.argtypes = argtypes\n\n def erase_console(x: int, y: int, mode: int = 0) -> None:\n \"\"\"Erase screen.\n\n Mode=0: From (x,y) position down to the bottom of the screen.\n Mode=1: From (x,y) position down to the beginning of line.\n Mode=2: Hole screen\n \"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n wr = ctypes.c_ulong()\n if mode == 0:\n num = csbi.srWindow.Right * (\n csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, cur_pos, ctypes.byref(wr))\n elif mode == 1:\n num = cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))\n elif mode == 2:\n os.system('cls')\n\n\nclass ProgressBar:\n\n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(\n [], maxlen=100)\n\n def update_speed(\n self,\n iteration: int,\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n self._recent_timing.append((iteration, epoch_detail, now))\n old_t, old_e, old_sec = self._recent_timing[0]\n span = now - old_sec\n if span != 0:\n speed_t = (iteration - old_t) / span\n speed_e = (epoch_detail - old_e) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n return speed_t, speed_e\n\n def get_lines(self) -> Sequence[str]:\n raise NotImplementedError\n\n def update(\n self,\n manager: Optional['_BaseExtensionsManager'] = None\n ) -> None:\n self.erase_console()\n\n lines = self.get_lines()\n for line in lines:\n self._out.write(line)\n\n self.move_cursor_up(len(lines))\n self.flush()\n\n def close(self) -> None:\n self.erase_console()\n self.flush()\n\n def erase_console(self) -> None:\n if os.name == 'nt':\n erase_console(0, 0)\n else:\n self._out.write('\\033[J')\n\n def move_cursor_up(self, n: int) -> None:\n # move the cursor to the head of the progress bar\n 
if os.name == 'nt':\n set_console_cursor_position(0, - n)\n else:\n self._out.write('\\033[{:d}A'.format(n))\n\n def flush(self) -> None:\n if hasattr(self._out, 'flush'):\n self._out.flush()\n", "path": "pytorch_pfn_extras/training/extensions/util.py"}]} | 2,033 | 493 |
gh_patches_debug_666 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-470 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Do not commit trivial changes to documentation
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
At the moment, our CircleCI jobs `build-documentation` and `deploy-documentation` automatically build our documentation and commit/push it to the `gh-pages` branch. Most of the time, this commit includes only trivial changes (see e.g. 6d6c89beda44ebd2448526a18636c464e1111355). This makes the "last changed" date very unreliable, because it is always the date of the last commit, not the date when the documentation content actually last changed.
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
Do not commit the changes to the documentation if only the "Last updated on ..." timestamp changes
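Purely as an illustration, the CI step could diff the freshly built docs against the current `gh-pages` checkout and only commit when something other than the timestamp footer changed (the `docs/` path and the footer wording are assumptions here, not part of this issue):

```python
# Sketch: skip the gh-pages commit when the rebuilt documentation differs
# only in the "Last updated on ..." footer. Paths and wording are assumptions.
import subprocess
import sys

# Unified diff of the freshly built docs against the current gh-pages checkout
diff = subprocess.run(
    ["git", "diff", "--unified=0", "--", "docs/"],
    capture_output=True, text=True, check=True,
).stdout

changed = [
    line[1:]
    for line in diff.splitlines()
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]

if all("Last updated on" in line for line in changed):
    print("Only the 'Last updated on ...' timestamp changed - skipping commit.")
    sys.exit(0)

print("Documentation content changed - committing to gh-pages.")
```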
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, and why your proposed solution is better. -->
Remove the "last changed" date
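For reference, the date comes from the `html_last_updated_fmt` setting in the `sphinx/conf.py` shown below; Sphinx omits the footer when that setting is left at its default of `None`, so this alternative would be a one-line change (sketch only, not a decision):

```python
# sphinx/conf.py - alternative: drop the "last updated" footer altogether.
# Sphinx only renders the timestamp when html_last_updated_fmt is set.
html_last_updated_fmt = None
```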
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinx/conf.py`
Content:
```
1 """
2 Configuration file for the Sphinx documentation builder.
3
4 This file only contains a selection of the most common options. For a full
5 list see the documentation:
6 https://www.sphinx-doc.org/en/master/usage/configuration.html
7 """
8
9 # -- Path setup --------------------------------------------------------------
10
11 import os
12 import sys
13 import inspect
14 import importlib
15 import django
16
17 from backend.settings import VERSION
18
19 # Append project source directory to path environment variable
20 sys.path.append(os.path.abspath("../src/"))
21 os.environ["DJANGO_SETTINGS_MODULE"] = "backend.settings"
22
23
24 # Setup Django
25 django.setup()
26
27
28 def setup(app):
29 """
30 Registeration and setup.
31
32 This method does the initial setup for the docs generation.
33 """
34 # Register the docstring processor with sphinx to improve the appearance of Django models
35 app.connect("autodoc-process-docstring", process_django_models)
36
37
38 # -- Project information -----------------------------------------------------
39
40
41 project = "integreat-cms"
42 # pylint: disable=redefined-builtin
43 copyright = "2020, Integreat"
44 author = "Integreat"
45
46 # The full version, including alpha/beta/rc tags
47 release = VERSION
48
49 # -- General configuration ---------------------------------------------------
50
51 # All enabled sphinx extensions
52 extensions = [
53 "sphinx.ext.autodoc",
54 "sphinx.ext.githubpages",
55 "sphinx.ext.intersphinx",
56 "sphinx.ext.linkcode",
57 "sphinxcontrib_django",
58 "sphinx_rtd_theme",
59 ]
60
61 # Enable cross-references to other documentations
62 intersphinx_mapping = {
63 "python": ("https://docs.python.org/3.7", None),
64 "pipenv": ("https://pipenv.pypa.io/en/latest/", None),
65 "sphinx": ("https://www.sphinx-doc.org/en/master/", None),
66 "sphinx-rtd-tutorial": (
67 "https://sphinx-rtd-tutorial.readthedocs.io/en/latest/",
68 None,
69 ),
70 "django": (
71 "https://docs.djangoproject.com/en/2.2/",
72 "https://docs.djangoproject.com/en/2.2/_objects/",
73 ),
74 "django-mptt": ("https://django-mptt.readthedocs.io/en/latest/", None),
75 "wsgi": ("https://wsgi.readthedocs.io/en/latest/", None),
76 }
77
78 # The path for patched template files
79 templates_path = ["templates"]
80
81 # -- Options for HTML output -------------------------------------------------
82
83 # The theme to use for HTML and HTML Help pages.
84 html_theme = "sphinx_rtd_theme"
85 # Do not show the project name, only the logo
86 html_theme_options = {
87 "logo_only": False,
88 "collapse_navigation": False,
89 }
90 # The logo shown in the menu bar
91 html_logo = "../src/cms/static/images/integreat-logo-white.png"
92 # The facivon of the html doc files
93 html_favicon = "../src/cms/static/images/favicon.ico"
94 # The url where the docs should be published (via gh-pages)
95 html_baseurl = "https://Integreat.github.io/cms-django/"
96 # Do not include links to the documentation source (.rst files) in build
97 html_show_sourcelink = False
98 # Do not include a link to sphinx
99 html_show_sphinx = False
100 # Include last updated timestamp
101 html_last_updated_fmt = "%b %d, %Y"
102
103 # -- Modify default Django model parameter types------------------------------
104
105
106 # pylint: disable=unused-argument, too-many-locals, too-many-branches
107 def process_django_models(app, what, name, obj, options, lines):
108 """Append correct param types from fields to model documentation."""
109 if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):
110 # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link
111 postgres_docu = (
112 intersphinx_mapping.get("django")[1][0] + "ref/contrib/postgres/fields/"
113 )
114 # include_hidden to get also ManyToManyFields
115 for field in obj._meta.get_fields(include_hidden=True):
116 field_type = type(field).__name__
117 field_module = type(field).__module__
118 if field_module == "django.contrib.postgres.fields.array":
119 # Fix intersphinx mappings for django.contrib.postgres fields
120 type_line = (
121 f":type {field.name}: `ArrayField <{postgres_docu}#arrayfield>`_"
122 )
123 elif field_module == "django.contrib.postgres.fields.jsonb":
124 # Fix intersphinx mappings for django.contrib.postgres fields
125 type_line = (
126 f":type {field.name}: `JSONField <{postgres_docu}#jsonfield>`_"
127 )
128 elif field_module in ["django.db.models.fields.related", "mptt.fields"]:
129 # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)
130 # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)
131 remote_model = field.remote_field.get_related_field().model
132 type_line = f":type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`"
133 elif field_module == "django.db.models.fields.reverse_related":
134 # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)
135 remote_model = field.remote_field.model
136 type_line = f":type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`"
137 else:
138 if "django.db.models" in field_module:
139 # Scope with django.db.models * imports (remove all sub-module-paths)
140 field_module = "django.db.models"
141 # Fix type hint to enable correct intersphinx mappings to other documentations
142 type_line = f":type {field.name}: ~{field_module}.{field_type}"
143 # This loop gets the indexes which are needed to update the type hints of the model parameters.
144 # It makes it possible to split the parameter section into multiple parts, e.g. params inherited from a base
145 # model and params of a sub model (otherwise the type hints would not be recognized when separated from
146 # the parameter description).
147 param_index = None
148 next_param_index = None
149 type_index = None
150 for index, line in enumerate(lines):
151 if param_index is None and f":param {field.name}:" in line:
152 # The index of the field param is only used to determine the next param line
153 param_index = index
154 elif (
155 param_index is not None
156 and next_param_index is None
157 and (":param " in line or line == "")
158 ):
159 # The line of the next param after the field, this is the index where we will insert the type.
160 # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.
161 # If the line is empty, the param description is finished, even if it extends over multiple lines.
162 next_param_index = index
163 elif type_index is None and f":type {field.name}:" in line:
164 # The index of the old type hint, we will either move this line or replace it
165 type_index = index
166 break
167 if next_param_index is None:
168 # In case the current field is the last param, we just append the type at the very end of lines
169 next_param_index = len(lines)
170 # For some params, the type line is not automatically generated and thus the type_index might be `None`
171 if type_index is not None:
172 # We delete the old type index, because we will replace it with the new type line
173 del lines[type_index]
174 # Insert the new type line just before the next param
175 lines.insert(next_param_index, type_line)
176 return lines
177
178
179 # -- Source Code links to GitHub ---------------------------------------------
180
181
182 def linkcode_resolve(domain, info):
183 """Link source code to GitHub."""
184 if domain != "py" or not info["module"]:
185 return None
186 filename = info["module"].replace(".", "/")
187 module = importlib.import_module(info["module"])
188 basename = os.path.splitext(module.__file__)[0]
189 if basename.endswith("__init__"):
190 filename += "/__init__"
191 item = module
192 line_number_reference = ""
193 for piece in info["fullname"].split("."):
194 item = getattr(item, piece)
195 try:
196 line_number_reference = f"#L{inspect.getsourcelines(item)[1]}"
197 except (TypeError, IOError):
198 pass
199 return f"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}"
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinx/conf.py b/sphinx/conf.py
--- a/sphinx/conf.py
+++ b/sphinx/conf.py
@@ -56,6 +56,7 @@
"sphinx.ext.linkcode",
"sphinxcontrib_django",
"sphinx_rtd_theme",
+ "sphinx_last_updated_by_git",
]
# Enable cross-references to other documentations
| {"golden_diff": "diff --git a/sphinx/conf.py b/sphinx/conf.py\n--- a/sphinx/conf.py\n+++ b/sphinx/conf.py\n@@ -56,6 +56,7 @@\n \"sphinx.ext.linkcode\",\n \"sphinxcontrib_django\",\n \"sphinx_rtd_theme\",\n+ \"sphinx_last_updated_by_git\",\n ]\n \n # Enable cross-references to other documentations\n", "issue": "Do not commit trivial changes to documentation\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\nAt the moment, our CircleCI jobs `build-documentation` and `deploy-documentation` automatically build our documentation and commit/push it to the `gh-pages` branch. Most of the times, this commit includes only trivial changes (see e.g. 6d6c89beda44ebd2448526a18636c464e1111355). This makes the \"last changed\" date very unreliable, because it is always the date of the last commit, not the date when the documentation actually changed the last time.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\nDo not commit the changes to the documentation if only the \"Last updated on ...\" changes\r\n\r\n### Alternatives\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, and why you're proposed solution is better. -->\r\nRemove the \"last changed\" date\r\n\nDo not commit trivial changes to documentation\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\nAt the moment, our CircleCI jobs `build-documentation` and `deploy-documentation` automatically build our documentation and commit/push it to the `gh-pages` branch. Most of the times, this commit includes only trivial changes (see e.g. 6d6c89beda44ebd2448526a18636c464e1111355). This makes the \"last changed\" date very unreliable, because it is always the date of the last commit, not the date when the documentation actually changed the last time.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\nDo not commit the changes to the documentation if only the \"Last updated on ...\" changes\r\n\r\n### Alternatives\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, and why you're proposed solution is better. -->\r\nRemove the \"last changed\" date\r\n\n", "before_files": [{"content": "\"\"\"\nConfiguration file for the Sphinx documentation builder.\n\nThis file only contains a selection of the most common options. 
For a full\nlist see the documentation:\nhttps://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nimport inspect\nimport importlib\nimport django\n\nfrom backend.settings import VERSION\n\n# Append project source directory to path environment variable\nsys.path.append(os.path.abspath(\"../src/\"))\nos.environ[\"DJANGO_SETTINGS_MODULE\"] = \"backend.settings\"\n\n\n# Setup Django\ndjango.setup()\n\n\ndef setup(app):\n \"\"\"\n Registeration and setup.\n\n This method does the initial setup for the docs generation.\n \"\"\"\n # Register the docstring processor with sphinx to improve the appearance of Django models\n app.connect(\"autodoc-process-docstring\", process_django_models)\n\n\n# -- Project information -----------------------------------------------------\n\n\nproject = \"integreat-cms\"\n# pylint: disable=redefined-builtin\ncopyright = \"2020, Integreat\"\nauthor = \"Integreat\"\n\n# The full version, including alpha/beta/rc tags\nrelease = VERSION\n\n# -- General configuration ---------------------------------------------------\n\n# All enabled sphinx extensions\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.linkcode\",\n \"sphinxcontrib_django\",\n \"sphinx_rtd_theme\",\n]\n\n# Enable cross-references to other documentations\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3.7\", None),\n \"pipenv\": (\"https://pipenv.pypa.io/en/latest/\", None),\n \"sphinx\": (\"https://www.sphinx-doc.org/en/master/\", None),\n \"sphinx-rtd-tutorial\": (\n \"https://sphinx-rtd-tutorial.readthedocs.io/en/latest/\",\n None,\n ),\n \"django\": (\n \"https://docs.djangoproject.com/en/2.2/\",\n \"https://docs.djangoproject.com/en/2.2/_objects/\",\n ),\n \"django-mptt\": (\"https://django-mptt.readthedocs.io/en/latest/\", None),\n \"wsgi\": (\"https://wsgi.readthedocs.io/en/latest/\", None),\n}\n\n# The path for patched template files\ntemplates_path = [\"templates\"]\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = \"sphinx_rtd_theme\"\n# Do not show the project name, only the logo\nhtml_theme_options = {\n \"logo_only\": False,\n \"collapse_navigation\": False,\n}\n# The logo shown in the menu bar\nhtml_logo = \"../src/cms/static/images/integreat-logo-white.png\"\n# The facivon of the html doc files\nhtml_favicon = \"../src/cms/static/images/favicon.ico\"\n# The url where the docs should be published (via gh-pages)\nhtml_baseurl = \"https://Integreat.github.io/cms-django/\"\n# Do not include links to the documentation source (.rst files) in build\nhtml_show_sourcelink = False\n# Do not include a link to sphinx\nhtml_show_sphinx = False\n# Include last updated timestamp\nhtml_last_updated_fmt = \"%b %d, %Y\"\n\n# -- Modify default Django model parameter types------------------------------\n\n\n# pylint: disable=unused-argument, too-many-locals, too-many-branches\ndef process_django_models(app, what, name, obj, options, lines):\n \"\"\"Append correct param types from fields to model documentation.\"\"\"\n if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):\n # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link\n postgres_docu = (\n intersphinx_mapping.get(\"django\")[1][0] + \"ref/contrib/postgres/fields/\"\n )\n # include_hidden to get 
also ManyToManyFields\n for field in obj._meta.get_fields(include_hidden=True):\n field_type = type(field).__name__\n field_module = type(field).__module__\n if field_module == \"django.contrib.postgres.fields.array\":\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = (\n f\":type {field.name}: `ArrayField <{postgres_docu}#arrayfield>`_\"\n )\n elif field_module == \"django.contrib.postgres.fields.jsonb\":\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = (\n f\":type {field.name}: `JSONField <{postgres_docu}#jsonfield>`_\"\n )\n elif field_module in [\"django.db.models.fields.related\", \"mptt.fields\"]:\n # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)\n # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)\n remote_model = field.remote_field.get_related_field().model\n type_line = f\":type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`\"\n elif field_module == \"django.db.models.fields.reverse_related\":\n # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)\n remote_model = field.remote_field.model\n type_line = f\":type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`\"\n else:\n if \"django.db.models\" in field_module:\n # Scope with django.db.models * imports (remove all sub-module-paths)\n field_module = \"django.db.models\"\n # Fix type hint to enable correct intersphinx mappings to other documentations\n type_line = f\":type {field.name}: ~{field_module}.{field_type}\"\n # This loop gets the indexes which are needed to update the type hints of the model parameters.\n # It makes it possible to split the parameter section into multiple parts, e.g. 
params inherited from a base\n # model and params of a sub model (otherwise the type hints would not be recognized when separated from\n # the parameter description).\n param_index = None\n next_param_index = None\n type_index = None\n for index, line in enumerate(lines):\n if param_index is None and f\":param {field.name}:\" in line:\n # The index of the field param is only used to determine the next param line\n param_index = index\n elif (\n param_index is not None\n and next_param_index is None\n and (\":param \" in line or line == \"\")\n ):\n # The line of the next param after the field, this is the index where we will insert the type.\n # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.\n # If the line is empty, the param description is finished, even if it extends over multiple lines.\n next_param_index = index\n elif type_index is None and f\":type {field.name}:\" in line:\n # The index of the old type hint, we will either move this line or replace it\n type_index = index\n break\n if next_param_index is None:\n # In case the current field is the last param, we just append the type at the very end of lines\n next_param_index = len(lines)\n # For some params, the type line is not automatically generated and thus the type_index might be `None`\n if type_index is not None:\n # We delete the old type index, because we will replace it with the new type line\n del lines[type_index]\n # Insert the new type line just before the next param\n lines.insert(next_param_index, type_line)\n return lines\n\n\n# -- Source Code links to GitHub ---------------------------------------------\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Link source code to GitHub.\"\"\"\n if domain != \"py\" or not info[\"module\"]:\n return None\n filename = info[\"module\"].replace(\".\", \"/\")\n module = importlib.import_module(info[\"module\"])\n basename = os.path.splitext(module.__file__)[0]\n if basename.endswith(\"__init__\"):\n filename += \"/__init__\"\n item = module\n line_number_reference = \"\"\n for piece in info[\"fullname\"].split(\".\"):\n item = getattr(item, piece)\n try:\n line_number_reference = f\"#L{inspect.getsourcelines(item)[1]}\"\n except (TypeError, IOError):\n pass\n return f\"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}\"\n", "path": "sphinx/conf.py"}], "after_files": [{"content": "\"\"\"\nConfiguration file for the Sphinx documentation builder.\n\nThis file only contains a selection of the most common options. 
For a full\nlist see the documentation:\nhttps://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nimport inspect\nimport importlib\nimport django\n\nfrom backend.settings import VERSION\n\n# Append project source directory to path environment variable\nsys.path.append(os.path.abspath(\"../src/\"))\nos.environ[\"DJANGO_SETTINGS_MODULE\"] = \"backend.settings\"\n\n\n# Setup Django\ndjango.setup()\n\n\ndef setup(app):\n \"\"\"\n Registeration and setup.\n\n This method does the initial setup for the docs generation.\n \"\"\"\n # Register the docstring processor with sphinx to improve the appearance of Django models\n app.connect(\"autodoc-process-docstring\", process_django_models)\n\n\n# -- Project information -----------------------------------------------------\n\n\nproject = \"integreat-cms\"\n# pylint: disable=redefined-builtin\ncopyright = \"2020, Integreat\"\nauthor = \"Integreat\"\n\n# The full version, including alpha/beta/rc tags\nrelease = VERSION\n\n# -- General configuration ---------------------------------------------------\n\n# All enabled sphinx extensions\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.linkcode\",\n \"sphinxcontrib_django\",\n \"sphinx_rtd_theme\",\n \"sphinx_last_updated_by_git\",\n]\n\n# Enable cross-references to other documentations\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3.7\", None),\n \"pipenv\": (\"https://pipenv.pypa.io/en/latest/\", None),\n \"sphinx\": (\"https://www.sphinx-doc.org/en/master/\", None),\n \"sphinx-rtd-tutorial\": (\n \"https://sphinx-rtd-tutorial.readthedocs.io/en/latest/\",\n None,\n ),\n \"django\": (\n \"https://docs.djangoproject.com/en/2.2/\",\n \"https://docs.djangoproject.com/en/2.2/_objects/\",\n ),\n \"django-mptt\": (\"https://django-mptt.readthedocs.io/en/latest/\", None),\n \"wsgi\": (\"https://wsgi.readthedocs.io/en/latest/\", None),\n}\n\n# The path for patched template files\ntemplates_path = [\"templates\"]\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = \"sphinx_rtd_theme\"\n# Do not show the project name, only the logo\nhtml_theme_options = {\n \"logo_only\": False,\n \"collapse_navigation\": False,\n}\n# The logo shown in the menu bar\nhtml_logo = \"../src/cms/static/images/integreat-logo-white.png\"\n# The facivon of the html doc files\nhtml_favicon = \"../src/cms/static/images/favicon.ico\"\n# The url where the docs should be published (via gh-pages)\nhtml_baseurl = \"https://Integreat.github.io/cms-django/\"\n# Do not include links to the documentation source (.rst files) in build\nhtml_show_sourcelink = False\n# Do not include a link to sphinx\nhtml_show_sphinx = False\n# Include last updated timestamp\nhtml_last_updated_fmt = \"%b %d, %Y\"\n\n# -- Modify default Django model parameter types------------------------------\n\n\n# pylint: disable=unused-argument, too-many-locals, too-many-branches\ndef process_django_models(app, what, name, obj, options, lines):\n \"\"\"Append correct param types from fields to model documentation.\"\"\"\n if inspect.isclass(obj) and issubclass(obj, django.db.models.Model):\n # Intersphinx mapping to django.contrib.postgres documentation does not work, so here the manual link\n postgres_docu = (\n intersphinx_mapping.get(\"django\")[1][0] + 
\"ref/contrib/postgres/fields/\"\n )\n # include_hidden to get also ManyToManyFields\n for field in obj._meta.get_fields(include_hidden=True):\n field_type = type(field).__name__\n field_module = type(field).__module__\n if field_module == \"django.contrib.postgres.fields.array\":\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = (\n f\":type {field.name}: `ArrayField <{postgres_docu}#arrayfield>`_\"\n )\n elif field_module == \"django.contrib.postgres.fields.jsonb\":\n # Fix intersphinx mappings for django.contrib.postgres fields\n type_line = (\n f\":type {field.name}: `JSONField <{postgres_docu}#jsonfield>`_\"\n )\n elif field_module in [\"django.db.models.fields.related\", \"mptt.fields\"]:\n # Fix intersphinx mappings for related fields (ForeignKey, OneToOneField, ManyToManyField, ...)\n # Also includes related MPTT fields (TreeForeignKey, TreeOneToOneField, TreeManyToManyField, ...)\n remote_model = field.remote_field.get_related_field().model\n type_line = f\":type {field.name}: {field_type} to :class:`~{remote_model.__module__}.{remote_model.__name__}`\"\n elif field_module == \"django.db.models.fields.reverse_related\":\n # Fix intersphinx mappings for reverse related fields (ManyToOneRel, OneToOneRel, ManyToManyRel, ...)\n remote_model = field.remote_field.model\n type_line = f\":type {field.name}: Reverse {field_type[:-3]} Relation from :class:`~{remote_model.__module__}.{remote_model.__name__}`\"\n else:\n if \"django.db.models\" in field_module:\n # Scope with django.db.models * imports (remove all sub-module-paths)\n field_module = \"django.db.models\"\n # Fix type hint to enable correct intersphinx mappings to other documentations\n type_line = f\":type {field.name}: ~{field_module}.{field_type}\"\n # This loop gets the indexes which are needed to update the type hints of the model parameters.\n # It makes it possible to split the parameter section into multiple parts, e.g. 
params inherited from a base\n # model and params of a sub model (otherwise the type hints would not be recognized when separated from\n # the parameter description).\n param_index = None\n next_param_index = None\n type_index = None\n for index, line in enumerate(lines):\n if param_index is None and f\":param {field.name}:\" in line:\n # The index of the field param is only used to determine the next param line\n param_index = index\n elif (\n param_index is not None\n and next_param_index is None\n and (\":param \" in line or line == \"\")\n ):\n # The line of the next param after the field, this is the index where we will insert the type.\n # Sometimes the param descriptions extend over multiple lines, so we cannot just do param_index + 1.\n # If the line is empty, the param description is finished, even if it extends over multiple lines.\n next_param_index = index\n elif type_index is None and f\":type {field.name}:\" in line:\n # The index of the old type hint, we will either move this line or replace it\n type_index = index\n break\n if next_param_index is None:\n # In case the current field is the last param, we just append the type at the very end of lines\n next_param_index = len(lines)\n # For some params, the type line is not automatically generated and thus the type_index might be `None`\n if type_index is not None:\n # We delete the old type index, because we will replace it with the new type line\n del lines[type_index]\n # Insert the new type line just before the next param\n lines.insert(next_param_index, type_line)\n return lines\n\n\n# -- Source Code links to GitHub ---------------------------------------------\n\n\ndef linkcode_resolve(domain, info):\n \"\"\"Link source code to GitHub.\"\"\"\n if domain != \"py\" or not info[\"module\"]:\n return None\n filename = info[\"module\"].replace(\".\", \"/\")\n module = importlib.import_module(info[\"module\"])\n basename = os.path.splitext(module.__file__)[0]\n if basename.endswith(\"__init__\"):\n filename += \"/__init__\"\n item = module\n line_number_reference = \"\"\n for piece in info[\"fullname\"].split(\".\"):\n item = getattr(item, piece)\n try:\n line_number_reference = f\"#L{inspect.getsourcelines(item)[1]}\"\n except (TypeError, IOError):\n pass\n return f\"https://github.com/Integreat/cms-django/blob/develop/src/{filename}.py{line_number_reference}\"\n", "path": "sphinx/conf.py"}]} | 3,110 | 86 |
gh_patches_debug_43707 | rasdani/github-patches | git_diff | getsentry__sentry-14913 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prevent Referer leaking on password reset page
Right now, when you get a password reset link, the page you end up on contains the reset token in the URL.
This causes a problem: any page redirected to afterwards, and _especially_ any third-party assets being loaded, leak this reset token via the `Referer` header on the request.
While this isn't specifically a problem, since the only things we embed in the case of SaaS are trusted vendors, it's better to just avoid it.
My proposal is to keep the same URL, but have this endpoint do the following (sketched below):
* Accepts the request and puts the token either into the browser session or a browser cookie.
* Redirects away from this URL to a generic password reset form page.
* This new password reset form page then takes the values from the session or cookie.
Doing this prevents the token from leaking through any Referer header in the future.
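A rough sketch of what that flow could look like in Django (the view names, URL names and session key here are illustrative assumptions, not actual Sentry code):

```python
# Sketch of the proposed flow; all names here are assumptions, not Sentry's
# real views or URL patterns.
from django.http import HttpResponseRedirect
from django.urls import reverse


def recover_confirm_entry(request, user_id, hash):
    # Stash the token server-side so it never appears in the final page's URL.
    request.session["password_reset"] = {"user_id": user_id, "hash": hash}
    return HttpResponseRedirect(reverse("password-reset-form"))


def recover_confirm_form(request):
    data = request.session.get("password_reset")
    if data is None:
        # No token in the session: treat the link as expired or invalid.
        return HttpResponseRedirect(reverse("password-reset-expired"))
    # Validate data["user_id"] / data["hash"] and render the reset form here;
    # any Referer sent from this page no longer contains the token.
    ...
```

With this, the tokenized confirm URL only ever answers with a redirect, so it never ends up as the Referer of a rendered page.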
I am open to other proposals, but this is the best that comes to my mind.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/web/decorators.py`
Content:
```
1 from __future__ import absolute_import
2
3 from functools import wraps
4 from django.core.urlresolvers import reverse
5 from django.http import HttpResponseRedirect
6 from django.contrib import messages
7 from django.utils.translation import ugettext_lazy as _
8
9 from sentry.utils import auth
10 from sentry_sdk import Hub
11
12 ERR_BAD_SIGNATURE = _("The link you followed is invalid or expired.")
13
14
15 def login_required(func):
16 @wraps(func)
17 def wrapped(request, *args, **kwargs):
18 if not request.user.is_authenticated():
19 auth.initiate_login(request, next_url=request.get_full_path())
20 if "organization_slug" in kwargs:
21 redirect_uri = reverse(
22 "sentry-auth-organization", args=[kwargs["organization_slug"]]
23 )
24 else:
25 redirect_uri = auth.get_login_url()
26 return HttpResponseRedirect(redirect_uri)
27 return func(request, *args, **kwargs)
28
29 return wrapped
30
31
32 def signed_auth_required(func):
33 @wraps(func)
34 def wrapped(request, *args, **kwargs):
35 if not request.user_from_signed_request:
36 messages.add_message(request, messages.ERROR, ERR_BAD_SIGNATURE)
37 return HttpResponseRedirect(auth.get_login_url())
38 return func(request, *args, **kwargs)
39
40 return wrapped
41
42
43 def transaction_start(endpoint):
44 def decorator(func):
45 @wraps(func)
46 def wrapped(request, *args, **kwargs):
47 with Hub.current.start_transaction(op="http.server", name=endpoint, sampled=True):
48 return func(request, *args, **kwargs)
49
50 return wrapped
51
52 return decorator
53
```
Path: `src/sentry/web/frontend/accounts.py`
Content:
```
1 from __future__ import absolute_import
2
3 import logging
4 from functools import partial, update_wrapper
5
6 from django.contrib import messages
7 from django.contrib.auth import login as login_user, authenticate
8 from django.template.context_processors import csrf
9 from django.core.urlresolvers import reverse
10 from django.db import transaction
11 from django.http import HttpResponseRedirect, Http404, HttpResponse
12 from django.views.decorators.http import require_http_methods
13 from django.views.decorators.cache import never_cache
14 from django.views.decorators.csrf import csrf_protect
15 from django.utils.translation import ugettext as _
16
17 from sentry.models import UserEmail, LostPasswordHash, Project, UserOption, Authenticator
18 from sentry.security import capture_security_activity
19 from sentry.signals import email_verified
20 from sentry.web.decorators import login_required, signed_auth_required
21 from sentry.web.forms.accounts import RecoverPasswordForm, ChangePasswordRecoverForm
22 from sentry.web.helpers import render_to_response
23 from sentry.utils import auth
24
25 logger = logging.getLogger("sentry.accounts")
26
27
28 @login_required
29 def login_redirect(request):
30 login_url = auth.get_login_redirect(request)
31 return HttpResponseRedirect(login_url)
32
33
34 def expired(request, user):
35 password_hash = LostPasswordHash.for_user(user)
36 password_hash.send_email(request)
37
38 context = {"email": password_hash.user.email}
39 return render_to_response("sentry/account/recover/expired.html", context, request)
40
41
42 def recover(request):
43 from sentry.app import ratelimiter
44
45 extra = {
46 "ip_address": request.META["REMOTE_ADDR"],
47 "user_agent": request.META.get("HTTP_USER_AGENT"),
48 }
49
50 if request.method == "POST" and ratelimiter.is_limited(
51 u"accounts:recover:{}".format(extra["ip_address"]),
52 limit=5,
53 window=60, # 5 per minute should be enough for anyone
54 ):
55 logger.warning("recover.rate-limited", extra=extra)
56
57 return HttpResponse(
58 "You have made too many password recovery attempts. Please try again later.",
59 content_type="text/plain",
60 status=429,
61 )
62
63 prefill = {"user": request.GET.get("email")}
64
65 form = RecoverPasswordForm(request.POST or None, initial=prefill)
66 extra["user_recovered"] = form.data.get("user")
67
68 if form.is_valid():
69 email = form.cleaned_data["user"]
70 if email:
71 password_hash = LostPasswordHash.for_user(email)
72 password_hash.send_email(request)
73
74 extra["passwordhash_id"] = password_hash.id
75 extra["user_id"] = password_hash.user_id
76
77 logger.info("recover.sent", extra=extra)
78
79 tpl = "sentry/account/recover/sent.html"
80 context = {"email": email}
81
82 return render_to_response(tpl, context, request)
83
84 if form._errors:
85 logger.warning("recover.error", extra=extra)
86
87 tpl = "sentry/account/recover/index.html"
88 context = {"form": form}
89
90 return render_to_response(tpl, context, request)
91
92
93 def recover_confirm(request, user_id, hash, mode="recover"):
94 try:
95 password_hash = LostPasswordHash.objects.get(user=user_id, hash=hash)
96 if not password_hash.is_valid():
97 password_hash.delete()
98 raise LostPasswordHash.DoesNotExist
99 user = password_hash.user
100
101 except LostPasswordHash.DoesNotExist:
102 return render_to_response(u"sentry/account/{}/{}.html".format(mode, "failure"), {}, request)
103
104 if request.method == "POST":
105 form = ChangePasswordRecoverForm(request.POST)
106 if form.is_valid():
107 with transaction.atomic():
108 user.set_password(form.cleaned_data["password"])
109 user.refresh_session_nonce(request)
110 user.save()
111
112 # Ugly way of doing this, but Django requires the backend be set
113 user = authenticate(username=user.username, password=form.cleaned_data["password"])
114
115 # Only log the user in if there is no two-factor on the
116 # account.
117 if not Authenticator.objects.user_has_2fa(user):
118 login_user(request, user)
119
120 password_hash.delete()
121
122 capture_security_activity(
123 account=user,
124 type="password-changed",
125 actor=request.user,
126 ip_address=request.META["REMOTE_ADDR"],
127 send_email=True,
128 )
129
130 return login_redirect(request)
131 else:
132 form = ChangePasswordRecoverForm()
133
134 return render_to_response(
135 u"sentry/account/{}/{}.html".format(mode, "confirm"), {"form": form}, request
136 )
137
138
139 # Set password variation of password recovery
140 set_password_confirm = partial(recover_confirm, mode="set_password")
141 set_password_confirm = update_wrapper(set_password_confirm, recover)
142
143
144 @login_required
145 @require_http_methods(["POST"])
146 def start_confirm_email(request):
147 from sentry.app import ratelimiter
148
149 if ratelimiter.is_limited(
150 u"auth:confirm-email:{}".format(request.user.id),
151 limit=10,
152 window=60, # 10 per minute should be enough for anyone
153 ):
154 return HttpResponse(
155 "You have made too many email confirmation requests. Please try again later.",
156 content_type="text/plain",
157 status=429,
158 )
159
160 if "primary-email" in request.POST:
161 email = request.POST.get("email")
162 try:
163 email_to_send = UserEmail.objects.get(user=request.user, email=email)
164 except UserEmail.DoesNotExist:
165 msg = _("There was an error confirming your email.")
166 level = messages.ERROR
167 else:
168 request.user.send_confirm_email_singular(email_to_send)
169 msg = _("A verification email has been sent to %s.") % (email)
170 level = messages.SUCCESS
171 messages.add_message(request, level, msg)
172 return HttpResponseRedirect(reverse("sentry-account-settings"))
173 elif request.user.has_unverified_emails():
174 request.user.send_confirm_emails()
175 unverified_emails = [e.email for e in request.user.get_unverified_emails()]
176 msg = _("A verification email has been sent to %s.") % (", ").join(unverified_emails)
177 for email in unverified_emails:
178 logger.info(
179 "user.email.start_confirm",
180 extra={
181 "user_id": request.user.id,
182 "ip_address": request.META["REMOTE_ADDR"],
183 "email": email,
184 },
185 )
186 else:
187 msg = _("Your email (%s) has already been verified.") % request.user.email
188 messages.add_message(request, messages.SUCCESS, msg)
189 return HttpResponseRedirect(reverse("sentry-account-settings-emails"))
190
191
192 def confirm_email(request, user_id, hash):
193 msg = _("Thanks for confirming your email")
194 level = messages.SUCCESS
195 try:
196 email = UserEmail.objects.get(user=user_id, validation_hash=hash)
197 if not email.hash_is_valid():
198 raise UserEmail.DoesNotExist
199 except UserEmail.DoesNotExist:
200 if request.user.is_anonymous() or request.user.has_unverified_emails():
201 msg = _(
202 "There was an error confirming your email. Please try again or "
203 "visit your Account Settings to resend the verification email."
204 )
205 level = messages.ERROR
206 else:
207 email.is_verified = True
208 email.validation_hash = ""
209 email.save()
210 email_verified.send(email=email.email, sender=email)
211 logger.info(
212 "user.email.confirm",
213 extra={
214 "user_id": user_id,
215 "ip_address": request.META["REMOTE_ADDR"],
216 "email": email.email,
217 },
218 )
219 messages.add_message(request, level, msg)
220 return HttpResponseRedirect(reverse("sentry-account-settings-emails"))
221
222
223 @csrf_protect
224 @never_cache
225 @signed_auth_required
226 @transaction.atomic
227 def email_unsubscribe_project(request, project_id):
228 # For now we only support getting here from the signed link.
229 if not request.user_from_signed_request:
230 raise Http404()
231 try:
232 project = Project.objects.get(pk=project_id)
233 except Project.DoesNotExist:
234 raise Http404()
235
236 if request.method == "POST":
237 if "cancel" not in request.POST:
238 UserOption.objects.set_value(
239 user=request.user, key="mail:alert", value=0, project=project
240 )
241 return HttpResponseRedirect(auth.get_login_url())
242
243 context = csrf(request)
244 context["project"] = project
245 return render_to_response("sentry/account/email_unsubscribe_project.html", context, request)
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/web/decorators.py b/src/sentry/web/decorators.py
--- a/src/sentry/web/decorators.py
+++ b/src/sentry/web/decorators.py
@@ -40,6 +40,19 @@
return wrapped
+def set_referrer_policy(policy):
+ def real_decorator(func):
+ @wraps(func)
+ def wrapped(request, *args, **kwargs):
+ response = func(request, *args, **kwargs)
+ response["Referrer-Policy"] = policy
+ return response
+
+ return wrapped
+
+ return real_decorator
+
+
def transaction_start(endpoint):
def decorator(func):
@wraps(func)
diff --git a/src/sentry/web/frontend/accounts.py b/src/sentry/web/frontend/accounts.py
--- a/src/sentry/web/frontend/accounts.py
+++ b/src/sentry/web/frontend/accounts.py
@@ -17,7 +17,7 @@
from sentry.models import UserEmail, LostPasswordHash, Project, UserOption, Authenticator
from sentry.security import capture_security_activity
from sentry.signals import email_verified
-from sentry.web.decorators import login_required, signed_auth_required
+from sentry.web.decorators import login_required, signed_auth_required, set_referrer_policy
from sentry.web.forms.accounts import RecoverPasswordForm, ChangePasswordRecoverForm
from sentry.web.helpers import render_to_response
from sentry.utils import auth
@@ -25,6 +25,10 @@
logger = logging.getLogger("sentry.accounts")
+def get_template(mode, name):
+ return u"sentry/account/{}/{}.html".format(mode, name)
+
+
@login_required
def login_redirect(request):
login_url = auth.get_login_redirect(request)
@@ -36,7 +40,7 @@
password_hash.send_email(request)
context = {"email": password_hash.user.email}
- return render_to_response("sentry/account/recover/expired.html", context, request)
+ return render_to_response(get_template("recover", "expired"), context, request)
def recover(request):
@@ -76,20 +80,19 @@
logger.info("recover.sent", extra=extra)
- tpl = "sentry/account/recover/sent.html"
context = {"email": email}
- return render_to_response(tpl, context, request)
+ return render_to_response(get_template("recover", "sent"), context, request)
if form._errors:
logger.warning("recover.error", extra=extra)
- tpl = "sentry/account/recover/index.html"
context = {"form": form}
- return render_to_response(tpl, context, request)
+ return render_to_response(get_template("recover", "index"), context, request)
+@set_referrer_policy("strict-origin-when-cross-origin")
def recover_confirm(request, user_id, hash, mode="recover"):
try:
password_hash = LostPasswordHash.objects.get(user=user_id, hash=hash)
@@ -99,7 +102,7 @@
user = password_hash.user
except LostPasswordHash.DoesNotExist:
- return render_to_response(u"sentry/account/{}/{}.html".format(mode, "failure"), {}, request)
+ return render_to_response(get_template(mode, "failure"), {}, request)
if request.method == "POST":
form = ChangePasswordRecoverForm(request.POST)
@@ -131,9 +134,7 @@
else:
form = ChangePasswordRecoverForm()
- return render_to_response(
- u"sentry/account/{}/{}.html".format(mode, "confirm"), {"form": form}, request
- )
+ return render_to_response(get_template(mode, "confirm"), {"form": form}, request)
# Set password variation of password recovery
@@ -189,6 +190,7 @@
return HttpResponseRedirect(reverse("sentry-account-settings-emails"))
+@set_referrer_policy("strict-origin-when-cross-origin")
def confirm_email(request, user_id, hash):
msg = _("Thanks for confirming your email")
level = messages.SUCCESS
| {"golden_diff": "diff --git a/src/sentry/web/decorators.py b/src/sentry/web/decorators.py\n--- a/src/sentry/web/decorators.py\n+++ b/src/sentry/web/decorators.py\n@@ -40,6 +40,19 @@\n return wrapped\n \n \n+def set_referrer_policy(policy):\n+ def real_decorator(func):\n+ @wraps(func)\n+ def wrapped(request, *args, **kwargs):\n+ response = func(request, *args, **kwargs)\n+ response[\"Referrer-Policy\"] = policy\n+ return response\n+\n+ return wrapped\n+\n+ return real_decorator\n+\n+\n def transaction_start(endpoint):\n def decorator(func):\n @wraps(func)\ndiff --git a/src/sentry/web/frontend/accounts.py b/src/sentry/web/frontend/accounts.py\n--- a/src/sentry/web/frontend/accounts.py\n+++ b/src/sentry/web/frontend/accounts.py\n@@ -17,7 +17,7 @@\n from sentry.models import UserEmail, LostPasswordHash, Project, UserOption, Authenticator\n from sentry.security import capture_security_activity\n from sentry.signals import email_verified\n-from sentry.web.decorators import login_required, signed_auth_required\n+from sentry.web.decorators import login_required, signed_auth_required, set_referrer_policy\n from sentry.web.forms.accounts import RecoverPasswordForm, ChangePasswordRecoverForm\n from sentry.web.helpers import render_to_response\n from sentry.utils import auth\n@@ -25,6 +25,10 @@\n logger = logging.getLogger(\"sentry.accounts\")\n \n \n+def get_template(mode, name):\n+ return u\"sentry/account/{}/{}.html\".format(mode, name)\n+\n+\n @login_required\n def login_redirect(request):\n login_url = auth.get_login_redirect(request)\n@@ -36,7 +40,7 @@\n password_hash.send_email(request)\n \n context = {\"email\": password_hash.user.email}\n- return render_to_response(\"sentry/account/recover/expired.html\", context, request)\n+ return render_to_response(get_template(\"recover\", \"expired\"), context, request)\n \n \n def recover(request):\n@@ -76,20 +80,19 @@\n \n logger.info(\"recover.sent\", extra=extra)\n \n- tpl = \"sentry/account/recover/sent.html\"\n context = {\"email\": email}\n \n- return render_to_response(tpl, context, request)\n+ return render_to_response(get_template(\"recover\", \"sent\"), context, request)\n \n if form._errors:\n logger.warning(\"recover.error\", extra=extra)\n \n- tpl = \"sentry/account/recover/index.html\"\n context = {\"form\": form}\n \n- return render_to_response(tpl, context, request)\n+ return render_to_response(get_template(\"recover\", \"index\"), context, request)\n \n \n+@set_referrer_policy(\"strict-origin-when-cross-origin\")\n def recover_confirm(request, user_id, hash, mode=\"recover\"):\n try:\n password_hash = LostPasswordHash.objects.get(user=user_id, hash=hash)\n@@ -99,7 +102,7 @@\n user = password_hash.user\n \n except LostPasswordHash.DoesNotExist:\n- return render_to_response(u\"sentry/account/{}/{}.html\".format(mode, \"failure\"), {}, request)\n+ return render_to_response(get_template(mode, \"failure\"), {}, request)\n \n if request.method == \"POST\":\n form = ChangePasswordRecoverForm(request.POST)\n@@ -131,9 +134,7 @@\n else:\n form = ChangePasswordRecoverForm()\n \n- return render_to_response(\n- u\"sentry/account/{}/{}.html\".format(mode, \"confirm\"), {\"form\": form}, request\n- )\n+ return render_to_response(get_template(mode, \"confirm\"), {\"form\": form}, request)\n \n \n # Set password variation of password recovery\n@@ -189,6 +190,7 @@\n return HttpResponseRedirect(reverse(\"sentry-account-settings-emails\"))\n \n \n+@set_referrer_policy(\"strict-origin-when-cross-origin\")\n def confirm_email(request, user_id, hash):\n msg 
= _(\"Thanks for confirming your email\")\n level = messages.SUCCESS\n", "issue": "Prevent Referer leaking on password reset page\nRight now when you get a password reset link, the page you end up on contains the reset token in the url.\r\n\r\nDoing this causes a problem in which any page redirected to after this, and _especially_ any third party assets being loaded leak this reset token via the `Referer` header on the request.\r\n\r\nWhile this isn't specifically a problem since the only things we embed are trusted vendors, in the case of SaaS, it's better to just avoid it.\r\n\r\nMy proposal is to keep the same url, but this endpoint does the following:\r\n\r\n* Accepts the request, and puts the token either into the browser session or browser's cookie.\r\n* Redirect away from this url, to a generic password reset form page.\r\n* This new password reset form page then takes the values from your session or cookie.\r\n\r\nDoing this mitigates any ability for leaking data through any Referer header in the future.\r\n\r\nI am open to other proposals, but this is the best that comes to my mind.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom functools import wraps\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.contrib import messages\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sentry.utils import auth\nfrom sentry_sdk import Hub\n\nERR_BAD_SIGNATURE = _(\"The link you followed is invalid or expired.\")\n\n\ndef login_required(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if not request.user.is_authenticated():\n auth.initiate_login(request, next_url=request.get_full_path())\n if \"organization_slug\" in kwargs:\n redirect_uri = reverse(\n \"sentry-auth-organization\", args=[kwargs[\"organization_slug\"]]\n )\n else:\n redirect_uri = auth.get_login_url()\n return HttpResponseRedirect(redirect_uri)\n return func(request, *args, **kwargs)\n\n return wrapped\n\n\ndef signed_auth_required(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if not request.user_from_signed_request:\n messages.add_message(request, messages.ERROR, ERR_BAD_SIGNATURE)\n return HttpResponseRedirect(auth.get_login_url())\n return func(request, *args, **kwargs)\n\n return wrapped\n\n\ndef transaction_start(endpoint):\n def decorator(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n with Hub.current.start_transaction(op=\"http.server\", name=endpoint, sampled=True):\n return func(request, *args, **kwargs)\n\n return wrapped\n\n return decorator\n", "path": "src/sentry/web/decorators.py"}, {"content": "from __future__ import absolute_import\n\nimport logging\nfrom functools import partial, update_wrapper\n\nfrom django.contrib import messages\nfrom django.contrib.auth import login as login_user, authenticate\nfrom django.template.context_processors import csrf\nfrom django.core.urlresolvers import reverse\nfrom django.db import transaction\nfrom django.http import HttpResponseRedirect, Http404, HttpResponse\nfrom django.views.decorators.http import require_http_methods\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.csrf import csrf_protect\nfrom django.utils.translation import ugettext as _\n\nfrom sentry.models import UserEmail, LostPasswordHash, Project, UserOption, Authenticator\nfrom sentry.security import capture_security_activity\nfrom sentry.signals import email_verified\nfrom sentry.web.decorators import login_required, 
signed_auth_required\nfrom sentry.web.forms.accounts import RecoverPasswordForm, ChangePasswordRecoverForm\nfrom sentry.web.helpers import render_to_response\nfrom sentry.utils import auth\n\nlogger = logging.getLogger(\"sentry.accounts\")\n\n\n@login_required\ndef login_redirect(request):\n login_url = auth.get_login_redirect(request)\n return HttpResponseRedirect(login_url)\n\n\ndef expired(request, user):\n password_hash = LostPasswordHash.for_user(user)\n password_hash.send_email(request)\n\n context = {\"email\": password_hash.user.email}\n return render_to_response(\"sentry/account/recover/expired.html\", context, request)\n\n\ndef recover(request):\n from sentry.app import ratelimiter\n\n extra = {\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"user_agent\": request.META.get(\"HTTP_USER_AGENT\"),\n }\n\n if request.method == \"POST\" and ratelimiter.is_limited(\n u\"accounts:recover:{}\".format(extra[\"ip_address\"]),\n limit=5,\n window=60, # 5 per minute should be enough for anyone\n ):\n logger.warning(\"recover.rate-limited\", extra=extra)\n\n return HttpResponse(\n \"You have made too many password recovery attempts. Please try again later.\",\n content_type=\"text/plain\",\n status=429,\n )\n\n prefill = {\"user\": request.GET.get(\"email\")}\n\n form = RecoverPasswordForm(request.POST or None, initial=prefill)\n extra[\"user_recovered\"] = form.data.get(\"user\")\n\n if form.is_valid():\n email = form.cleaned_data[\"user\"]\n if email:\n password_hash = LostPasswordHash.for_user(email)\n password_hash.send_email(request)\n\n extra[\"passwordhash_id\"] = password_hash.id\n extra[\"user_id\"] = password_hash.user_id\n\n logger.info(\"recover.sent\", extra=extra)\n\n tpl = \"sentry/account/recover/sent.html\"\n context = {\"email\": email}\n\n return render_to_response(tpl, context, request)\n\n if form._errors:\n logger.warning(\"recover.error\", extra=extra)\n\n tpl = \"sentry/account/recover/index.html\"\n context = {\"form\": form}\n\n return render_to_response(tpl, context, request)\n\n\ndef recover_confirm(request, user_id, hash, mode=\"recover\"):\n try:\n password_hash = LostPasswordHash.objects.get(user=user_id, hash=hash)\n if not password_hash.is_valid():\n password_hash.delete()\n raise LostPasswordHash.DoesNotExist\n user = password_hash.user\n\n except LostPasswordHash.DoesNotExist:\n return render_to_response(u\"sentry/account/{}/{}.html\".format(mode, \"failure\"), {}, request)\n\n if request.method == \"POST\":\n form = ChangePasswordRecoverForm(request.POST)\n if form.is_valid():\n with transaction.atomic():\n user.set_password(form.cleaned_data[\"password\"])\n user.refresh_session_nonce(request)\n user.save()\n\n # Ugly way of doing this, but Django requires the backend be set\n user = authenticate(username=user.username, password=form.cleaned_data[\"password\"])\n\n # Only log the user in if there is no two-factor on the\n # account.\n if not Authenticator.objects.user_has_2fa(user):\n login_user(request, user)\n\n password_hash.delete()\n\n capture_security_activity(\n account=user,\n type=\"password-changed\",\n actor=request.user,\n ip_address=request.META[\"REMOTE_ADDR\"],\n send_email=True,\n )\n\n return login_redirect(request)\n else:\n form = ChangePasswordRecoverForm()\n\n return render_to_response(\n u\"sentry/account/{}/{}.html\".format(mode, \"confirm\"), {\"form\": form}, request\n )\n\n\n# Set password variation of password recovery\nset_password_confirm = partial(recover_confirm, mode=\"set_password\")\nset_password_confirm = 
update_wrapper(set_password_confirm, recover)\n\n\n@login_required\n@require_http_methods([\"POST\"])\ndef start_confirm_email(request):\n from sentry.app import ratelimiter\n\n if ratelimiter.is_limited(\n u\"auth:confirm-email:{}\".format(request.user.id),\n limit=10,\n window=60, # 10 per minute should be enough for anyone\n ):\n return HttpResponse(\n \"You have made too many email confirmation requests. Please try again later.\",\n content_type=\"text/plain\",\n status=429,\n )\n\n if \"primary-email\" in request.POST:\n email = request.POST.get(\"email\")\n try:\n email_to_send = UserEmail.objects.get(user=request.user, email=email)\n except UserEmail.DoesNotExist:\n msg = _(\"There was an error confirming your email.\")\n level = messages.ERROR\n else:\n request.user.send_confirm_email_singular(email_to_send)\n msg = _(\"A verification email has been sent to %s.\") % (email)\n level = messages.SUCCESS\n messages.add_message(request, level, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings\"))\n elif request.user.has_unverified_emails():\n request.user.send_confirm_emails()\n unverified_emails = [e.email for e in request.user.get_unverified_emails()]\n msg = _(\"A verification email has been sent to %s.\") % (\", \").join(unverified_emails)\n for email in unverified_emails:\n logger.info(\n \"user.email.start_confirm\",\n extra={\n \"user_id\": request.user.id,\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"email\": email,\n },\n )\n else:\n msg = _(\"Your email (%s) has already been verified.\") % request.user.email\n messages.add_message(request, messages.SUCCESS, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings-emails\"))\n\n\ndef confirm_email(request, user_id, hash):\n msg = _(\"Thanks for confirming your email\")\n level = messages.SUCCESS\n try:\n email = UserEmail.objects.get(user=user_id, validation_hash=hash)\n if not email.hash_is_valid():\n raise UserEmail.DoesNotExist\n except UserEmail.DoesNotExist:\n if request.user.is_anonymous() or request.user.has_unverified_emails():\n msg = _(\n \"There was an error confirming your email. 
Please try again or \"\n \"visit your Account Settings to resend the verification email.\"\n )\n level = messages.ERROR\n else:\n email.is_verified = True\n email.validation_hash = \"\"\n email.save()\n email_verified.send(email=email.email, sender=email)\n logger.info(\n \"user.email.confirm\",\n extra={\n \"user_id\": user_id,\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"email\": email.email,\n },\n )\n messages.add_message(request, level, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings-emails\"))\n\n\n@csrf_protect\n@never_cache\n@signed_auth_required\[email protected]\ndef email_unsubscribe_project(request, project_id):\n # For now we only support getting here from the signed link.\n if not request.user_from_signed_request:\n raise Http404()\n try:\n project = Project.objects.get(pk=project_id)\n except Project.DoesNotExist:\n raise Http404()\n\n if request.method == \"POST\":\n if \"cancel\" not in request.POST:\n UserOption.objects.set_value(\n user=request.user, key=\"mail:alert\", value=0, project=project\n )\n return HttpResponseRedirect(auth.get_login_url())\n\n context = csrf(request)\n context[\"project\"] = project\n return render_to_response(\"sentry/account/email_unsubscribe_project.html\", context, request)\n", "path": "src/sentry/web/frontend/accounts.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom functools import wraps\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.contrib import messages\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sentry.utils import auth\nfrom sentry_sdk import Hub\n\nERR_BAD_SIGNATURE = _(\"The link you followed is invalid or expired.\")\n\n\ndef login_required(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if not request.user.is_authenticated():\n auth.initiate_login(request, next_url=request.get_full_path())\n if \"organization_slug\" in kwargs:\n redirect_uri = reverse(\n \"sentry-auth-organization\", args=[kwargs[\"organization_slug\"]]\n )\n else:\n redirect_uri = auth.get_login_url()\n return HttpResponseRedirect(redirect_uri)\n return func(request, *args, **kwargs)\n\n return wrapped\n\n\ndef signed_auth_required(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if not request.user_from_signed_request:\n messages.add_message(request, messages.ERROR, ERR_BAD_SIGNATURE)\n return HttpResponseRedirect(auth.get_login_url())\n return func(request, *args, **kwargs)\n\n return wrapped\n\n\ndef set_referrer_policy(policy):\n def real_decorator(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n response = func(request, *args, **kwargs)\n response[\"Referrer-Policy\"] = policy\n return response\n\n return wrapped\n\n return real_decorator\n\n\ndef transaction_start(endpoint):\n def decorator(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n with Hub.current.start_transaction(op=\"http.server\", name=endpoint, sampled=True):\n return func(request, *args, **kwargs)\n\n return wrapped\n\n return decorator\n", "path": "src/sentry/web/decorators.py"}, {"content": "from __future__ import absolute_import\n\nimport logging\nfrom functools import partial, update_wrapper\n\nfrom django.contrib import messages\nfrom django.contrib.auth import login as login_user, authenticate\nfrom django.template.context_processors import csrf\nfrom django.core.urlresolvers import reverse\nfrom django.db import transaction\nfrom django.http import HttpResponseRedirect, Http404, 
HttpResponse\nfrom django.views.decorators.http import require_http_methods\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.csrf import csrf_protect\nfrom django.utils.translation import ugettext as _\n\nfrom sentry.models import UserEmail, LostPasswordHash, Project, UserOption, Authenticator\nfrom sentry.security import capture_security_activity\nfrom sentry.signals import email_verified\nfrom sentry.web.decorators import login_required, signed_auth_required, set_referrer_policy\nfrom sentry.web.forms.accounts import RecoverPasswordForm, ChangePasswordRecoverForm\nfrom sentry.web.helpers import render_to_response\nfrom sentry.utils import auth\n\nlogger = logging.getLogger(\"sentry.accounts\")\n\n\ndef get_template(mode, name):\n return u\"sentry/account/{}/{}.html\".format(mode, name)\n\n\n@login_required\ndef login_redirect(request):\n login_url = auth.get_login_redirect(request)\n return HttpResponseRedirect(login_url)\n\n\ndef expired(request, user):\n password_hash = LostPasswordHash.for_user(user)\n password_hash.send_email(request)\n\n context = {\"email\": password_hash.user.email}\n return render_to_response(get_template(\"recover\", \"expired\"), context, request)\n\n\ndef recover(request):\n from sentry.app import ratelimiter\n\n extra = {\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"user_agent\": request.META.get(\"HTTP_USER_AGENT\"),\n }\n\n if request.method == \"POST\" and ratelimiter.is_limited(\n u\"accounts:recover:{}\".format(extra[\"ip_address\"]),\n limit=5,\n window=60, # 5 per minute should be enough for anyone\n ):\n logger.warning(\"recover.rate-limited\", extra=extra)\n\n return HttpResponse(\n \"You have made too many password recovery attempts. Please try again later.\",\n content_type=\"text/plain\",\n status=429,\n )\n\n prefill = {\"user\": request.GET.get(\"email\")}\n\n form = RecoverPasswordForm(request.POST or None, initial=prefill)\n extra[\"user_recovered\"] = form.data.get(\"user\")\n\n if form.is_valid():\n email = form.cleaned_data[\"user\"]\n if email:\n password_hash = LostPasswordHash.for_user(email)\n password_hash.send_email(request)\n\n extra[\"passwordhash_id\"] = password_hash.id\n extra[\"user_id\"] = password_hash.user_id\n\n logger.info(\"recover.sent\", extra=extra)\n\n context = {\"email\": email}\n\n return render_to_response(get_template(\"recover\", \"sent\"), context, request)\n\n if form._errors:\n logger.warning(\"recover.error\", extra=extra)\n\n context = {\"form\": form}\n\n return render_to_response(get_template(\"recover\", \"index\"), context, request)\n\n\n@set_referrer_policy(\"strict-origin-when-cross-origin\")\ndef recover_confirm(request, user_id, hash, mode=\"recover\"):\n try:\n password_hash = LostPasswordHash.objects.get(user=user_id, hash=hash)\n if not password_hash.is_valid():\n password_hash.delete()\n raise LostPasswordHash.DoesNotExist\n user = password_hash.user\n\n except LostPasswordHash.DoesNotExist:\n return render_to_response(get_template(mode, \"failure\"), {}, request)\n\n if request.method == \"POST\":\n form = ChangePasswordRecoverForm(request.POST)\n if form.is_valid():\n with transaction.atomic():\n user.set_password(form.cleaned_data[\"password\"])\n user.refresh_session_nonce(request)\n user.save()\n\n # Ugly way of doing this, but Django requires the backend be set\n user = authenticate(username=user.username, password=form.cleaned_data[\"password\"])\n\n # Only log the user in if there is no two-factor on the\n # account.\n if not 
Authenticator.objects.user_has_2fa(user):\n login_user(request, user)\n\n password_hash.delete()\n\n capture_security_activity(\n account=user,\n type=\"password-changed\",\n actor=request.user,\n ip_address=request.META[\"REMOTE_ADDR\"],\n send_email=True,\n )\n\n return login_redirect(request)\n else:\n form = ChangePasswordRecoverForm()\n\n return render_to_response(get_template(mode, \"confirm\"), {\"form\": form}, request)\n\n\n# Set password variation of password recovery\nset_password_confirm = partial(recover_confirm, mode=\"set_password\")\nset_password_confirm = update_wrapper(set_password_confirm, recover)\n\n\n@login_required\n@require_http_methods([\"POST\"])\ndef start_confirm_email(request):\n from sentry.app import ratelimiter\n\n if ratelimiter.is_limited(\n u\"auth:confirm-email:{}\".format(request.user.id),\n limit=10,\n window=60, # 10 per minute should be enough for anyone\n ):\n return HttpResponse(\n \"You have made too many email confirmation requests. Please try again later.\",\n content_type=\"text/plain\",\n status=429,\n )\n\n if \"primary-email\" in request.POST:\n email = request.POST.get(\"email\")\n try:\n email_to_send = UserEmail.objects.get(user=request.user, email=email)\n except UserEmail.DoesNotExist:\n msg = _(\"There was an error confirming your email.\")\n level = messages.ERROR\n else:\n request.user.send_confirm_email_singular(email_to_send)\n msg = _(\"A verification email has been sent to %s.\") % (email)\n level = messages.SUCCESS\n messages.add_message(request, level, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings\"))\n elif request.user.has_unverified_emails():\n request.user.send_confirm_emails()\n unverified_emails = [e.email for e in request.user.get_unverified_emails()]\n msg = _(\"A verification email has been sent to %s.\") % (\", \").join(unverified_emails)\n for email in unverified_emails:\n logger.info(\n \"user.email.start_confirm\",\n extra={\n \"user_id\": request.user.id,\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"email\": email,\n },\n )\n else:\n msg = _(\"Your email (%s) has already been verified.\") % request.user.email\n messages.add_message(request, messages.SUCCESS, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings-emails\"))\n\n\n@set_referrer_policy(\"strict-origin-when-cross-origin\")\ndef confirm_email(request, user_id, hash):\n msg = _(\"Thanks for confirming your email\")\n level = messages.SUCCESS\n try:\n email = UserEmail.objects.get(user=user_id, validation_hash=hash)\n if not email.hash_is_valid():\n raise UserEmail.DoesNotExist\n except UserEmail.DoesNotExist:\n if request.user.is_anonymous() or request.user.has_unverified_emails():\n msg = _(\n \"There was an error confirming your email. 
Please try again or \"\n \"visit your Account Settings to resend the verification email.\"\n )\n level = messages.ERROR\n else:\n email.is_verified = True\n email.validation_hash = \"\"\n email.save()\n email_verified.send(email=email.email, sender=email)\n logger.info(\n \"user.email.confirm\",\n extra={\n \"user_id\": user_id,\n \"ip_address\": request.META[\"REMOTE_ADDR\"],\n \"email\": email.email,\n },\n )\n messages.add_message(request, level, msg)\n return HttpResponseRedirect(reverse(\"sentry-account-settings-emails\"))\n\n\n@csrf_protect\n@never_cache\n@signed_auth_required\[email protected]\ndef email_unsubscribe_project(request, project_id):\n # For now we only support getting here from the signed link.\n if not request.user_from_signed_request:\n raise Http404()\n try:\n project = Project.objects.get(pk=project_id)\n except Project.DoesNotExist:\n raise Http404()\n\n if request.method == \"POST\":\n if \"cancel\" not in request.POST:\n UserOption.objects.set_value(\n user=request.user, key=\"mail:alert\", value=0, project=project\n )\n return HttpResponseRedirect(auth.get_login_url())\n\n context = csrf(request)\n context[\"project\"] = project\n return render_to_response(\"sentry/account/email_unsubscribe_project.html\", context, request)\n", "path": "src/sentry/web/frontend/accounts.py"}]} | 3,334 | 898 |
gh_patches_debug_17497 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-9319 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] faster-rcnn_r50-caffe_fpn_ms-2x and 3x don't work
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmdetection/issues) and [Discussions](https://github.com/open-mmlab/mmdetection/discussions) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
- [X] The bug has not been fixed in the [latest version (master)](https://github.com/open-mmlab/mmdetection) or [latest version (3.x)](https://github.com/open-mmlab/mmdetection/tree/dev-3.x).
### Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
### Branch
3.x branch https://github.com/open-mmlab/mmdetection/tree/3.x
### Environment
OrderedDict([('sys.platform', 'linux'),
('Python', '3.7.15 (default, Oct 12 2022, 19:14:55) [GCC 7.5.0]'),
('CUDA available', True),
('numpy_random_seed', 2147483648),
('GPU 0', 'Tesla T4'),
('CUDA_HOME', '/usr/local/cuda'),
('NVCC', 'Cuda compilation tools, release 11.2, V11.2.152'),
('GCC',
'x86_64-linux-gnu-gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0'),
('PyTorch', '1.9.0+cu111'),
('PyTorch compiling details',
'PyTorch built with:\n - GCC 7.3\n - C++ Version: 201402\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.1\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\n - CuDNN 8.0.5\n - Magma 2.5.2\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, \n'),
('TorchVision', '0.10.0+cu111'),
('OpenCV', '4.6.0'),
('MMEngine', '0.3.0'),
('MMDetection', '2.25.3+5b0d5b4')])
### Reproduces the problem - code sample
(the other models work well)
```python
!mim download mmdet --config faster-rcnn_r50-caffe_fpn_ms-2x_coco --dest ./checkpoints
```
and then we see
`faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth` or `faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_20210526_095054-1f77628b`
### Reproduces the problem - command or script
``` python
from mmdet.apis import init_detector, inference_detector
from mmdet.utils import register_all_modules
from mmdet.registry import VISUALIZERS
import mmcv
register_all_modules()
config_file = './checkpoints/faster-rcnn_r50-caffe_fpn_ms-2x_coco.py'
checkpoint_file = './checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth'
model = init_detector(config_file, checkpoint_file, device='cpu') # or device='cuda:0'
image = mmcv.imread( "demo/demo.jpg", channel_order='rgb')
result = inference_detector(model, image)
print(result)
```
### Reproduces the problem - error message
and i see:
```
local loads checkpoint from path: ./checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth
<DetDataSample(
META INFORMATION
img_path: None
pad_shape: (800, 1216)
scale_factor: (1.8734375, 1.873536299765808)
img_id: 0
ori_shape: (427, 640)
batch_input_shape: (800, 1216)
img_shape: (800, 1199)
DATA FIELDS
gt_instances: <InstanceData(
META INFORMATION
DATA FIELDS
labels: tensor([], dtype=torch.int64)
bboxes: tensor([], size=(0, 4))
) at 0x7f8e7cabc150>
_ignored_instances: <InstanceData(
META INFORMATION
DATA FIELDS
labels: tensor([], dtype=torch.int64)
bboxes: tensor([], size=(0, 4))
) at 0x7f8e7cabce90>
_pred_instances: <InstanceData(
META INFORMATION
DATA FIELDS
scores: tensor([])
labels: tensor([], dtype=torch.int64)
bboxes: tensor([], size=(0, 4))
) at 0x7f8e7f14b190>
_gt_instances: <InstanceData(
META INFORMATION
DATA FIELDS
labels: tensor([], dtype=torch.int64)
bboxes: tensor([], size=(0, 4))
) at 0x7f8e7cabc150>
```
All results are empty.
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py`
Content:
```
1 _base_ = './faster-rcnn_r50_fpn_1x_coco.py'
2 model = dict(
3 backbone=dict(
4 norm_cfg=dict(requires_grad=False),
5 norm_eval=True,
6 style='caffe',
7 init_cfg=dict(
8 type='Pretrained',
9 checkpoint='open-mmlab://detectron2/resnet50_caffe')))
10 # use caffe img_norm
11 img_norm_cfg = dict(
12 mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
13 train_pipeline = [
14 dict(type='LoadImageFromFile'),
15 dict(type='LoadAnnotations', with_bbox=True),
16 dict(
17 type='Resize',
18 img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
19 (1333, 768), (1333, 800)],
20 multiscale_mode='value',
21 keep_ratio=True),
22 dict(type='RandomFlip', flip_ratio=0.5),
23 dict(type='Normalize', **img_norm_cfg),
24 dict(type='Pad', size_divisor=32),
25 dict(type='DefaultFormatBundle'),
26 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
27 ]
28 test_pipeline = [
29 dict(type='LoadImageFromFile'),
30 dict(
31 type='MultiScaleFlipAug',
32 img_scale=(1333, 800),
33 flip=False,
34 transforms=[
35 dict(type='Resize', keep_ratio=True),
36 dict(type='RandomFlip'),
37 dict(type='Normalize', **img_norm_cfg),
38 dict(type='Pad', size_divisor=32),
39 dict(type='ImageToTensor', keys=['img']),
40 dict(type='Collect', keys=['img']),
41 ])
42 ]
43 data = dict(
44 train=dict(pipeline=train_pipeline),
45 val=dict(pipeline=test_pipeline),
46 test=dict(pipeline=test_pipeline))
47
```
Path: `configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py`
Content:
```
1 _base_ = 'faster-rcnn_r50_fpn_ms-3x_coco.py'
2 model = dict(
3 backbone=dict(
4 norm_cfg=dict(requires_grad=False),
5 norm_eval=True,
6 style='caffe',
7 init_cfg=dict(
8 type='Pretrained',
9 checkpoint='open-mmlab://detectron2/resnet50_caffe')))
10
11 # use caffe img_norm
12 img_norm_cfg = dict(
13 mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
14 train_pipeline = [
15 dict(type='LoadImageFromFile'),
16 dict(type='LoadAnnotations', with_bbox=True),
17 dict(
18 type='Resize',
19 img_scale=[(1333, 640), (1333, 800)],
20 multiscale_mode='range',
21 keep_ratio=True),
22 dict(type='RandomFlip', flip_ratio=0.5),
23 dict(type='Normalize', **img_norm_cfg),
24 dict(type='Pad', size_divisor=32),
25 dict(type='DefaultFormatBundle'),
26 dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
27 ]
28 test_pipeline = [
29 dict(type='LoadImageFromFile'),
30 dict(
31 type='MultiScaleFlipAug',
32 img_scale=(1333, 800),
33 flip=False,
34 transforms=[
35 dict(type='Resize', keep_ratio=True),
36 dict(type='RandomFlip'),
37 dict(type='Normalize', **img_norm_cfg),
38 dict(type='Pad', size_divisor=32),
39 dict(type='ImageToTensor', keys=['img']),
40 dict(type='Collect', keys=['img']),
41 ])
42 ]
43
44 data = dict(
45 train=dict(dataset=dict(pipeline=train_pipeline)),
46 val=dict(pipeline=test_pipeline),
47 test=dict(pipeline=test_pipeline))
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py
--- a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py
+++ b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py
@@ -1,5 +1,11 @@
_base_ = './faster-rcnn_r50_fpn_1x_coco.py'
model = dict(
+ data_preprocessor=dict(
+ type='DetDataPreprocessor',
+ mean=[103.530, 116.280, 123.675],
+ std=[1.0, 1.0, 1.0],
+ bgr_to_rgb=False,
+ pad_size_divisor=32),
backbone=dict(
norm_cfg=dict(requires_grad=False),
norm_eval=True,
diff --git a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py
--- a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py
+++ b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py
@@ -1,5 +1,11 @@
_base_ = 'faster-rcnn_r50_fpn_ms-3x_coco.py'
model = dict(
+ data_preprocessor=dict(
+ type='DetDataPreprocessor',
+ mean=[103.530, 116.280, 123.675],
+ std=[1.0, 1.0, 1.0],
+ bgr_to_rgb=False,
+ pad_size_divisor=32),
backbone=dict(
norm_cfg=dict(requires_grad=False),
norm_eval=True,
| {"golden_diff": "diff --git a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py\n--- a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py\n+++ b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py\n@@ -1,5 +1,11 @@\n _base_ = './faster-rcnn_r50_fpn_1x_coco.py'\n model = dict(\n+ data_preprocessor=dict(\n+ type='DetDataPreprocessor',\n+ mean=[103.530, 116.280, 123.675],\n+ std=[1.0, 1.0, 1.0],\n+ bgr_to_rgb=False,\n+ pad_size_divisor=32),\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\ndiff --git a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py\n--- a/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py\n+++ b/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py\n@@ -1,5 +1,11 @@\n _base_ = 'faster-rcnn_r50_fpn_ms-3x_coco.py'\n model = dict(\n+ data_preprocessor=dict(\n+ type='DetDataPreprocessor',\n+ mean=[103.530, 116.280, 123.675],\n+ std=[1.0, 1.0, 1.0],\n+ bgr_to_rgb=False,\n+ pad_size_divisor=32),\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\n", "issue": "[Bug] faster-rcnn_r50-caffe_fpn_ms-2x and 3x don't work \n### Prerequisite\r\n\r\n- [X] I have searched [Issues](https://github.com/open-mmlab/mmdetection/issues) and [Discussions](https://github.com/open-mmlab/mmdetection/discussions) but cannot get the expected help.\r\n- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.\r\n- [X] The bug has not been fixed in the [latest version (master)](https://github.com/open-mmlab/mmdetection) or [latest version (3.x)](https://github.com/open-mmlab/mmdetection/tree/dev-3.x).\r\n\r\n### Task\r\n\r\nI'm using the official example scripts/configs for the officially supported tasks/models/datasets.\r\n\r\n### Branch\r\n\r\n3.x branch https://github.com/open-mmlab/mmdetection/tree/3.x\r\n\r\n### Environment\r\n\r\nOrderedDict([('sys.platform', 'linux'),\r\n ('Python', '3.7.15 (default, Oct 12 2022, 19:14:55) [GCC 7.5.0]'),\r\n ('CUDA available', True),\r\n ('numpy_random_seed', 2147483648),\r\n ('GPU 0', 'Tesla T4'),\r\n ('CUDA_HOME', '/usr/local/cuda'),\r\n ('NVCC', 'Cuda compilation tools, release 11.2, V11.2.152'),\r\n ('GCC',\r\n 'x86_64-linux-gnu-gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0'),\r\n ('PyTorch', '1.9.0+cu111'),\r\n ('PyTorch compiling details',\r\n 'PyTorch built with:\\n - GCC 7.3\\n - C++ Version: 201402\\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\\n - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)\\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\\n - NNPACK is enabled\\n - CPU capability usage: AVX2\\n - CUDA Runtime 11.1\\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86\\n - CuDNN 8.0.5\\n - Magma 2.5.2\\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, \\n'),\r\n ('TorchVision', '0.10.0+cu111'),\r\n ('OpenCV', '4.6.0'),\r\n ('MMEngine', '0.3.0'),\r\n ('MMDetection', '2.25.3+5b0d5b4')])\r\n\r\n### Reproduces the problem - code sample\r\n\uff08the other models work well\uff09\r\n```python\r\n!mim download mmdet --config faster-rcnn_r50-caffe_fpn_ms-2x_coco --dest ./checkpoints\r\n```\r\nand then we see \r\n`faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth` or `faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_20210526_095054-1f77628b`\r\n\r\n\r\n\r\n### Reproduces the problem - command or script\r\n\r\n``` python\r\nfrom mmdet.apis import init_detector, inference_detector\r\nfrom mmdet.utils import register_all_modules\r\nfrom mmdet.registry import VISUALIZERS\r\nimport mmcv\r\nregister_all_modules()\r\nconfig_file = './checkpoints/faster-rcnn_r50-caffe_fpn_ms-2x_coco.py'\r\ncheckpoint_file = './checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth'\r\n\r\nmodel = init_detector(config_file, checkpoint_file, device='cpu') # or device='cuda:0'\r\nimage = mmcv.imread( \"demo/demo.jpg\", channel_order='rgb')\r\nresult = inference_detector(model, image)\r\n\r\nprint(result)\r\n```\r\n\r\n\r\n### Reproduces the problem - error message\r\n\r\nand i see:\r\n```\r\nlocal loads checkpoint from path: ./checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth\r\n<DetDataSample(\r\n\r\n META INFORMATION\r\n img_path: None\r\n pad_shape: (800, 1216)\r\n scale_factor: (1.8734375, 1.873536299765808)\r\n img_id: 0\r\n ori_shape: (427, 640)\r\n batch_input_shape: (800, 1216)\r\n img_shape: (800, 1199)\r\n\r\n DATA FIELDS\r\n gt_instances: <InstanceData(\r\n \r\n META INFORMATION\r\n \r\n DATA FIELDS\r\n labels: tensor([], dtype=torch.int64)\r\n bboxes: tensor([], size=(0, 4))\r\n ) at 0x7f8e7cabc150>\r\n _ignored_instances: 
<InstanceData(\r\n \r\n META INFORMATION\r\n \r\n DATA FIELDS\r\n labels: tensor([], dtype=torch.int64)\r\n bboxes: tensor([], size=(0, 4))\r\n ) at 0x7f8e7cabce90>\r\n _pred_instances: <InstanceData(\r\n \r\n META INFORMATION\r\n \r\n DATA FIELDS\r\n scores: tensor([])\r\n labels: tensor([], dtype=torch.int64)\r\n bboxes: tensor([], size=(0, 4))\r\n ) at 0x7f8e7f14b190>\r\n _gt_instances: <InstanceData(\r\n \r\n META INFORMATION\r\n \r\n DATA FIELDS\r\n labels: tensor([], dtype=torch.int64)\r\n bboxes: tensor([], size=(0, 4))\r\n ) at 0x7f8e7cabc150>\r\n```\r\n\r\nAll results are empty.\r\n\r\n### Additional information\r\n\r\n_No response_\n", "before_files": [{"content": "_base_ = './faster-rcnn_r50_fpn_1x_coco.py'\nmodel = dict(\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\n style='caffe',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='open-mmlab://detectron2/resnet50_caffe')))\n# use caffe img_norm\nimg_norm_cfg = dict(\n mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),\n (1333, 768), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n train=dict(pipeline=train_pipeline),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n", "path": "configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py"}, {"content": "_base_ = 'faster-rcnn_r50_fpn_ms-3x_coco.py'\nmodel = dict(\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\n style='caffe',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='open-mmlab://detectron2/resnet50_caffe')))\n\n# use caffe img_norm\nimg_norm_cfg = dict(\n mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 800)],\n multiscale_mode='range',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\n\ndata = dict(\n train=dict(dataset=dict(pipeline=train_pipeline)),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n", "path": "configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py"}], "after_files": 
[{"content": "_base_ = './faster-rcnn_r50_fpn_1x_coco.py'\nmodel = dict(\n data_preprocessor=dict(\n type='DetDataPreprocessor',\n mean=[103.530, 116.280, 123.675],\n std=[1.0, 1.0, 1.0],\n bgr_to_rgb=False,\n pad_size_divisor=32),\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\n style='caffe',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='open-mmlab://detectron2/resnet50_caffe')))\n# use caffe img_norm\nimg_norm_cfg = dict(\n mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),\n (1333, 768), (1333, 800)],\n multiscale_mode='value',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\ndata = dict(\n train=dict(pipeline=train_pipeline),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n", "path": "configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py"}, {"content": "_base_ = 'faster-rcnn_r50_fpn_ms-3x_coco.py'\nmodel = dict(\n data_preprocessor=dict(\n type='DetDataPreprocessor',\n mean=[103.530, 116.280, 123.675],\n std=[1.0, 1.0, 1.0],\n bgr_to_rgb=False,\n pad_size_divisor=32),\n backbone=dict(\n norm_cfg=dict(requires_grad=False),\n norm_eval=True,\n style='caffe',\n init_cfg=dict(\n type='Pretrained',\n checkpoint='open-mmlab://detectron2/resnet50_caffe')))\n\n# use caffe img_norm\nimg_norm_cfg = dict(\n mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='LoadAnnotations', with_bbox=True),\n dict(\n type='Resize',\n img_scale=[(1333, 640), (1333, 800)],\n multiscale_mode='range',\n keep_ratio=True),\n dict(type='RandomFlip', flip_ratio=0.5),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='DefaultFormatBundle'),\n dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),\n]\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(\n type='MultiScaleFlipAug',\n img_scale=(1333, 800),\n flip=False,\n transforms=[\n dict(type='Resize', keep_ratio=True),\n dict(type='RandomFlip'),\n dict(type='Normalize', **img_norm_cfg),\n dict(type='Pad', size_divisor=32),\n dict(type='ImageToTensor', keys=['img']),\n dict(type='Collect', keys=['img']),\n ])\n]\n\ndata = dict(\n train=dict(dataset=dict(pipeline=train_pipeline)),\n val=dict(pipeline=test_pipeline),\n test=dict(pipeline=test_pipeline))\n", "path": "configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-3x_coco.py"}]} | 3,408 | 499 |
gh_patches_debug_4150 | rasdani/github-patches | git_diff | certbot__certbot-696 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Command line arguments for plugins are broken
It seems the command line arguments for plugins created in `add_parser_arguments(cls, add):` can't be extracted with `self.conf("argument_name")`. I suspect something with the plugin namespace prefix is going wrong.
My code is here https://github.com/e00E/lets-encrypt-icecast/blob/master/icecast.py .
In line 129 I add an argument via `add("configuration_file")`
and I try to use it in line 150 like so
```
self.config_file = self.conf("configuration_file")
if self.config_file is not None and not os.path.isfile(self.config_file):
raise PluginError('User supplied icecast configuration file does not exist.')
```
When running `letsencrypt install` it crashes with the last log message
```
Traceback (most recent call last):
File "./letsencrypt/venv/bin/letsencrypt", line 9, in <module>
load_entry_point('letsencrypt==0.1', 'console_scripts', 'letsencrypt')()
File "/home/e/icecast/letsencrypt/letsencrypt/cli.py", line 778, in main
return args.func(args, config, plugins)
File "/home/e/icecast/letsencrypt/letsencrypt/cli.py", line 232, in install
installer = display_ops.pick_installer(config, args.installer, plugins)
File "/home/e/icecast/letsencrypt/letsencrypt/display/ops.py", line 105, in pick_installer
config, default, plugins, question, (interfaces.IInstaller,))
File "/home/e/icecast/letsencrypt/letsencrypt/display/ops.py", line 72, in pick_plugin
verified.prepare()
File "/home/e/icecast/letsencrypt/letsencrypt/plugins/disco.py", line 197, in prepare
return [plugin_ep.prepare() for plugin_ep in self._plugins.itervalues()]
File "/home/e/icecast/letsencrypt/letsencrypt/plugins/disco.py", line 98, in prepare
self._initialized.prepare()
File "/home/e/icecast/letsencrypt/icecast.py", line 151, in prepare
self.config_file = self.conf("configuration_file")
File "/home/e/icecast/letsencrypt/letsencrypt/plugins/common.py", line 60, in conf
return getattr(self.config, self.dest(var))
File "/home/e/icecast/letsencrypt/letsencrypt/configuration.py", line 39, in __getattr__
return getattr(self.namespace, name)
AttributeError: 'Namespace' object has no attribute 'letsencrypt-icecast:icecast_installer_configuration_file'
```
I think if an argument is not specified on the command line, the argument should default to None and not throw an AttributeError. But even if I manually have default=None or set default to anything I still get this error. Curiously the attribute error is about `letsencrypt-icecast:icecast_installer_configuration_file` while when running `letsencrypt --help all` the corresponding command line argument is listed as `--letsencrypt-icecast:icecast_installer-configuration_file` (note the dash instead of underscore near the end) which leads me to believe the conversion of the argument name in the plugin to the global name is going wrong.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `letsencrypt/plugins/common.py`
Content:
```
1 """Plugin common functions."""
2 import os
3 import pkg_resources
4 import re
5 import shutil
6 import tempfile
7
8 import zope.interface
9
10 from acme.jose import util as jose_util
11
12 from letsencrypt import constants
13 from letsencrypt import interfaces
14 from letsencrypt import le_util
15
16
17 def option_namespace(name):
18 """ArgumentParser options namespace (prefix of all options)."""
19 return name + "-"
20
21 def dest_namespace(name):
22 """ArgumentParser dest namespace (prefix of all destinations)."""
23 return name + "_"
24
25 private_ips_regex = re.compile( # pylint: disable=invalid-name
26 r"(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|"
27 r"(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)")
28 hostname_regex = re.compile( # pylint: disable=invalid-name
29 r"^(([a-z0-9]|[a-z0-9][a-z0-9\-]*[a-z0-9])\.)*[a-z]+$", re.IGNORECASE)
30
31
32 class Plugin(object):
33 """Generic plugin."""
34 zope.interface.implements(interfaces.IPlugin)
35 # classProvides is not inherited, subclasses must define it on their own
36 #zope.interface.classProvides(interfaces.IPluginFactory)
37
38 def __init__(self, config, name):
39 self.config = config
40 self.name = name
41
42 @property
43 def option_namespace(self):
44 """ArgumentParser options namespace (prefix of all options)."""
45 return option_namespace(self.name)
46
47 @property
48 def dest_namespace(self):
49 """ArgumentParser dest namespace (prefix of all destinations)."""
50 return dest_namespace(self.name)
51
52 def dest(self, var):
53 """Find a destination for given variable ``var``."""
54 # this should do exactly the same what ArgumentParser(arg),
55 # does to "arg" to compute "dest"
56 return self.dest_namespace + var.replace("-", "_")
57
58 def conf(self, var):
59 """Find a configuration value for variable ``var``."""
60 return getattr(self.config, self.dest(var))
61
62 @classmethod
63 def inject_parser_options(cls, parser, name):
64 """Inject parser options.
65
66 See `~.IPlugin.inject_parser_options` for docs.
67
68 """
69 # dummy function, doesn't check if dest.startswith(self.dest_namespace)
70 def add(arg_name_no_prefix, *args, **kwargs):
71 # pylint: disable=missing-docstring
72 return parser.add_argument(
73 "--{0}{1}".format(option_namespace(name), arg_name_no_prefix),
74 *args, **kwargs)
75 return cls.add_parser_arguments(add)
76
77 @jose_util.abstractclassmethod
78 def add_parser_arguments(cls, add):
79 """Add plugin arguments to the CLI argument parser.
80
81 :param callable add: Function that proxies calls to
82 `argparse.ArgumentParser.add_argument` prepending options
83 with unique plugin name prefix.
84
85 """
86
87 # other
88
89 class Addr(object):
90 r"""Represents an virtual host address.
91
92 :param str addr: addr part of vhost address
93 :param str port: port number or \*, or ""
94
95 """
96 def __init__(self, tup):
97 self.tup = tup
98
99 @classmethod
100 def fromstring(cls, str_addr):
101 """Initialize Addr from string."""
102 tup = str_addr.partition(':')
103 return cls((tup[0], tup[2]))
104
105 def __str__(self):
106 if self.tup[1]:
107 return "%s:%s" % self.tup
108 return self.tup[0]
109
110 def __eq__(self, other):
111 if isinstance(other, self.__class__):
112 return self.tup == other.tup
113 return False
114
115 def __hash__(self):
116 return hash(self.tup)
117
118 def get_addr(self):
119 """Return addr part of Addr object."""
120 return self.tup[0]
121
122 def get_port(self):
123 """Return port."""
124 return self.tup[1]
125
126 def get_addr_obj(self, port):
127 """Return new address object with same addr and new port."""
128 return self.__class__((self.tup[0], port))
129
130
131 class Dvsni(object):
132 """Class that perform DVSNI challenges."""
133
134 def __init__(self, configurator):
135 self.configurator = configurator
136 self.achalls = []
137 self.indices = []
138 self.challenge_conf = os.path.join(
139 configurator.config.config_dir, "le_dvsni_cert_challenge.conf")
140 # self.completed = 0
141
142 def add_chall(self, achall, idx=None):
143 """Add challenge to DVSNI object to perform at once.
144
145 :param achall: Annotated DVSNI challenge.
146 :type achall: :class:`letsencrypt.achallenges.DVSNI`
147
148 :param int idx: index to challenge in a larger array
149
150 """
151 self.achalls.append(achall)
152 if idx is not None:
153 self.indices.append(idx)
154
155 def get_cert_path(self, achall):
156 """Returns standardized name for challenge certificate.
157
158 :param achall: Annotated DVSNI challenge.
159 :type achall: :class:`letsencrypt.achallenges.DVSNI`
160
161 :returns: certificate file name
162 :rtype: str
163
164 """
165 return os.path.join(self.configurator.config.work_dir,
166 achall.chall.encode("token") + ".crt")
167
168 def get_key_path(self, achall):
169 """Get standardized path to challenge key."""
170 return os.path.join(self.configurator.config.work_dir,
171 achall.chall.encode("token") + '.pem')
172
173 def _setup_challenge_cert(self, achall, s=None):
174 # pylint: disable=invalid-name
175 """Generate and write out challenge certificate."""
176 cert_path = self.get_cert_path(achall)
177 key_path = self.get_key_path(achall)
178 # Register the path before you write out the file
179 self.configurator.reverter.register_file_creation(True, key_path)
180 self.configurator.reverter.register_file_creation(True, cert_path)
181
182 response, cert_pem, key_pem = achall.gen_cert_and_response(s)
183
184 # Write out challenge cert and key
185 with open(cert_path, "wb") as cert_chall_fd:
186 cert_chall_fd.write(cert_pem)
187 with le_util.safe_open(key_path, 'wb', chmod=0o400) as key_file:
188 key_file.write(key_pem)
189
190 return response
191
192
193 # test utils used by letsencrypt_apache/letsencrypt_nginx (hence
194 # "pragma: no cover") TODO: this might quickly lead to dead code (also
195 # c.f. #383)
196
197 def setup_ssl_options(config_dir, src, dest): # pragma: no cover
198 """Move the ssl_options into position and return the path."""
199 option_path = os.path.join(config_dir, dest)
200 shutil.copyfile(src, option_path)
201 return option_path
202
203
204 def dir_setup(test_dir, pkg): # pragma: no cover
205 """Setup the directories necessary for the configurator."""
206 temp_dir = tempfile.mkdtemp("temp")
207 config_dir = tempfile.mkdtemp("config")
208 work_dir = tempfile.mkdtemp("work")
209
210 os.chmod(temp_dir, constants.CONFIG_DIRS_MODE)
211 os.chmod(config_dir, constants.CONFIG_DIRS_MODE)
212 os.chmod(work_dir, constants.CONFIG_DIRS_MODE)
213
214 test_configs = pkg_resources.resource_filename(
215 pkg, os.path.join("testdata", test_dir))
216
217 shutil.copytree(
218 test_configs, os.path.join(temp_dir, test_dir), symlinks=True)
219
220 return temp_dir, config_dir, work_dir
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/letsencrypt/plugins/common.py b/letsencrypt/plugins/common.py
--- a/letsencrypt/plugins/common.py
+++ b/letsencrypt/plugins/common.py
@@ -20,7 +20,7 @@
def dest_namespace(name):
"""ArgumentParser dest namespace (prefix of all destinations)."""
- return name + "_"
+ return name.replace("-", "_") + "_"
private_ips_regex = re.compile( # pylint: disable=invalid-name
r"(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|"
| {"golden_diff": "diff --git a/letsencrypt/plugins/common.py b/letsencrypt/plugins/common.py\n--- a/letsencrypt/plugins/common.py\n+++ b/letsencrypt/plugins/common.py\n@@ -20,7 +20,7 @@\n \n def dest_namespace(name):\n \"\"\"ArgumentParser dest namespace (prefix of all destinations).\"\"\"\n- return name + \"_\"\n+ return name.replace(\"-\", \"_\") + \"_\"\n \n private_ips_regex = re.compile( # pylint: disable=invalid-name\n r\"(^127\\.0\\.0\\.1)|(^10\\.)|(^172\\.1[6-9]\\.)|\"\n", "issue": "Command line arguments for plugins are broken\nIt seems the command line arguments for plugins created in `add_parser_arguments(cls, add):` cant be extracted with `self.conf(\"argument_name\")`. I suspect something with the plugin namespace prefix is going wrong.\n\nMy code is here https://github.com/e00E/lets-encrypt-icecast/blob/master/icecast.py .\nIn line 129 I add an argument via `add(\"configuration_file\")`\nand I try to use it in line 150 like so\n\n```\nself.config_file = self.conf(\"configuration_file\")\nif self.config_file is not None and not os.path.isfile(self.config_file):\n raise PluginError('User supplied icecast configuration file does not exist.')\n```\n\nWhen running `letsencrypt install` it crashes with the last log message\n\n```\nTraceback (most recent call last):\n File \"./letsencrypt/venv/bin/letsencrypt\", line 9, in <module>\n load_entry_point('letsencrypt==0.1', 'console_scripts', 'letsencrypt')()\n File \"/home/e/icecast/letsencrypt/letsencrypt/cli.py\", line 778, in main\n return args.func(args, config, plugins)\n File \"/home/e/icecast/letsencrypt/letsencrypt/cli.py\", line 232, in install\n installer = display_ops.pick_installer(config, args.installer, plugins)\n File \"/home/e/icecast/letsencrypt/letsencrypt/display/ops.py\", line 105, in pick_installer\n config, default, plugins, question, (interfaces.IInstaller,))\n File \"/home/e/icecast/letsencrypt/letsencrypt/display/ops.py\", line 72, in pick_plugin\n verified.prepare()\n File \"/home/e/icecast/letsencrypt/letsencrypt/plugins/disco.py\", line 197, in prepare\n return [plugin_ep.prepare() for plugin_ep in self._plugins.itervalues()]\n File \"/home/e/icecast/letsencrypt/letsencrypt/plugins/disco.py\", line 98, in prepare\n self._initialized.prepare()\n File \"/home/e/icecast/letsencrypt/icecast.py\", line 151, in prepare\n self.config_file = self.conf(\"configuration_file\")\n File \"/home/e/icecast/letsencrypt/letsencrypt/plugins/common.py\", line 60, in conf\n return getattr(self.config, self.dest(var))\n File \"/home/e/icecast/letsencrypt/letsencrypt/configuration.py\", line 39, in __getattr__\n return getattr(self.namespace, name)\nAttributeError: 'Namespace' object has no attribute 'letsencrypt-icecast:icecast_installer_configuration_file'\n```\n\nI think if an argument is not specified on the command line, the argument should default to None and not throw an AttributeError. But even if I manually have default=None or set default to anything I still get this error. 
Curiously the attribute error is about `letsencrypt-icecast:icecast_installer_configuration_file` while when running `letsencrypt --help all` the corresponding command line argument is listed as `--letsencrypt-icecast:icecast_installer-configuration_file` (note the the dash instead of underscore near the end) which leads me to believe the conversion of the argument name in the plugin to the global name is going wrong.\n\n", "before_files": [{"content": "\"\"\"Plugin common functions.\"\"\"\nimport os\nimport pkg_resources\nimport re\nimport shutil\nimport tempfile\n\nimport zope.interface\n\nfrom acme.jose import util as jose_util\n\nfrom letsencrypt import constants\nfrom letsencrypt import interfaces\nfrom letsencrypt import le_util\n\n\ndef option_namespace(name):\n \"\"\"ArgumentParser options namespace (prefix of all options).\"\"\"\n return name + \"-\"\n\ndef dest_namespace(name):\n \"\"\"ArgumentParser dest namespace (prefix of all destinations).\"\"\"\n return name + \"_\"\n\nprivate_ips_regex = re.compile( # pylint: disable=invalid-name\n r\"(^127\\.0\\.0\\.1)|(^10\\.)|(^172\\.1[6-9]\\.)|\"\n r\"(^172\\.2[0-9]\\.)|(^172\\.3[0-1]\\.)|(^192\\.168\\.)\")\nhostname_regex = re.compile( # pylint: disable=invalid-name\n r\"^(([a-z0-9]|[a-z0-9][a-z0-9\\-]*[a-z0-9])\\.)*[a-z]+$\", re.IGNORECASE)\n\n\nclass Plugin(object):\n \"\"\"Generic plugin.\"\"\"\n zope.interface.implements(interfaces.IPlugin)\n # classProvides is not inherited, subclasses must define it on their own\n #zope.interface.classProvides(interfaces.IPluginFactory)\n\n def __init__(self, config, name):\n self.config = config\n self.name = name\n\n @property\n def option_namespace(self):\n \"\"\"ArgumentParser options namespace (prefix of all options).\"\"\"\n return option_namespace(self.name)\n\n @property\n def dest_namespace(self):\n \"\"\"ArgumentParser dest namespace (prefix of all destinations).\"\"\"\n return dest_namespace(self.name)\n\n def dest(self, var):\n \"\"\"Find a destination for given variable ``var``.\"\"\"\n # this should do exactly the same what ArgumentParser(arg),\n # does to \"arg\" to compute \"dest\"\n return self.dest_namespace + var.replace(\"-\", \"_\")\n\n def conf(self, var):\n \"\"\"Find a configuration value for variable ``var``.\"\"\"\n return getattr(self.config, self.dest(var))\n\n @classmethod\n def inject_parser_options(cls, parser, name):\n \"\"\"Inject parser options.\n\n See `~.IPlugin.inject_parser_options` for docs.\n\n \"\"\"\n # dummy function, doesn't check if dest.startswith(self.dest_namespace)\n def add(arg_name_no_prefix, *args, **kwargs):\n # pylint: disable=missing-docstring\n return parser.add_argument(\n \"--{0}{1}\".format(option_namespace(name), arg_name_no_prefix),\n *args, **kwargs)\n return cls.add_parser_arguments(add)\n\n @jose_util.abstractclassmethod\n def add_parser_arguments(cls, add):\n \"\"\"Add plugin arguments to the CLI argument parser.\n\n :param callable add: Function that proxies calls to\n `argparse.ArgumentParser.add_argument` prepending options\n with unique plugin name prefix.\n\n \"\"\"\n\n# other\n\nclass Addr(object):\n r\"\"\"Represents an virtual host address.\n\n :param str addr: addr part of vhost address\n :param str port: port number or \\*, or \"\"\n\n \"\"\"\n def __init__(self, tup):\n self.tup = tup\n\n @classmethod\n def fromstring(cls, str_addr):\n \"\"\"Initialize Addr from string.\"\"\"\n tup = str_addr.partition(':')\n return cls((tup[0], tup[2]))\n\n def __str__(self):\n if self.tup[1]:\n return \"%s:%s\" % self.tup\n return 
self.tup[0]\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.tup == other.tup\n return False\n\n def __hash__(self):\n return hash(self.tup)\n\n def get_addr(self):\n \"\"\"Return addr part of Addr object.\"\"\"\n return self.tup[0]\n\n def get_port(self):\n \"\"\"Return port.\"\"\"\n return self.tup[1]\n\n def get_addr_obj(self, port):\n \"\"\"Return new address object with same addr and new port.\"\"\"\n return self.__class__((self.tup[0], port))\n\n\nclass Dvsni(object):\n \"\"\"Class that perform DVSNI challenges.\"\"\"\n\n def __init__(self, configurator):\n self.configurator = configurator\n self.achalls = []\n self.indices = []\n self.challenge_conf = os.path.join(\n configurator.config.config_dir, \"le_dvsni_cert_challenge.conf\")\n # self.completed = 0\n\n def add_chall(self, achall, idx=None):\n \"\"\"Add challenge to DVSNI object to perform at once.\n\n :param achall: Annotated DVSNI challenge.\n :type achall: :class:`letsencrypt.achallenges.DVSNI`\n\n :param int idx: index to challenge in a larger array\n\n \"\"\"\n self.achalls.append(achall)\n if idx is not None:\n self.indices.append(idx)\n\n def get_cert_path(self, achall):\n \"\"\"Returns standardized name for challenge certificate.\n\n :param achall: Annotated DVSNI challenge.\n :type achall: :class:`letsencrypt.achallenges.DVSNI`\n\n :returns: certificate file name\n :rtype: str\n\n \"\"\"\n return os.path.join(self.configurator.config.work_dir,\n achall.chall.encode(\"token\") + \".crt\")\n\n def get_key_path(self, achall):\n \"\"\"Get standardized path to challenge key.\"\"\"\n return os.path.join(self.configurator.config.work_dir,\n achall.chall.encode(\"token\") + '.pem')\n\n def _setup_challenge_cert(self, achall, s=None):\n # pylint: disable=invalid-name\n \"\"\"Generate and write out challenge certificate.\"\"\"\n cert_path = self.get_cert_path(achall)\n key_path = self.get_key_path(achall)\n # Register the path before you write out the file\n self.configurator.reverter.register_file_creation(True, key_path)\n self.configurator.reverter.register_file_creation(True, cert_path)\n\n response, cert_pem, key_pem = achall.gen_cert_and_response(s)\n\n # Write out challenge cert and key\n with open(cert_path, \"wb\") as cert_chall_fd:\n cert_chall_fd.write(cert_pem)\n with le_util.safe_open(key_path, 'wb', chmod=0o400) as key_file:\n key_file.write(key_pem)\n\n return response\n\n\n# test utils used by letsencrypt_apache/letsencrypt_nginx (hence\n# \"pragma: no cover\") TODO: this might quickly lead to dead code (also\n# c.f. 
#383)\n\ndef setup_ssl_options(config_dir, src, dest): # pragma: no cover\n \"\"\"Move the ssl_options into position and return the path.\"\"\"\n option_path = os.path.join(config_dir, dest)\n shutil.copyfile(src, option_path)\n return option_path\n\n\ndef dir_setup(test_dir, pkg): # pragma: no cover\n \"\"\"Setup the directories necessary for the configurator.\"\"\"\n temp_dir = tempfile.mkdtemp(\"temp\")\n config_dir = tempfile.mkdtemp(\"config\")\n work_dir = tempfile.mkdtemp(\"work\")\n\n os.chmod(temp_dir, constants.CONFIG_DIRS_MODE)\n os.chmod(config_dir, constants.CONFIG_DIRS_MODE)\n os.chmod(work_dir, constants.CONFIG_DIRS_MODE)\n\n test_configs = pkg_resources.resource_filename(\n pkg, os.path.join(\"testdata\", test_dir))\n\n shutil.copytree(\n test_configs, os.path.join(temp_dir, test_dir), symlinks=True)\n\n return temp_dir, config_dir, work_dir\n", "path": "letsencrypt/plugins/common.py"}], "after_files": [{"content": "\"\"\"Plugin common functions.\"\"\"\nimport os\nimport pkg_resources\nimport re\nimport shutil\nimport tempfile\n\nimport zope.interface\n\nfrom acme.jose import util as jose_util\n\nfrom letsencrypt import constants\nfrom letsencrypt import interfaces\nfrom letsencrypt import le_util\n\n\ndef option_namespace(name):\n \"\"\"ArgumentParser options namespace (prefix of all options).\"\"\"\n return name + \"-\"\n\ndef dest_namespace(name):\n \"\"\"ArgumentParser dest namespace (prefix of all destinations).\"\"\"\n return name.replace(\"-\", \"_\") + \"_\"\n\nprivate_ips_regex = re.compile( # pylint: disable=invalid-name\n r\"(^127\\.0\\.0\\.1)|(^10\\.)|(^172\\.1[6-9]\\.)|\"\n r\"(^172\\.2[0-9]\\.)|(^172\\.3[0-1]\\.)|(^192\\.168\\.)\")\nhostname_regex = re.compile( # pylint: disable=invalid-name\n r\"^(([a-z0-9]|[a-z0-9][a-z0-9\\-]*[a-z0-9])\\.)*[a-z]+$\", re.IGNORECASE)\n\n\nclass Plugin(object):\n \"\"\"Generic plugin.\"\"\"\n zope.interface.implements(interfaces.IPlugin)\n # classProvides is not inherited, subclasses must define it on their own\n #zope.interface.classProvides(interfaces.IPluginFactory)\n\n def __init__(self, config, name):\n self.config = config\n self.name = name\n\n @property\n def option_namespace(self):\n \"\"\"ArgumentParser options namespace (prefix of all options).\"\"\"\n return option_namespace(self.name)\n\n @property\n def dest_namespace(self):\n \"\"\"ArgumentParser dest namespace (prefix of all destinations).\"\"\"\n return dest_namespace(self.name)\n\n def dest(self, var):\n \"\"\"Find a destination for given variable ``var``.\"\"\"\n # this should do exactly the same what ArgumentParser(arg),\n # does to \"arg\" to compute \"dest\"\n return self.dest_namespace + var.replace(\"-\", \"_\")\n\n def conf(self, var):\n \"\"\"Find a configuration value for variable ``var``.\"\"\"\n return getattr(self.config, self.dest(var))\n\n @classmethod\n def inject_parser_options(cls, parser, name):\n \"\"\"Inject parser options.\n\n See `~.IPlugin.inject_parser_options` for docs.\n\n \"\"\"\n # dummy function, doesn't check if dest.startswith(self.dest_namespace)\n def add(arg_name_no_prefix, *args, **kwargs):\n # pylint: disable=missing-docstring\n return parser.add_argument(\n \"--{0}{1}\".format(option_namespace(name), arg_name_no_prefix),\n *args, **kwargs)\n return cls.add_parser_arguments(add)\n\n @jose_util.abstractclassmethod\n def add_parser_arguments(cls, add):\n \"\"\"Add plugin arguments to the CLI argument parser.\n\n :param callable add: Function that proxies calls to\n `argparse.ArgumentParser.add_argument` prepending options\n 
with unique plugin name prefix.\n\n \"\"\"\n\n# other\n\nclass Addr(object):\n r\"\"\"Represents an virtual host address.\n\n :param str addr: addr part of vhost address\n :param str port: port number or \\*, or \"\"\n\n \"\"\"\n def __init__(self, tup):\n self.tup = tup\n\n @classmethod\n def fromstring(cls, str_addr):\n \"\"\"Initialize Addr from string.\"\"\"\n tup = str_addr.partition(':')\n return cls((tup[0], tup[2]))\n\n def __str__(self):\n if self.tup[1]:\n return \"%s:%s\" % self.tup\n return self.tup[0]\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self.tup == other.tup\n return False\n\n def __hash__(self):\n return hash(self.tup)\n\n def get_addr(self):\n \"\"\"Return addr part of Addr object.\"\"\"\n return self.tup[0]\n\n def get_port(self):\n \"\"\"Return port.\"\"\"\n return self.tup[1]\n\n def get_addr_obj(self, port):\n \"\"\"Return new address object with same addr and new port.\"\"\"\n return self.__class__((self.tup[0], port))\n\n\nclass Dvsni(object):\n \"\"\"Class that perform DVSNI challenges.\"\"\"\n\n def __init__(self, configurator):\n self.configurator = configurator\n self.achalls = []\n self.indices = []\n self.challenge_conf = os.path.join(\n configurator.config.config_dir, \"le_dvsni_cert_challenge.conf\")\n # self.completed = 0\n\n def add_chall(self, achall, idx=None):\n \"\"\"Add challenge to DVSNI object to perform at once.\n\n :param achall: Annotated DVSNI challenge.\n :type achall: :class:`letsencrypt.achallenges.DVSNI`\n\n :param int idx: index to challenge in a larger array\n\n \"\"\"\n self.achalls.append(achall)\n if idx is not None:\n self.indices.append(idx)\n\n def get_cert_path(self, achall):\n \"\"\"Returns standardized name for challenge certificate.\n\n :param achall: Annotated DVSNI challenge.\n :type achall: :class:`letsencrypt.achallenges.DVSNI`\n\n :returns: certificate file name\n :rtype: str\n\n \"\"\"\n return os.path.join(self.configurator.config.work_dir,\n achall.chall.encode(\"token\") + \".crt\")\n\n def get_key_path(self, achall):\n \"\"\"Get standardized path to challenge key.\"\"\"\n return os.path.join(self.configurator.config.work_dir,\n achall.chall.encode(\"token\") + '.pem')\n\n def _setup_challenge_cert(self, achall, s=None):\n # pylint: disable=invalid-name\n \"\"\"Generate and write out challenge certificate.\"\"\"\n cert_path = self.get_cert_path(achall)\n key_path = self.get_key_path(achall)\n # Register the path before you write out the file\n self.configurator.reverter.register_file_creation(True, key_path)\n self.configurator.reverter.register_file_creation(True, cert_path)\n\n response, cert_pem, key_pem = achall.gen_cert_and_response(s)\n\n # Write out challenge cert and key\n with open(cert_path, \"wb\") as cert_chall_fd:\n cert_chall_fd.write(cert_pem)\n with le_util.safe_open(key_path, 'wb', chmod=0o400) as key_file:\n key_file.write(key_pem)\n\n return response\n\n\n# test utils used by letsencrypt_apache/letsencrypt_nginx (hence\n# \"pragma: no cover\") TODO: this might quickly lead to dead code (also\n# c.f. 
#383)\n\ndef setup_ssl_options(config_dir, src, dest): # pragma: no cover\n \"\"\"Move the ssl_options into position and return the path.\"\"\"\n option_path = os.path.join(config_dir, dest)\n shutil.copyfile(src, option_path)\n return option_path\n\n\ndef dir_setup(test_dir, pkg): # pragma: no cover\n \"\"\"Setup the directories necessary for the configurator.\"\"\"\n temp_dir = tempfile.mkdtemp(\"temp\")\n config_dir = tempfile.mkdtemp(\"config\")\n work_dir = tempfile.mkdtemp(\"work\")\n\n os.chmod(temp_dir, constants.CONFIG_DIRS_MODE)\n os.chmod(config_dir, constants.CONFIG_DIRS_MODE)\n os.chmod(work_dir, constants.CONFIG_DIRS_MODE)\n\n test_configs = pkg_resources.resource_filename(\n pkg, os.path.join(\"testdata\", test_dir))\n\n shutil.copytree(\n test_configs, os.path.join(temp_dir, test_dir), symlinks=True)\n\n return temp_dir, config_dir, work_dir\n", "path": "letsencrypt/plugins/common.py"}]} | 3,244 | 136 |
gh_patches_debug_6260 | rasdani/github-patches | git_diff | localstack__localstack-3683 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EventBridge PutEvents not working when Detail is missing
# Type of request: This is a ...
[X ] bug report
[ ] feature request
# Detailed description
Calling PutEvents operation returns 500 if no `Detail` is specified. This case is similar to #3043
## Expected behavior
According to the doc https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEventsRequestEntry.html `Detail` is optional.
```
Detail
A valid JSON string. There is no other schema imposed. The JSON string may contain fields and nested subobjects.
Type: String
Required: No
```
So this should work and set `Detail` to `{}`
```
eb_client = boto3.client('events', endpoint_url='http://localhost:4587')
eb_client.put_events(
Entries=[
{
'DetailType': 'Test'
}
]
)
```
## Actual behavior
500 is returned
# Steps to reproduce
1. Start LocalStack with SERVICES=events
2. Run the client code
## Command used to start LocalStack
```
docker run --rm --name localstack -p 4587:4587 -e SERVICES=events -e DEBUG=true localstack/localstack
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
CLI:
```
aws events put-events --endpoint-url http://localhost:4587 --entries '[{"DetailType": "Test"}]'
```
Or via python:
```
eb_client = boto3.client('events', endpoint_url='http://localhost:4587')
eb_client.put_events(
Entries=[
{
'DetailType': 'Test',
'Detail': '{}'
}
]
)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/services/events/events_starter.py`
Content:
```
1 import re
2 import datetime
3 import json
4 import uuid
5 import logging
6 import ipaddress
7 from moto.events.models import Rule as rule_model
8 from moto.events.responses import EventsHandler as events_handler
9 from localstack import config
10 from localstack.constants import APPLICATION_AMZ_JSON_1_1, TEST_AWS_ACCOUNT_ID
11 from localstack.utils.aws import aws_stack
12 from localstack.utils.common import short_uid, extract_jsonpath
13 from localstack.services.infra import start_moto_server
14 from localstack.services.events.scheduler import JobScheduler
15 from localstack.services.events.events_listener import _create_and_register_temp_dir, _dump_events_to_files
16
17
18 LOG = logging.getLogger(__name__)
19
20 DEFAULT_EVENT_BUS_NAME = 'default'
21
22 # Event rules storage
23 EVENT_RULES = {
24 DEFAULT_EVENT_BUS_NAME: set()
25 }
26
27 CONTENT_BASE_FILTER_KEYWORDS = [
28 'prefix', 'anything-but', 'numeric', 'cidr', 'exists'
29 ]
30
31
32 def filter_event_with_target_input_path(target, event):
33 input_path = target.get('InputPath')
34 if input_path:
35 event = extract_jsonpath(event, input_path)
36 return event
37
38
39 def filter_event_based_on_event_format(self, rule, event):
40 def filter_event(event_pattern, event):
41 for key, value in event_pattern.items():
42 event_value = event.get(key.lower())
43 if not event_value:
44 return False
45
46 if isinstance(value, list) and not identify_content_base_parameter_in_pattern(value):
47 if isinstance(event_value, list) and \
48 get_two_lists_intersection(value, event_value) == []:
49 return False
50 elif not isinstance(event_value, list) and \
51 isinstance(event_value, (str, int)) and \
52 event_value not in value:
53 return False
54
55 elif isinstance(value, list) and identify_content_base_parameter_in_pattern(value):
56 if not filter_event_with_content_base_parameter(value, event_value):
57 return False
58
59 elif isinstance(value, (str, int)):
60 try:
61 if isinstance(json.loads(value), dict) and \
62 not filter_event(json.loads(value), event_value):
63 return False
64 except json.decoder.JSONDecodeError:
65 return False
66 return True
67
68 rule_information = self.events_backend.describe_rule(rule)
69 if rule_information.event_pattern:
70 event_pattern = json.loads(rule_information.event_pattern)
71 if not filter_event(event_pattern, event):
72 return False
73 return True
74
75
76 def process_events(event, targets):
77 for target in targets:
78 arn = target['Arn']
79 changed_event = filter_event_with_target_input_path(target, event)
80 aws_stack.send_event_to_target(arn, changed_event)
81
82
83 def apply_patches():
84 # Fix events arn
85 def rule_model_generate_arn(self, name):
86 return 'arn:aws:events:{region_name}:{account_id}:rule/{name}'.format(
87 region_name=self.region_name, account_id=TEST_AWS_ACCOUNT_ID, name=name
88 )
89
90 events_handler_put_rule_orig = events_handler.put_rule
91
92 def events_handler_put_rule(self):
93 name = self._get_param('Name')
94 event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME
95
96 if event_bus not in EVENT_RULES:
97 EVENT_RULES[event_bus] = set()
98
99 EVENT_RULES[event_bus].add(name)
100
101 return events_handler_put_rule_orig(self)
102
103 events_handler_delete_rule_orig = events_handler.delete_rule
104
105 def events_handler_delete_rule(self):
106 name = self._get_param('Name')
107 event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME
108
109 rules_set = EVENT_RULES.get(event_bus, set())
110 if name not in rules_set:
111 return self.error('ValidationException', 'Rule "%s" not found for event bus "%s"' % (name, event_bus))
112 rules_set.remove(name)
113
114 return events_handler_delete_rule_orig(self)
115
116 # 2101 Events put-targets does not respond
117 def events_handler_put_targets(self):
118 rule_name = self._get_param('Rule')
119 targets = self._get_param('Targets')
120
121 if not rule_name:
122 return self.error('ValidationException', 'Parameter Rule is required.')
123
124 if not targets:
125 return self.error('ValidationException', 'Parameter Targets is required.')
126
127 if not self.events_backend.put_targets(rule_name, targets):
128 return self.error(
129 'ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.'
130 )
131
132 return json.dumps({'FailedEntryCount': 0, 'FailedEntries': []}), self.response_headers
133
134 def events_handler_put_events(self):
135 entries = self._get_param('Entries')
136 events = list(
137 map(lambda event: {'event': event, 'uuid': str(uuid.uuid4())}, entries)
138 )
139
140 _create_and_register_temp_dir()
141 _dump_events_to_files(events)
142
143 for event_envelope in events:
144 event = event_envelope['event']
145 event_bus = event.get('EventBusName') or DEFAULT_EVENT_BUS_NAME
146
147 rules = EVENT_RULES.get(event_bus, [])
148
149 formatted_event = {
150 'version': '0',
151 'id': event_envelope['uuid'],
152 'detail-type': event.get('DetailType'),
153 'source': event.get('Source'),
154 'account': TEST_AWS_ACCOUNT_ID,
155 'time': datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),
156 'region': self.region,
157 'resources': event.get('Resources', []),
158 'detail': json.loads(event.get('Detail')),
159 }
160
161 targets = []
162 for rule in rules:
163 if filter_event_based_on_event_format(self, rule, formatted_event):
164 targets.extend(self.events_backend.list_targets_by_rule(rule)['Targets'])
165
166 # process event
167 process_events(formatted_event, targets)
168
169 content = {
170 'Entries': list(map(lambda event: {'EventId': event['uuid']}, events))
171 }
172
173 self.response_headers.update({
174 'Content-Type': APPLICATION_AMZ_JSON_1_1,
175 'x-amzn-RequestId': short_uid()
176 })
177
178 return json.dumps(content), self.response_headers
179
180 rule_model._generate_arn = rule_model_generate_arn
181 events_handler.put_rule = events_handler_put_rule
182 events_handler.delete_rule = events_handler_delete_rule
183 events_handler.put_targets = events_handler_put_targets
184 events_handler.put_events = events_handler_put_events
185
186
187 def start_scheduler():
188 JobScheduler.start()
189
190
191 def start_events(port=None, asynchronous=None, update_listener=None):
192 port = port or config.PORT_EVENTS
193
194 apply_patches()
195 start_scheduler()
196
197 return start_moto_server(
198 key='events',
199 port=port,
200 name='Cloudwatch Events',
201 asynchronous=asynchronous,
202 update_listener=update_listener
203 )
204
205
206 # ---------------
207 # HELPER METHODS
208 # ---------------
209
210
211 def get_two_lists_intersection(lst1, lst2):
212 lst3 = [value for value in lst1 if value in lst2]
213 return lst3
214
215
216 def identify_content_base_parameter_in_pattern(parameters):
217 if any([list(param.keys())[0] in CONTENT_BASE_FILTER_KEYWORDS for param in parameters if isinstance(param, dict)]):
218 return True
219
220
221 def filter_event_with_content_base_parameter(pattern_value, event_value):
222 for element in pattern_value:
223 if (isinstance(element, (str, int))) \
224 and (event_value == element or element in event_value):
225 return True
226 elif isinstance(element, dict):
227 element_key = list(element.keys())[0]
228 element_value = element.get(element_key)
229 if element_key.lower() == 'prefix':
230 if re.match(r'^{}'.format(element_value), event_value):
231 return True
232 elif element_key.lower() == 'exists':
233 if element_value and event_value:
234 return True
235 elif not element_value and not event_value:
236 return True
237 elif element_key.lower() == 'cidr':
238 ips = [str(ip) for ip in ipaddress.IPv4Network(element_value)]
239 if event_value in ips:
240 return True
241 elif element_key.lower() == 'numeric':
242 if check_valid_numeric_content_base_rule(element_value):
243 for index in range(len(element_value)):
244 if isinstance(element_value[index], int):
245 continue
246 if element_value[index] == '>' and \
247 isinstance(element_value[index + 1], int) and \
248 event_value <= element_value[index + 1]:
249 break
250 elif element_value[index] == '>=' and \
251 isinstance(element_value[index + 1], int) and \
252 event_value < element_value[index + 1]:
253 break
254 elif element_value[index] == '<' and \
255 isinstance(element_value[index + 1], int) and \
256 event_value >= element_value[index + 1]:
257 break
258 elif element_value[index] == '<=' and \
259 isinstance(element_value[index + 1], int) and \
260 event_value > element_value[index + 1]:
261 break
262 else:
263 return True
264
265 elif element_key.lower() == 'anything-but':
266 if isinstance(element_value, list) and \
267 event_value not in element_value:
268 return True
269 elif (isinstance(element_value, (str, int))) and \
270 event_value != element_value:
271 return True
272 elif isinstance(element_value, dict):
273 nested_key = list(element_value)[0]
274 if nested_key == 'prefix' and \
275 not re.match(r'^{}'.format(element_value.get(nested_key)), event_value):
276 return True
277 return False
278
279
280 def check_valid_numeric_content_base_rule(list_of_operators):
281 if len(list_of_operators) > 4:
282 return False
283
284 if '=' in list_of_operators:
285 return False
286
287 if len(list_of_operators) > 2:
288 upper_limit = None
289 lower_limit = None
290 for index in range(len(list_of_operators)):
291 if not isinstance(list_of_operators[index], int) and \
292 '<' in list_of_operators[index]:
293 upper_limit = list_of_operators[index + 1]
294 if not isinstance(list_of_operators[index], int) and \
295 '>' in list_of_operators[index]:
296 lower_limit = list_of_operators[index + 1]
297 if upper_limit and lower_limit and upper_limit < lower_limit:
298 return False
299 index = index + 1
300 return True
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/localstack/services/events/events_starter.py b/localstack/services/events/events_starter.py
--- a/localstack/services/events/events_starter.py
+++ b/localstack/services/events/events_starter.py
@@ -155,7 +155,7 @@
'time': datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),
'region': self.region,
'resources': event.get('Resources', []),
- 'detail': json.loads(event.get('Detail')),
+ 'detail': json.loads(event.get('Detail', '{}')),
}
targets = []
| {"golden_diff": "diff --git a/localstack/services/events/events_starter.py b/localstack/services/events/events_starter.py\n--- a/localstack/services/events/events_starter.py\n+++ b/localstack/services/events/events_starter.py\n@@ -155,7 +155,7 @@\n 'time': datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),\n 'region': self.region,\n 'resources': event.get('Resources', []),\n- 'detail': json.loads(event.get('Detail')),\n+ 'detail': json.loads(event.get('Detail', '{}')),\n }\n \n targets = []\n", "issue": "EventBridge PutEvents not working when Detail is missing\n# Type of request: This is a ...\r\n\r\n[X ] bug report\r\n[ ] feature request\r\n\r\n# Detailed description\r\nCalling PutEvents operation returns 500 if no `Detail` is specified. This case is similar to #3043\r\n\r\n## Expected behavior\r\nAccording to the doc https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEventsRequestEntry.html `Details` is optional. \r\n\r\n```\r\n Detail\r\n A valid JSON string. There is no other schema imposed. The JSON string may contain fields and nested subobjects.\r\n Type: String\r\n Required: No\r\n```\r\n\r\nSo this should works and set `Detail` as `{}`\r\n\r\n```\r\neb_client = boto3.client('events', endpoint_url='http://localhost:4587')\r\neb_client.put_events(\r\n Entries=[\r\n {\r\n 'DetailType': 'Test'\r\n }\r\n ]\r\n)\r\n```\r\n\r\n## Actual behavior\r\n500 is returned\r\n\r\n# Steps to reproduce\r\n\r\n1. Start LocalStack with SERVICES=events\r\n2. Run the client code\r\n\r\n## Command used to start LocalStack\r\n\r\n```\r\ndocker run --rm --name localstack -p 4587:4587 -e SERVICES=events -e DEBUG=true localstack/localstack\r\n```\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nCLI:\r\n\r\n```\r\naws events put-events --endpoint-url http://localhost:4587 --entries '[{\"DetailType\": \"Test\"}]'\r\n```\r\n\r\nOr via python:\r\n\r\n```\r\neb_client = boto3.client('events', endpoint_url='http://localhost:4587')\r\neb_client.put_events(\r\n Entries=[\r\n {\r\n 'DetailType': 'Test',\r\n 'Detail': '{}'\r\n }\r\n ]\r\n)\r\n```\r\n\n", "before_files": [{"content": "import re\nimport datetime\nimport json\nimport uuid\nimport logging\nimport ipaddress\nfrom moto.events.models import Rule as rule_model\nfrom moto.events.responses import EventsHandler as events_handler\nfrom localstack import config\nfrom localstack.constants import APPLICATION_AMZ_JSON_1_1, TEST_AWS_ACCOUNT_ID\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.common import short_uid, extract_jsonpath\nfrom localstack.services.infra import start_moto_server\nfrom localstack.services.events.scheduler import JobScheduler\nfrom localstack.services.events.events_listener import _create_and_register_temp_dir, _dump_events_to_files\n\n\nLOG = logging.getLogger(__name__)\n\nDEFAULT_EVENT_BUS_NAME = 'default'\n\n# Event rules storage\nEVENT_RULES = {\n DEFAULT_EVENT_BUS_NAME: set()\n}\n\nCONTENT_BASE_FILTER_KEYWORDS = [\n 'prefix', 'anything-but', 'numeric', 'cidr', 'exists'\n]\n\n\ndef filter_event_with_target_input_path(target, event):\n input_path = target.get('InputPath')\n if input_path:\n event = extract_jsonpath(event, input_path)\n return event\n\n\ndef filter_event_based_on_event_format(self, rule, event):\n def filter_event(event_pattern, event):\n for key, value in event_pattern.items():\n event_value = event.get(key.lower())\n if not event_value:\n return False\n\n if isinstance(value, list) and not 
identify_content_base_parameter_in_pattern(value):\n if isinstance(event_value, list) and \\\n get_two_lists_intersection(value, event_value) == []:\n return False\n elif not isinstance(event_value, list) and \\\n isinstance(event_value, (str, int)) and \\\n event_value not in value:\n return False\n\n elif isinstance(value, list) and identify_content_base_parameter_in_pattern(value):\n if not filter_event_with_content_base_parameter(value, event_value):\n return False\n\n elif isinstance(value, (str, int)):\n try:\n if isinstance(json.loads(value), dict) and \\\n not filter_event(json.loads(value), event_value):\n return False\n except json.decoder.JSONDecodeError:\n return False\n return True\n\n rule_information = self.events_backend.describe_rule(rule)\n if rule_information.event_pattern:\n event_pattern = json.loads(rule_information.event_pattern)\n if not filter_event(event_pattern, event):\n return False\n return True\n\n\ndef process_events(event, targets):\n for target in targets:\n arn = target['Arn']\n changed_event = filter_event_with_target_input_path(target, event)\n aws_stack.send_event_to_target(arn, changed_event)\n\n\ndef apply_patches():\n # Fix events arn\n def rule_model_generate_arn(self, name):\n return 'arn:aws:events:{region_name}:{account_id}:rule/{name}'.format(\n region_name=self.region_name, account_id=TEST_AWS_ACCOUNT_ID, name=name\n )\n\n events_handler_put_rule_orig = events_handler.put_rule\n\n def events_handler_put_rule(self):\n name = self._get_param('Name')\n event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n if event_bus not in EVENT_RULES:\n EVENT_RULES[event_bus] = set()\n\n EVENT_RULES[event_bus].add(name)\n\n return events_handler_put_rule_orig(self)\n\n events_handler_delete_rule_orig = events_handler.delete_rule\n\n def events_handler_delete_rule(self):\n name = self._get_param('Name')\n event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n rules_set = EVENT_RULES.get(event_bus, set())\n if name not in rules_set:\n return self.error('ValidationException', 'Rule \"%s\" not found for event bus \"%s\"' % (name, event_bus))\n rules_set.remove(name)\n\n return events_handler_delete_rule_orig(self)\n\n # 2101 Events put-targets does not respond\n def events_handler_put_targets(self):\n rule_name = self._get_param('Rule')\n targets = self._get_param('Targets')\n\n if not rule_name:\n return self.error('ValidationException', 'Parameter Rule is required.')\n\n if not targets:\n return self.error('ValidationException', 'Parameter Targets is required.')\n\n if not self.events_backend.put_targets(rule_name, targets):\n return self.error(\n 'ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.'\n )\n\n return json.dumps({'FailedEntryCount': 0, 'FailedEntries': []}), self.response_headers\n\n def events_handler_put_events(self):\n entries = self._get_param('Entries')\n events = list(\n map(lambda event: {'event': event, 'uuid': str(uuid.uuid4())}, entries)\n )\n\n _create_and_register_temp_dir()\n _dump_events_to_files(events)\n\n for event_envelope in events:\n event = event_envelope['event']\n event_bus = event.get('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n rules = EVENT_RULES.get(event_bus, [])\n\n formatted_event = {\n 'version': '0',\n 'id': event_envelope['uuid'],\n 'detail-type': event.get('DetailType'),\n 'source': event.get('Source'),\n 'account': TEST_AWS_ACCOUNT_ID,\n 'time': datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),\n 'region': self.region,\n 'resources': 
event.get('Resources', []),\n 'detail': json.loads(event.get('Detail')),\n }\n\n targets = []\n for rule in rules:\n if filter_event_based_on_event_format(self, rule, formatted_event):\n targets.extend(self.events_backend.list_targets_by_rule(rule)['Targets'])\n\n # process event\n process_events(formatted_event, targets)\n\n content = {\n 'Entries': list(map(lambda event: {'EventId': event['uuid']}, events))\n }\n\n self.response_headers.update({\n 'Content-Type': APPLICATION_AMZ_JSON_1_1,\n 'x-amzn-RequestId': short_uid()\n })\n\n return json.dumps(content), self.response_headers\n\n rule_model._generate_arn = rule_model_generate_arn\n events_handler.put_rule = events_handler_put_rule\n events_handler.delete_rule = events_handler_delete_rule\n events_handler.put_targets = events_handler_put_targets\n events_handler.put_events = events_handler_put_events\n\n\ndef start_scheduler():\n JobScheduler.start()\n\n\ndef start_events(port=None, asynchronous=None, update_listener=None):\n port = port or config.PORT_EVENTS\n\n apply_patches()\n start_scheduler()\n\n return start_moto_server(\n key='events',\n port=port,\n name='Cloudwatch Events',\n asynchronous=asynchronous,\n update_listener=update_listener\n )\n\n\n# ---------------\n# HELPER METHODS\n# ---------------\n\n\ndef get_two_lists_intersection(lst1, lst2):\n lst3 = [value for value in lst1 if value in lst2]\n return lst3\n\n\ndef identify_content_base_parameter_in_pattern(parameters):\n if any([list(param.keys())[0] in CONTENT_BASE_FILTER_KEYWORDS for param in parameters if isinstance(param, dict)]):\n return True\n\n\ndef filter_event_with_content_base_parameter(pattern_value, event_value):\n for element in pattern_value:\n if (isinstance(element, (str, int))) \\\n and (event_value == element or element in event_value):\n return True\n elif isinstance(element, dict):\n element_key = list(element.keys())[0]\n element_value = element.get(element_key)\n if element_key.lower() == 'prefix':\n if re.match(r'^{}'.format(element_value), event_value):\n return True\n elif element_key.lower() == 'exists':\n if element_value and event_value:\n return True\n elif not element_value and not event_value:\n return True\n elif element_key.lower() == 'cidr':\n ips = [str(ip) for ip in ipaddress.IPv4Network(element_value)]\n if event_value in ips:\n return True\n elif element_key.lower() == 'numeric':\n if check_valid_numeric_content_base_rule(element_value):\n for index in range(len(element_value)):\n if isinstance(element_value[index], int):\n continue\n if element_value[index] == '>' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value <= element_value[index + 1]:\n break\n elif element_value[index] == '>=' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value < element_value[index + 1]:\n break\n elif element_value[index] == '<' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value >= element_value[index + 1]:\n break\n elif element_value[index] == '<=' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value > element_value[index + 1]:\n break\n else:\n return True\n\n elif element_key.lower() == 'anything-but':\n if isinstance(element_value, list) and \\\n event_value not in element_value:\n return True\n elif (isinstance(element_value, (str, int))) and \\\n event_value != element_value:\n return True\n elif isinstance(element_value, dict):\n nested_key = list(element_value)[0]\n if nested_key == 'prefix' and \\\n not re.match(r'^{}'.format(element_value.get(nested_key)), 
event_value):\n return True\n return False\n\n\ndef check_valid_numeric_content_base_rule(list_of_operators):\n if len(list_of_operators) > 4:\n return False\n\n if '=' in list_of_operators:\n return False\n\n if len(list_of_operators) > 2:\n upper_limit = None\n lower_limit = None\n for index in range(len(list_of_operators)):\n if not isinstance(list_of_operators[index], int) and \\\n '<' in list_of_operators[index]:\n upper_limit = list_of_operators[index + 1]\n if not isinstance(list_of_operators[index], int) and \\\n '>' in list_of_operators[index]:\n lower_limit = list_of_operators[index + 1]\n if upper_limit and lower_limit and upper_limit < lower_limit:\n return False\n index = index + 1\n return True\n", "path": "localstack/services/events/events_starter.py"}], "after_files": [{"content": "import re\nimport datetime\nimport json\nimport uuid\nimport logging\nimport ipaddress\nfrom moto.events.models import Rule as rule_model\nfrom moto.events.responses import EventsHandler as events_handler\nfrom localstack import config\nfrom localstack.constants import APPLICATION_AMZ_JSON_1_1, TEST_AWS_ACCOUNT_ID\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.common import short_uid, extract_jsonpath\nfrom localstack.services.infra import start_moto_server\nfrom localstack.services.events.scheduler import JobScheduler\nfrom localstack.services.events.events_listener import _create_and_register_temp_dir, _dump_events_to_files\n\n\nLOG = logging.getLogger(__name__)\n\nDEFAULT_EVENT_BUS_NAME = 'default'\n\n# Event rules storage\nEVENT_RULES = {\n DEFAULT_EVENT_BUS_NAME: set()\n}\n\nCONTENT_BASE_FILTER_KEYWORDS = [\n 'prefix', 'anything-but', 'numeric', 'cidr', 'exists'\n]\n\n\ndef filter_event_with_target_input_path(target, event):\n input_path = target.get('InputPath')\n if input_path:\n event = extract_jsonpath(event, input_path)\n return event\n\n\ndef filter_event_based_on_event_format(self, rule, event):\n def filter_event(event_pattern, event):\n for key, value in event_pattern.items():\n event_value = event.get(key.lower())\n if not event_value:\n return False\n\n if isinstance(value, list) and not identify_content_base_parameter_in_pattern(value):\n if isinstance(event_value, list) and \\\n get_two_lists_intersection(value, event_value) == []:\n return False\n elif not isinstance(event_value, list) and \\\n isinstance(event_value, (str, int)) and \\\n event_value not in value:\n return False\n\n elif isinstance(value, list) and identify_content_base_parameter_in_pattern(value):\n if not filter_event_with_content_base_parameter(value, event_value):\n return False\n\n elif isinstance(value, (str, int)):\n try:\n if isinstance(json.loads(value), dict) and \\\n not filter_event(json.loads(value), event_value):\n return False\n except json.decoder.JSONDecodeError:\n return False\n return True\n\n rule_information = self.events_backend.describe_rule(rule)\n if rule_information.event_pattern:\n event_pattern = json.loads(rule_information.event_pattern)\n if not filter_event(event_pattern, event):\n return False\n return True\n\n\ndef process_events(event, targets):\n for target in targets:\n arn = target['Arn']\n changed_event = filter_event_with_target_input_path(target, event)\n aws_stack.send_event_to_target(arn, changed_event)\n\n\ndef apply_patches():\n # Fix events arn\n def rule_model_generate_arn(self, name):\n return 'arn:aws:events:{region_name}:{account_id}:rule/{name}'.format(\n region_name=self.region_name, account_id=TEST_AWS_ACCOUNT_ID, name=name\n )\n\n 
events_handler_put_rule_orig = events_handler.put_rule\n\n def events_handler_put_rule(self):\n name = self._get_param('Name')\n event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n if event_bus not in EVENT_RULES:\n EVENT_RULES[event_bus] = set()\n\n EVENT_RULES[event_bus].add(name)\n\n return events_handler_put_rule_orig(self)\n\n events_handler_delete_rule_orig = events_handler.delete_rule\n\n def events_handler_delete_rule(self):\n name = self._get_param('Name')\n event_bus = self._get_param('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n rules_set = EVENT_RULES.get(event_bus, set())\n if name not in rules_set:\n return self.error('ValidationException', 'Rule \"%s\" not found for event bus \"%s\"' % (name, event_bus))\n rules_set.remove(name)\n\n return events_handler_delete_rule_orig(self)\n\n # 2101 Events put-targets does not respond\n def events_handler_put_targets(self):\n rule_name = self._get_param('Rule')\n targets = self._get_param('Targets')\n\n if not rule_name:\n return self.error('ValidationException', 'Parameter Rule is required.')\n\n if not targets:\n return self.error('ValidationException', 'Parameter Targets is required.')\n\n if not self.events_backend.put_targets(rule_name, targets):\n return self.error(\n 'ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.'\n )\n\n return json.dumps({'FailedEntryCount': 0, 'FailedEntries': []}), self.response_headers\n\n def events_handler_put_events(self):\n entries = self._get_param('Entries')\n events = list(\n map(lambda event: {'event': event, 'uuid': str(uuid.uuid4())}, entries)\n )\n\n _create_and_register_temp_dir()\n _dump_events_to_files(events)\n\n for event_envelope in events:\n event = event_envelope['event']\n event_bus = event.get('EventBusName') or DEFAULT_EVENT_BUS_NAME\n\n rules = EVENT_RULES.get(event_bus, [])\n\n formatted_event = {\n 'version': '0',\n 'id': event_envelope['uuid'],\n 'detail-type': event.get('DetailType'),\n 'source': event.get('Source'),\n 'account': TEST_AWS_ACCOUNT_ID,\n 'time': datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ'),\n 'region': self.region,\n 'resources': event.get('Resources', []),\n 'detail': json.loads(event.get('Detail', '{}')),\n }\n\n targets = []\n for rule in rules:\n if filter_event_based_on_event_format(self, rule, formatted_event):\n targets.extend(self.events_backend.list_targets_by_rule(rule)['Targets'])\n\n # process event\n process_events(formatted_event, targets)\n\n content = {\n 'Entries': list(map(lambda event: {'EventId': event['uuid']}, events))\n }\n\n self.response_headers.update({\n 'Content-Type': APPLICATION_AMZ_JSON_1_1,\n 'x-amzn-RequestId': short_uid()\n })\n\n return json.dumps(content), self.response_headers\n\n rule_model._generate_arn = rule_model_generate_arn\n events_handler.put_rule = events_handler_put_rule\n events_handler.delete_rule = events_handler_delete_rule\n events_handler.put_targets = events_handler_put_targets\n events_handler.put_events = events_handler_put_events\n\n\ndef start_scheduler():\n JobScheduler.start()\n\n\ndef start_events(port=None, asynchronous=None, update_listener=None):\n port = port or config.PORT_EVENTS\n\n apply_patches()\n start_scheduler()\n\n return start_moto_server(\n key='events',\n port=port,\n name='Cloudwatch Events',\n asynchronous=asynchronous,\n update_listener=update_listener\n )\n\n\n# ---------------\n# HELPER METHODS\n# ---------------\n\n\ndef get_two_lists_intersection(lst1, lst2):\n lst3 = [value for value in lst1 if value in lst2]\n return 
lst3\n\n\ndef identify_content_base_parameter_in_pattern(parameters):\n if any([list(param.keys())[0] in CONTENT_BASE_FILTER_KEYWORDS for param in parameters if isinstance(param, dict)]):\n return True\n\n\ndef filter_event_with_content_base_parameter(pattern_value, event_value):\n for element in pattern_value:\n if (isinstance(element, (str, int))) \\\n and (event_value == element or element in event_value):\n return True\n elif isinstance(element, dict):\n element_key = list(element.keys())[0]\n element_value = element.get(element_key)\n if element_key.lower() == 'prefix':\n if re.match(r'^{}'.format(element_value), event_value):\n return True\n elif element_key.lower() == 'exists':\n if element_value and event_value:\n return True\n elif not element_value and not event_value:\n return True\n elif element_key.lower() == 'cidr':\n ips = [str(ip) for ip in ipaddress.IPv4Network(element_value)]\n if event_value in ips:\n return True\n elif element_key.lower() == 'numeric':\n if check_valid_numeric_content_base_rule(element_value):\n for index in range(len(element_value)):\n if isinstance(element_value[index], int):\n continue\n if element_value[index] == '>' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value <= element_value[index + 1]:\n break\n elif element_value[index] == '>=' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value < element_value[index + 1]:\n break\n elif element_value[index] == '<' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value >= element_value[index + 1]:\n break\n elif element_value[index] == '<=' and \\\n isinstance(element_value[index + 1], int) and \\\n event_value > element_value[index + 1]:\n break\n else:\n return True\n\n elif element_key.lower() == 'anything-but':\n if isinstance(element_value, list) and \\\n event_value not in element_value:\n return True\n elif (isinstance(element_value, (str, int))) and \\\n event_value != element_value:\n return True\n elif isinstance(element_value, dict):\n nested_key = list(element_value)[0]\n if nested_key == 'prefix' and \\\n not re.match(r'^{}'.format(element_value.get(nested_key)), event_value):\n return True\n return False\n\n\ndef check_valid_numeric_content_base_rule(list_of_operators):\n if len(list_of_operators) > 4:\n return False\n\n if '=' in list_of_operators:\n return False\n\n if len(list_of_operators) > 2:\n upper_limit = None\n lower_limit = None\n for index in range(len(list_of_operators)):\n if not isinstance(list_of_operators[index], int) and \\\n '<' in list_of_operators[index]:\n upper_limit = list_of_operators[index + 1]\n if not isinstance(list_of_operators[index], int) and \\\n '>' in list_of_operators[index]:\n lower_limit = list_of_operators[index + 1]\n if upper_limit and lower_limit and upper_limit < lower_limit:\n return False\n index = index + 1\n return True\n", "path": "localstack/services/events/events_starter.py"}]} | 3,681 | 129 |
gh_patches_debug_10298 | rasdani/github-patches | git_diff | kubeflow__pipelines-4331 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow output artifact store configuration (vs hard coded)
It seems like the output artifacts are always stored in a specific MinIO service, port, namespace, bucket, secrets, etc. (`minio-service.kubeflow:9000`).
see: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148
It would be great to make this flexible, e.g. allowing the use of S3, or changing the namespace or bucket names.
I suggest making it configurable; I can do such a PR if we agree it's needed.
flexible pipeline service (host) path in client SDK
When creating an SDK `Client()`, the path to the `ml-pipeline` API service is loaded from a hard-coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicates a specific k8s namespace. It can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:
`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`
to:
`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`
Also note that when a user provides the `host` parameter, the IPython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`); this seems like a potential bug.
If it's acceptable, I can submit a PR for the line change above.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/ibm-components/watson/train/src/wml-train.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 # define the function to train a model on wml
14
15 def getSecret(secret):
16 with open(secret, 'r') as f:
17 res = f.readline().strip('\'')
18 f.close()
19 return res
20
21 def train(args):
22 from watson_machine_learning_client import WatsonMachineLearningAPIClient
23 from minio import Minio
24 from urllib.parse import urlsplit
25 from pathlib import Path
26 import os,time
27
28 wml_train_code = args.train_code
29 wml_execution_command = args.execution_command.strip('\'')
30 wml_framework_name = args.framework if args.framework else 'tensorflow'
31 wml_framework_version = args.framework_version if args.framework_version else '1.15'
32 wml_runtime_name = args.runtime if args.runtime else 'python'
33 wml_runtime_version = args.runtime_version if args.runtime_version else '3.6'
34 wml_run_definition = args.run_definition if args.run_definition else 'python-tensorflow-definition'
35 wml_run_name = args.run_name if args.run_name else 'python-tensorflow-run'
36 wml_author_name = args.author_name if args.author_name else 'default-author'
37 wml_compute_name = args.compute_name if args.compute_name else 'k80'
38 wml_compute_nodes = args.compute_nodes if args.compute_nodes else '1'
39
40 wml_runtime_version_v4 = wml_framework_version + '-py' + wml_runtime_version
41 wml_compute_nodes_v4 = int(wml_compute_nodes)
42
43 # retrieve credentials
44 wml_url = getSecret("/app/secrets/wml_url")
45 wml_apikey = getSecret("/app/secrets/wml_apikey")
46 wml_instance_id = getSecret("/app/secrets/wml_instance_id")
47
48 wml_data_source_type = getSecret("/app/secrets/wml_data_source_type")
49
50 cos_endpoint = getSecret("/app/secrets/cos_endpoint")
51 cos_endpoint_parts = urlsplit(cos_endpoint)
52 if bool(cos_endpoint_parts.scheme):
53 cos_endpoint_hostname = cos_endpoint_parts.hostname
54 else:
55 cos_endpoint_hostname = cos_endpoint
56 cos_endpoint = 'https://' + cos_endpoint
57 cos_access_key = getSecret("/app/secrets/cos_access_key")
58 cos_secret_key = getSecret("/app/secrets/cos_secret_key")
59 cos_input_bucket = getSecret("/app/secrets/cos_input_bucket")
60 cos_output_bucket = getSecret("/app/secrets/cos_output_bucket")
61
62 # download model code
63 model_code = os.path.join('/app', wml_train_code)
64
65 cos = Minio(cos_endpoint_hostname,
66 access_key=cos_access_key,
67 secret_key=cos_secret_key,
68 secure=True)
69
70 cos.fget_object(cos_input_bucket, wml_train_code, model_code)
71
72 # set up the WML client
73 wml_credentials = {
74 "url": wml_url,
75 "instance_id": wml_instance_id,
76 "apikey": wml_apikey
77 }
78 client = WatsonMachineLearningAPIClient(wml_credentials)
79 # define the model
80 lib_meta = {
81 client.runtimes.LibraryMetaNames.NAME: wml_run_definition,
82 client.runtimes.LibraryMetaNames.VERSION: wml_framework_version,
83 client.runtimes.LibraryMetaNames.FILEPATH: model_code,
84 client.runtimes.LibraryMetaNames.PLATFORM: {"name": wml_framework_name, "versions": [wml_framework_version]}
85 }
86 # check exisiting library
87 library_details = client.runtimes.get_library_details()
88 for library_detail in library_details['resources']:
89 if library_detail['entity']['name'] == wml_run_definition:
90 # Delete library if exist because we cannot update model_code
91 uid = client.runtimes.get_library_uid(library_detail)
92 client.repository.delete(uid)
93 break
94 custom_library_details = client.runtimes.store_library(lib_meta)
95 custom_library_uid = client.runtimes.get_library_uid(custom_library_details)
96
97 # create a pipeline with the model definitions included
98 doc = {
99 "doc_type": "pipeline",
100 "version": "2.0",
101 "primary_pipeline": wml_framework_name,
102 "pipelines": [{
103 "id": wml_framework_name,
104 "runtime_ref": "hybrid",
105 "nodes": [{
106 "id": "training",
107 "type": "model_node",
108 "op": "dl_train",
109 "runtime_ref": wml_run_name,
110 "inputs": [],
111 "outputs": [],
112 "parameters": {
113 "name": "tf-mnist",
114 "description": wml_run_definition,
115 "command": wml_execution_command,
116 "training_lib_href": "/v4/libraries/"+custom_library_uid,
117 "compute": {
118 "name": wml_compute_name,
119 "nodes": wml_compute_nodes_v4
120 }
121 }
122 }]
123 }],
124 "runtimes": [{
125 "id": wml_run_name,
126 "name": wml_framework_name,
127 "version": wml_runtime_version_v4
128 }]
129 }
130
131 metadata = {
132 client.repository.PipelineMetaNames.NAME: wml_run_name,
133 client.repository.PipelineMetaNames.DOCUMENT: doc
134 }
135 pipeline_id = client.pipelines.get_uid(client.repository.store_pipeline(meta_props=metadata))
136
137 client.pipelines.get_details(pipeline_id)
138
139 # start the training run for v4
140 metadata = {
141 client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
142 "name": "training-results-reference_name",
143 "connection": {
144 "endpoint_url": cos_endpoint,
145 "access_key_id": cos_access_key,
146 "secret_access_key": cos_secret_key
147 },
148 "location": {
149 "bucket": cos_output_bucket
150 },
151 "type": wml_data_source_type
152 },
153 client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES:[{
154 "name": "training_input_data",
155 "type": wml_data_source_type,
156 "connection": {
157 "endpoint_url": cos_endpoint,
158 "access_key_id": cos_access_key,
159 "secret_access_key": cos_secret_key
160 },
161 "location": {
162 "bucket": cos_input_bucket
163 }
164 }],
165 client.training.ConfigurationMetaNames.PIPELINE_UID: pipeline_id
166 }
167
168 training_id = client.training.get_uid(client.training.run(meta_props=metadata))
169 print("training_id", client.training.get_details(training_id))
170 print("get status", client.training.get_status(training_id))
171 # for v4
172 run_details = client.training.get_details(training_id)
173 run_uid = training_id
174
175 # print logs
176 client.training.monitor_logs(run_uid)
177 client.training.monitor_metrics(run_uid)
178
179 # checking the result
180 status = client.training.get_status(run_uid)
181 print("status: ", status)
182 while status['state'] != 'completed':
183 time.sleep(20)
184 status = client.training.get_status(run_uid)
185 print(status)
186
187 Path(args.output_run_uid_path).parent.mkdir(parents=True, exist_ok=True)
188 Path(args.output_run_uid_path).write_text(run_uid)
189
190 # Get training details
191 training_details = client.training.get_details(run_uid)
192 print("training_details", training_details)
193
194 training_uid = training_details['entity']['results_reference']['location']['training']
195 Path(args.output_training_uid_path).parent.mkdir(parents=True, exist_ok=True)
196 Path(args.output_training_uid_path).write_text(training_uid)
197
198 if __name__ == "__main__":
199 import argparse
200 parser = argparse.ArgumentParser()
201 parser.add_argument('--train-code', type=str, required=True)
202 parser.add_argument('--execution-command', type=str, required=True)
203 parser.add_argument('--framework', type=str)
204 parser.add_argument('--framework-version', type=str)
205 parser.add_argument('--runtime', type=str)
206 parser.add_argument('--runtime-version', type=str)
207 parser.add_argument('--run-definition', type=str)
208 parser.add_argument('--run-name', type=str)
209 parser.add_argument('--author-name', type=str)
210 parser.add_argument('--config', type=str, default="secret_name")
211 parser.add_argument('--compute-name', type=str)
212 parser.add_argument('--compute-nodes', type=str)
213 parser.add_argument('--output-run-uid-path', type=str, default="/tmp/run_uid")
214 parser.add_argument('--output-training-uid-path', type=str, default="/tmp/training_uid")
215 args = parser.parse_args()
216 # Check secret name is not empty
217 if (not args.config):
218 print("Secret for this pipeline is not properly created, exiting with status 1...")
219 exit(1)
220 train(args)
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/components/ibm-components/watson/train/src/wml-train.py b/components/ibm-components/watson/train/src/wml-train.py
--- a/components/ibm-components/watson/train/src/wml-train.py
+++ b/components/ibm-components/watson/train/src/wml-train.py
@@ -191,9 +191,8 @@
training_details = client.training.get_details(run_uid)
print("training_details", training_details)
- training_uid = training_details['entity']['results_reference']['location']['training']
Path(args.output_training_uid_path).parent.mkdir(parents=True, exist_ok=True)
- Path(args.output_training_uid_path).write_text(training_uid)
+ Path(args.output_training_uid_path).write_text(run_uid)
if __name__ == "__main__":
import argparse
| {"golden_diff": "diff --git a/components/ibm-components/watson/train/src/wml-train.py b/components/ibm-components/watson/train/src/wml-train.py\n--- a/components/ibm-components/watson/train/src/wml-train.py\n+++ b/components/ibm-components/watson/train/src/wml-train.py\n@@ -191,9 +191,8 @@\n training_details = client.training.get_details(run_uid)\n print(\"training_details\", training_details)\n \n- training_uid = training_details['entity']['results_reference']['location']['training']\n Path(args.output_training_uid_path).parent.mkdir(parents=True, exist_ok=True)\n- Path(args.output_training_uid_path).write_text(training_uid)\n+ Path(args.output_training_uid_path).write_text(run_uid)\n \n if __name__ == \"__main__\":\n import argparse\n", "issue": "allow output artifact store configuration (vs hard coded)\nit seems like the output artifacts are always stored in a specific minio service, port, namespace, bucket, secrets, etc (`minio-service.kubeflow:9000`). \r\n\r\nsee: https://github.com/kubeflow/pipelines/blob/f40a22a3f4a8e06d20cf3e3f425b5058d5c87e0b/sdk/python/kfp/compiler/_op_to_template.py#L148\r\n\r\nit would be great to make it flexible, e.g. allow using S3, or change namespace or bucket names.\r\ni suggest making it configurable, i can do such PR if we agree its needed. \nflexible pipeline service (host) path in client SDK \nwhen creating an SDK `Client()` the path to `ml-pipeline` API service is loaded from a hard coded value (`ml-pipeline.kubeflow.svc.cluster.local:8888`) which indicate a specific k8s namespace. it can be valuable to load that default value from an env variable, i.e. changing the line in `_client.py` from:\r\n\r\n`config.host = host if host else Client.IN_CLUSTER_DNS_NAME`\r\n\r\nto:\r\n\r\n`config.host = host or os.environ.get('ML_PIPELINE_DNS_NAME',Client.IN_CLUSTER_DNS_NAME)`\r\n\r\nalso note that when a user provide the `host` parameter, the ipython output points to the API server and not to the UI service (see the logic in `_get_url_prefix()`), it seems like a potential bug\r\n\r\nif its acceptable i can submit a PR for the line change above\r\n \n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# define the function to train a model on wml\n\ndef getSecret(secret):\n with open(secret, 'r') as f:\n res = f.readline().strip('\\'')\n f.close()\n return res\n\ndef train(args):\n from watson_machine_learning_client import WatsonMachineLearningAPIClient\n from minio import Minio\n from urllib.parse import urlsplit\n from pathlib import Path\n import os,time\n\n wml_train_code = args.train_code\n wml_execution_command = args.execution_command.strip('\\'')\n wml_framework_name = args.framework if args.framework else 'tensorflow'\n wml_framework_version = args.framework_version if args.framework_version else '1.15'\n wml_runtime_name = args.runtime if args.runtime else 'python'\n wml_runtime_version = args.runtime_version if args.runtime_version else '3.6'\n wml_run_definition = args.run_definition if args.run_definition else 
'python-tensorflow-definition'\n wml_run_name = args.run_name if args.run_name else 'python-tensorflow-run'\n wml_author_name = args.author_name if args.author_name else 'default-author'\n wml_compute_name = args.compute_name if args.compute_name else 'k80'\n wml_compute_nodes = args.compute_nodes if args.compute_nodes else '1'\n\n wml_runtime_version_v4 = wml_framework_version + '-py' + wml_runtime_version\n wml_compute_nodes_v4 = int(wml_compute_nodes)\n\n # retrieve credentials\n wml_url = getSecret(\"/app/secrets/wml_url\")\n wml_apikey = getSecret(\"/app/secrets/wml_apikey\")\n wml_instance_id = getSecret(\"/app/secrets/wml_instance_id\")\n\n wml_data_source_type = getSecret(\"/app/secrets/wml_data_source_type\")\n \n cos_endpoint = getSecret(\"/app/secrets/cos_endpoint\")\n cos_endpoint_parts = urlsplit(cos_endpoint)\n if bool(cos_endpoint_parts.scheme):\n cos_endpoint_hostname = cos_endpoint_parts.hostname\n else:\n cos_endpoint_hostname = cos_endpoint\n cos_endpoint = 'https://' + cos_endpoint\n cos_access_key = getSecret(\"/app/secrets/cos_access_key\")\n cos_secret_key = getSecret(\"/app/secrets/cos_secret_key\")\n cos_input_bucket = getSecret(\"/app/secrets/cos_input_bucket\")\n cos_output_bucket = getSecret(\"/app/secrets/cos_output_bucket\")\n\n # download model code\n model_code = os.path.join('/app', wml_train_code)\n\n cos = Minio(cos_endpoint_hostname,\n access_key=cos_access_key,\n secret_key=cos_secret_key,\n secure=True)\n\n cos.fget_object(cos_input_bucket, wml_train_code, model_code)\n\n # set up the WML client\n wml_credentials = {\n \"url\": wml_url,\n \"instance_id\": wml_instance_id,\n \"apikey\": wml_apikey\n }\n client = WatsonMachineLearningAPIClient(wml_credentials)\n # define the model\n lib_meta = {\n client.runtimes.LibraryMetaNames.NAME: wml_run_definition,\n client.runtimes.LibraryMetaNames.VERSION: wml_framework_version,\n client.runtimes.LibraryMetaNames.FILEPATH: model_code,\n client.runtimes.LibraryMetaNames.PLATFORM: {\"name\": wml_framework_name, \"versions\": [wml_framework_version]}\n }\n # check exisiting library\n library_details = client.runtimes.get_library_details()\n for library_detail in library_details['resources']:\n if library_detail['entity']['name'] == wml_run_definition:\n # Delete library if exist because we cannot update model_code\n uid = client.runtimes.get_library_uid(library_detail)\n client.repository.delete(uid)\n break\n custom_library_details = client.runtimes.store_library(lib_meta)\n custom_library_uid = client.runtimes.get_library_uid(custom_library_details)\n\n # create a pipeline with the model definitions included\n doc = {\n \"doc_type\": \"pipeline\",\n \"version\": \"2.0\",\n \"primary_pipeline\": wml_framework_name,\n \"pipelines\": [{\n \"id\": wml_framework_name,\n \"runtime_ref\": \"hybrid\",\n \"nodes\": [{\n \"id\": \"training\",\n \"type\": \"model_node\",\n \"op\": \"dl_train\",\n \"runtime_ref\": wml_run_name,\n \"inputs\": [],\n \"outputs\": [],\n \"parameters\": {\n \"name\": \"tf-mnist\",\n \"description\": wml_run_definition,\n \"command\": wml_execution_command,\n \"training_lib_href\": \"/v4/libraries/\"+custom_library_uid,\n \"compute\": {\n \"name\": wml_compute_name,\n \"nodes\": wml_compute_nodes_v4\n }\n }\n }]\n }],\n \"runtimes\": [{\n \"id\": wml_run_name,\n \"name\": wml_framework_name,\n \"version\": wml_runtime_version_v4\n }]\n }\n\n metadata = {\n client.repository.PipelineMetaNames.NAME: wml_run_name,\n client.repository.PipelineMetaNames.DOCUMENT: doc\n }\n pipeline_id = 
client.pipelines.get_uid(client.repository.store_pipeline(meta_props=metadata))\n \n client.pipelines.get_details(pipeline_id)\n\n # start the training run for v4\n metadata = {\n client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {\n \"name\": \"training-results-reference_name\",\n \"connection\": {\n \"endpoint_url\": cos_endpoint,\n \"access_key_id\": cos_access_key,\n \"secret_access_key\": cos_secret_key\n },\n \"location\": {\n \"bucket\": cos_output_bucket\n },\n \"type\": wml_data_source_type\n },\n client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES:[{\n \"name\": \"training_input_data\",\n \"type\": wml_data_source_type,\n \"connection\": {\n \"endpoint_url\": cos_endpoint,\n \"access_key_id\": cos_access_key,\n \"secret_access_key\": cos_secret_key\n },\n \"location\": {\n \"bucket\": cos_input_bucket\n }\n }],\n client.training.ConfigurationMetaNames.PIPELINE_UID: pipeline_id\n }\n\n training_id = client.training.get_uid(client.training.run(meta_props=metadata))\n print(\"training_id\", client.training.get_details(training_id))\n print(\"get status\", client.training.get_status(training_id))\n # for v4\n run_details = client.training.get_details(training_id)\n run_uid = training_id\n\n # print logs\n client.training.monitor_logs(run_uid)\n client.training.monitor_metrics(run_uid)\n\n # checking the result\n status = client.training.get_status(run_uid)\n print(\"status: \", status)\n while status['state'] != 'completed':\n time.sleep(20)\n status = client.training.get_status(run_uid)\n print(status)\n\n Path(args.output_run_uid_path).parent.mkdir(parents=True, exist_ok=True)\n Path(args.output_run_uid_path).write_text(run_uid)\n\n # Get training details\n training_details = client.training.get_details(run_uid)\n print(\"training_details\", training_details)\n \n training_uid = training_details['entity']['results_reference']['location']['training']\n Path(args.output_training_uid_path).parent.mkdir(parents=True, exist_ok=True)\n Path(args.output_training_uid_path).write_text(training_uid)\n\nif __name__ == \"__main__\":\n import argparse\n parser = argparse.ArgumentParser()\n parser.add_argument('--train-code', type=str, required=True)\n parser.add_argument('--execution-command', type=str, required=True)\n parser.add_argument('--framework', type=str)\n parser.add_argument('--framework-version', type=str)\n parser.add_argument('--runtime', type=str)\n parser.add_argument('--runtime-version', type=str)\n parser.add_argument('--run-definition', type=str)\n parser.add_argument('--run-name', type=str)\n parser.add_argument('--author-name', type=str)\n parser.add_argument('--config', type=str, default=\"secret_name\")\n parser.add_argument('--compute-name', type=str)\n parser.add_argument('--compute-nodes', type=str)\n parser.add_argument('--output-run-uid-path', type=str, default=\"/tmp/run_uid\")\n parser.add_argument('--output-training-uid-path', type=str, default=\"/tmp/training_uid\")\n args = parser.parse_args()\n # Check secret name is not empty\n if (not args.config):\n print(\"Secret for this pipeline is not properly created, exiting with status 1...\")\n exit(1)\n train(args)\n", "path": "components/ibm-components/watson/train/src/wml-train.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed 
to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# define the function to train a model on wml\n\ndef getSecret(secret):\n with open(secret, 'r') as f:\n res = f.readline().strip('\\'')\n f.close()\n return res\n\ndef train(args):\n from watson_machine_learning_client import WatsonMachineLearningAPIClient\n from minio import Minio\n from urllib.parse import urlsplit\n from pathlib import Path\n import os,time\n\n wml_train_code = args.train_code\n wml_execution_command = args.execution_command.strip('\\'')\n wml_framework_name = args.framework if args.framework else 'tensorflow'\n wml_framework_version = args.framework_version if args.framework_version else '1.15'\n wml_runtime_name = args.runtime if args.runtime else 'python'\n wml_runtime_version = args.runtime_version if args.runtime_version else '3.6'\n wml_run_definition = args.run_definition if args.run_definition else 'python-tensorflow-definition'\n wml_run_name = args.run_name if args.run_name else 'python-tensorflow-run'\n wml_author_name = args.author_name if args.author_name else 'default-author'\n wml_compute_name = args.compute_name if args.compute_name else 'k80'\n wml_compute_nodes = args.compute_nodes if args.compute_nodes else '1'\n\n wml_runtime_version_v4 = wml_framework_version + '-py' + wml_runtime_version\n wml_compute_nodes_v4 = int(wml_compute_nodes)\n\n # retrieve credentials\n wml_url = getSecret(\"/app/secrets/wml_url\")\n wml_apikey = getSecret(\"/app/secrets/wml_apikey\")\n wml_instance_id = getSecret(\"/app/secrets/wml_instance_id\")\n\n wml_data_source_type = getSecret(\"/app/secrets/wml_data_source_type\")\n \n cos_endpoint = getSecret(\"/app/secrets/cos_endpoint\")\n cos_endpoint_parts = urlsplit(cos_endpoint)\n if bool(cos_endpoint_parts.scheme):\n cos_endpoint_hostname = cos_endpoint_parts.hostname\n else:\n cos_endpoint_hostname = cos_endpoint\n cos_endpoint = 'https://' + cos_endpoint\n cos_access_key = getSecret(\"/app/secrets/cos_access_key\")\n cos_secret_key = getSecret(\"/app/secrets/cos_secret_key\")\n cos_input_bucket = getSecret(\"/app/secrets/cos_input_bucket\")\n cos_output_bucket = getSecret(\"/app/secrets/cos_output_bucket\")\n\n # download model code\n model_code = os.path.join('/app', wml_train_code)\n\n cos = Minio(cos_endpoint_hostname,\n access_key=cos_access_key,\n secret_key=cos_secret_key,\n secure=True)\n\n cos.fget_object(cos_input_bucket, wml_train_code, model_code)\n\n # set up the WML client\n wml_credentials = {\n \"url\": wml_url,\n \"instance_id\": wml_instance_id,\n \"apikey\": wml_apikey\n }\n client = WatsonMachineLearningAPIClient(wml_credentials)\n # define the model\n lib_meta = {\n client.runtimes.LibraryMetaNames.NAME: wml_run_definition,\n client.runtimes.LibraryMetaNames.VERSION: wml_framework_version,\n client.runtimes.LibraryMetaNames.FILEPATH: model_code,\n client.runtimes.LibraryMetaNames.PLATFORM: {\"name\": wml_framework_name, \"versions\": [wml_framework_version]}\n }\n # check exisiting library\n library_details = client.runtimes.get_library_details()\n for library_detail in library_details['resources']:\n if library_detail['entity']['name'] == wml_run_definition:\n # Delete library if exist because we cannot update model_code\n uid = client.runtimes.get_library_uid(library_detail)\n client.repository.delete(uid)\n break\n 
custom_library_details = client.runtimes.store_library(lib_meta)\n custom_library_uid = client.runtimes.get_library_uid(custom_library_details)\n\n # create a pipeline with the model definitions included\n doc = {\n \"doc_type\": \"pipeline\",\n \"version\": \"2.0\",\n \"primary_pipeline\": wml_framework_name,\n \"pipelines\": [{\n \"id\": wml_framework_name,\n \"runtime_ref\": \"hybrid\",\n \"nodes\": [{\n \"id\": \"training\",\n \"type\": \"model_node\",\n \"op\": \"dl_train\",\n \"runtime_ref\": wml_run_name,\n \"inputs\": [],\n \"outputs\": [],\n \"parameters\": {\n \"name\": \"tf-mnist\",\n \"description\": wml_run_definition,\n \"command\": wml_execution_command,\n \"training_lib_href\": \"/v4/libraries/\"+custom_library_uid,\n \"compute\": {\n \"name\": wml_compute_name,\n \"nodes\": wml_compute_nodes_v4\n }\n }\n }]\n }],\n \"runtimes\": [{\n \"id\": wml_run_name,\n \"name\": wml_framework_name,\n \"version\": wml_runtime_version_v4\n }]\n }\n\n metadata = {\n client.repository.PipelineMetaNames.NAME: wml_run_name,\n client.repository.PipelineMetaNames.DOCUMENT: doc\n }\n pipeline_id = client.pipelines.get_uid(client.repository.store_pipeline(meta_props=metadata))\n \n client.pipelines.get_details(pipeline_id)\n\n # start the training run for v4\n metadata = {\n client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {\n \"name\": \"training-results-reference_name\",\n \"connection\": {\n \"endpoint_url\": cos_endpoint,\n \"access_key_id\": cos_access_key,\n \"secret_access_key\": cos_secret_key\n },\n \"location\": {\n \"bucket\": cos_output_bucket\n },\n \"type\": wml_data_source_type\n },\n client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES:[{\n \"name\": \"training_input_data\",\n \"type\": wml_data_source_type,\n \"connection\": {\n \"endpoint_url\": cos_endpoint,\n \"access_key_id\": cos_access_key,\n \"secret_access_key\": cos_secret_key\n },\n \"location\": {\n \"bucket\": cos_input_bucket\n }\n }],\n client.training.ConfigurationMetaNames.PIPELINE_UID: pipeline_id\n }\n\n training_id = client.training.get_uid(client.training.run(meta_props=metadata))\n print(\"training_id\", client.training.get_details(training_id))\n print(\"get status\", client.training.get_status(training_id))\n # for v4\n run_details = client.training.get_details(training_id)\n run_uid = training_id\n\n # print logs\n client.training.monitor_logs(run_uid)\n client.training.monitor_metrics(run_uid)\n\n # checking the result\n status = client.training.get_status(run_uid)\n print(\"status: \", status)\n while status['state'] != 'completed':\n time.sleep(20)\n status = client.training.get_status(run_uid)\n print(status)\n\n Path(args.output_run_uid_path).parent.mkdir(parents=True, exist_ok=True)\n Path(args.output_run_uid_path).write_text(run_uid)\n\n # Get training details\n training_details = client.training.get_details(run_uid)\n print(\"training_details\", training_details)\n \n Path(args.output_training_uid_path).parent.mkdir(parents=True, exist_ok=True)\n Path(args.output_training_uid_path).write_text(run_uid)\n\nif __name__ == \"__main__\":\n import argparse\n parser = argparse.ArgumentParser()\n parser.add_argument('--train-code', type=str, required=True)\n parser.add_argument('--execution-command', type=str, required=True)\n parser.add_argument('--framework', type=str)\n parser.add_argument('--framework-version', type=str)\n parser.add_argument('--runtime', type=str)\n parser.add_argument('--runtime-version', type=str)\n parser.add_argument('--run-definition', type=str)\n 
parser.add_argument('--run-name', type=str)\n parser.add_argument('--author-name', type=str)\n parser.add_argument('--config', type=str, default=\"secret_name\")\n parser.add_argument('--compute-name', type=str)\n parser.add_argument('--compute-nodes', type=str)\n parser.add_argument('--output-run-uid-path', type=str, default=\"/tmp/run_uid\")\n parser.add_argument('--output-training-uid-path', type=str, default=\"/tmp/training_uid\")\n args = parser.parse_args()\n # Check secret name is not empty\n if (not args.config):\n print(\"Secret for this pipeline is not properly created, exiting with status 1...\")\n exit(1)\n train(args)\n", "path": "components/ibm-components/watson/train/src/wml-train.py"}]} | 3,142 | 183 |
gh_patches_debug_26535 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1004 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 Error with pylibmc+django+gunicorn
Hi, I'm integrating the Django cache in my app. If I run gunicorn without `ddtrace-run`, everything works fine, but when running with `ddtrace-run gunicorn app.wsgi:application --name app --workers 2 --timeout 30 --log-level=DEBUG --log-file=- --bind=unix:/projects/app/run/gunicorn.sock` I get this error:
```
[2019-02-16 11:30:37 +0000] [138] [DEBUG] GET /api/accounts/me/
Internal Server Error: /api/accounts/me/
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3.6/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 489, in dispatch
response = self.handle_exception(exc)
File "/usr/lib/python3.6/site-packages/ddtrace/contrib/django/restframework.py", line 26, in _traced_handle_exception
return wrapped(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 449, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 486, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/rest_framework/generics.py", line 201, in get
return self.list(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/utils/decorators.py", line 141, in _wrapped_view
result = middleware.process_request(request)
File "/usr/lib/python3.6/site-packages/django/middleware/cache.py", line 133, in process_request
cache_key = get_cache_key(request, self.key_prefix, 'GET', cache=self.cache)
File "/usr/lib/python3.6/site-packages/django/utils/cache.py", line 362, in get_cache_key
headerlist = cache.get(cache_key)
File "/usr/lib/python3.6/site-packages/ddtrace/contrib/django/cache.py", line 67, in wrapped
return method(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/core/cache/backends/memcached.py", line 79, in get
val = self._cache.get(key)
File "/usr/lib/python3.6/site-packages/ddtrace/contrib/pylibmc/client.py", line 72, in get
return self._trace_cmd("get", *args, **kwargs)
File "/usr/lib/python3.6/site-packages/ddtrace/contrib/pylibmc/client.py", line 115, in _trace_cmd
with self._span(method_name) as span:
AttributeError: __enter__
```
My lib versions:
```
ddtrace==0.20.4
Django==1.11.12
gunicorn==19.4.5
pylibmc==1.6.0
```
And my settings:
```
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
'LOCATION': os.environ.get('MEMCACHED_HOST', 'memcached:11211'),
}
}
```
Any idea?
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/pylibmc/client.py`
Content:
```
1 import random
2
3 # 3p
4 from ddtrace.vendor.wrapt import ObjectProxy
5 import pylibmc
6
7 # project
8 import ddtrace
9 from ...constants import ANALYTICS_SAMPLE_RATE_KEY
10 from ...ext import memcached
11 from ...ext import net
12 from ...internal.logger import get_logger
13 from ...settings import config
14 from .addrs import parse_addresses
15
16
17 # Original Client class
18 _Client = pylibmc.Client
19
20
21 log = get_logger(__name__)
22
23
24 class TracedClient(ObjectProxy):
25 """ TracedClient is a proxy for a pylibmc.Client that times it's network operations. """
26
27 def __init__(self, client=None, service=memcached.SERVICE, tracer=None, *args, **kwargs):
28 """ Create a traced client that wraps the given memcached client.
29
30 """
31 # The client instance/service/tracer attributes are kept for compatibility
32 # with the old interface: TracedClient(client=pylibmc.Client(['localhost:11211']))
33 # TODO(Benjamin): Remove these in favor of patching.
34 if not isinstance(client, _Client):
35 # We are in the patched situation, just pass down all arguments to the pylibmc.Client
36 # Note that, in that case, client isn't a real client (just the first argument)
37 client = _Client(client, *args, **kwargs)
38 else:
39 log.warning('TracedClient instantiation is deprecated and will be remove '
40 'in future versions (0.6.0). Use patching instead (see the docs).')
41
42 super(TracedClient, self).__init__(client)
43
44 pin = ddtrace.Pin(service=service, tracer=tracer)
45 pin.onto(self)
46
47 # attempt to collect the pool of urls this client talks to
48 try:
49 self._addresses = parse_addresses(client.addresses)
50 except Exception:
51 log.debug('error setting addresses', exc_info=True)
52
53 def clone(self, *args, **kwargs):
54 # rewrap new connections.
55 cloned = self.__wrapped__.clone(*args, **kwargs)
56 traced_client = TracedClient(cloned)
57 pin = ddtrace.Pin.get_from(self)
58 if pin:
59 pin.clone().onto(traced_client)
60 return traced_client
61
62 def get(self, *args, **kwargs):
63 return self._trace_cmd('get', *args, **kwargs)
64
65 def set(self, *args, **kwargs):
66 return self._trace_cmd('set', *args, **kwargs)
67
68 def delete(self, *args, **kwargs):
69 return self._trace_cmd('delete', *args, **kwargs)
70
71 def gets(self, *args, **kwargs):
72 return self._trace_cmd('gets', *args, **kwargs)
73
74 def touch(self, *args, **kwargs):
75 return self._trace_cmd('touch', *args, **kwargs)
76
77 def cas(self, *args, **kwargs):
78 return self._trace_cmd('cas', *args, **kwargs)
79
80 def incr(self, *args, **kwargs):
81 return self._trace_cmd('incr', *args, **kwargs)
82
83 def decr(self, *args, **kwargs):
84 return self._trace_cmd('decr', *args, **kwargs)
85
86 def append(self, *args, **kwargs):
87 return self._trace_cmd('append', *args, **kwargs)
88
89 def prepend(self, *args, **kwargs):
90 return self._trace_cmd('prepend', *args, **kwargs)
91
92 def get_multi(self, *args, **kwargs):
93 return self._trace_multi_cmd('get_multi', *args, **kwargs)
94
95 def set_multi(self, *args, **kwargs):
96 return self._trace_multi_cmd('set_multi', *args, **kwargs)
97
98 def delete_multi(self, *args, **kwargs):
99 return self._trace_multi_cmd('delete_multi', *args, **kwargs)
100
101 def _trace_cmd(self, method_name, *args, **kwargs):
102 """ trace the execution of the method with the given name and will
103 patch the first arg.
104 """
105 method = getattr(self.__wrapped__, method_name)
106 with self._span(method_name) as span:
107
108 if span and args:
109 span.set_tag(memcached.QUERY, '%s %s' % (method_name, args[0]))
110
111 return method(*args, **kwargs)
112
113 def _trace_multi_cmd(self, method_name, *args, **kwargs):
114 """ trace the execution of the multi command with the given name. """
115 method = getattr(self.__wrapped__, method_name)
116 with self._span(method_name) as span:
117
118 pre = kwargs.get('key_prefix')
119 if span and pre:
120 span.set_tag(memcached.QUERY, '%s %s' % (method_name, pre))
121
122 return method(*args, **kwargs)
123
124 def _span(self, cmd_name):
125 """ Return a span timing the given command. """
126 pin = ddtrace.Pin.get_from(self)
127 if pin and pin.enabled():
128 span = pin.tracer.trace(
129 'memcached.cmd',
130 service=pin.service,
131 resource=cmd_name,
132 # TODO(Benjamin): set a better span type
133 span_type='cache')
134
135 try:
136 self._tag_span(span)
137 except Exception:
138 log.debug('error tagging span', exc_info=True)
139 return span
140
141 def _tag_span(self, span):
142 # FIXME[matt] the host selection is buried in c code. we can't tell what it's actually
143 # using, so fallback to randomly choosing one. can we do better?
144 if self._addresses:
145 _, host, port, _ = random.choice(self._addresses)
146 span.set_meta(net.TARGET_HOST, host)
147 span.set_meta(net.TARGET_PORT, port)
148
149 # set analytics sample rate
150 span.set_tag(
151 ANALYTICS_SAMPLE_RATE_KEY,
152 config.pylibmc.get_analytics_sample_rate()
153 )
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/pylibmc/client.py b/ddtrace/contrib/pylibmc/client.py
--- a/ddtrace/contrib/pylibmc/client.py
+++ b/ddtrace/contrib/pylibmc/client.py
@@ -1,3 +1,4 @@
+from contextlib import contextmanager
import random
# 3p
@@ -121,22 +122,28 @@
return method(*args, **kwargs)
+ @contextmanager
+ def _no_span(self):
+ yield None
+
def _span(self, cmd_name):
""" Return a span timing the given command. """
pin = ddtrace.Pin.get_from(self)
- if pin and pin.enabled():
- span = pin.tracer.trace(
- 'memcached.cmd',
- service=pin.service,
- resource=cmd_name,
- # TODO(Benjamin): set a better span type
- span_type='cache')
-
- try:
- self._tag_span(span)
- except Exception:
- log.debug('error tagging span', exc_info=True)
- return span
+ if not pin or not pin.enabled():
+ return self._no_span()
+
+ span = pin.tracer.trace(
+ 'memcached.cmd',
+ service=pin.service,
+ resource=cmd_name,
+ # TODO(Benjamin): set a better span type
+ span_type='cache')
+
+ try:
+ self._tag_span(span)
+ except Exception:
+ log.debug('error tagging span', exc_info=True)
+ return span
def _tag_span(self, span):
# FIXME[matt] the host selection is buried in c code. we can't tell what it's actually
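The patch works by guaranteeing that `_span` always returns something that supports the context-manager protocol: when the `Pin` is missing or disabled it now returns the no-op `_no_span()` helper instead of `None`, so `with self._span(method_name) as span:` no longer fails. A minimal, self-contained sketch of the same pattern (illustrative code, not taken from the ddtrace repository) is:

```python
from contextlib import contextmanager


@contextmanager
def _no_span():
    """No-op stand-in for a tracer span, safe to use in a `with` statement."""
    yield None


# Pre-patch behaviour: the method could hand None to a `with` statement,
# which raises AttributeError: __enter__ on Python <= 3.10
# (newer versions raise TypeError instead).
try:
    with None:
        pass
except (AttributeError, TypeError) as exc:
    print(type(exc).__name__, exc)

# Post-patch behaviour: a context manager is always returned, so the
# `with ... as span:` pattern works even when tracing is disabled.
with _no_span() as span:
    assert span is None
```

In the actual patch the enabled branch still returns a real tracer span, which is itself a context manager, so the calling `_trace_cmd` and `_trace_multi_cmd` code needs no changes.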
| {"golden_diff": "diff --git a/ddtrace/contrib/pylibmc/client.py b/ddtrace/contrib/pylibmc/client.py\n--- a/ddtrace/contrib/pylibmc/client.py\n+++ b/ddtrace/contrib/pylibmc/client.py\n@@ -1,3 +1,4 @@\n+from contextlib import contextmanager\n import random\n \n # 3p\n@@ -121,22 +122,28 @@\n \n return method(*args, **kwargs)\n \n+ @contextmanager\n+ def _no_span(self):\n+ yield None\n+\n def _span(self, cmd_name):\n \"\"\" Return a span timing the given command. \"\"\"\n pin = ddtrace.Pin.get_from(self)\n- if pin and pin.enabled():\n- span = pin.tracer.trace(\n- 'memcached.cmd',\n- service=pin.service,\n- resource=cmd_name,\n- # TODO(Benjamin): set a better span type\n- span_type='cache')\n-\n- try:\n- self._tag_span(span)\n- except Exception:\n- log.debug('error tagging span', exc_info=True)\n- return span\n+ if not pin or not pin.enabled():\n+ return self._no_span()\n+\n+ span = pin.tracer.trace(\n+ 'memcached.cmd',\n+ service=pin.service,\n+ resource=cmd_name,\n+ # TODO(Benjamin): set a better span type\n+ span_type='cache')\n+\n+ try:\n+ self._tag_span(span)\n+ except Exception:\n+ log.debug('error tagging span', exc_info=True)\n+ return span\n \n def _tag_span(self, span):\n # FIXME[matt] the host selection is buried in c code. we can't tell what it's actually\n", "issue": "500 Error with pylibmc+django+gunicorn\nHi, I'm integrating the Django cache in my app, if I run gunicorn without `ddtrace-run` everything works ok, but running with `ddtrace-run gunicorn app.wsgi:application --name app --workers 2 --timeout 30 --log-level=DEBUG --log-file=- --bind=unix:/projects/app/run/gunicorn.sock` I'm getting this error:\r\n\r\n```\r\n[2019-02-16 11:30:37 +0000] [138] [DEBUG] GET /api/accounts/me/\r\nInternal Server Error: /api/accounts/me/\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 41, in inner\r\n response = get_response(request)\r\n File \"/usr/lib/python3.6/site-packages/django/core/handlers/base.py\", line 187, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/usr/lib/python3.6/site-packages/django/core/handlers/base.py\", line 185, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/usr/lib/python3.6/site-packages/django/views/decorators/csrf.py\", line 58, in wrapped_view\r\n return view_func(*args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/django/views/generic/base.py\", line 68, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/rest_framework/views.py\", line 489, in dispatch\r\n response = self.handle_exception(exc)\r\n File \"/usr/lib/python3.6/site-packages/ddtrace/contrib/django/restframework.py\", line 26, in _traced_handle_exception\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/rest_framework/views.py\", line 449, in handle_exception\r\n self.raise_uncaught_exception(exc)\r\n File \"/usr/lib/python3.6/site-packages/rest_framework/views.py\", line 486, in dispatch\r\n response = handler(request, *args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/rest_framework/generics.py\", line 201, in get\r\n return self.list(request, *args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/django/utils/decorators.py\", line 67, in _wrapper\r\n return bound_func(*args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/django/utils/decorators.py\", line 141, in _wrapped_view\r\n result = 
middleware.process_request(request)\r\n File \"/usr/lib/python3.6/site-packages/django/middleware/cache.py\", line 133, in process_request\r\n cache_key = get_cache_key(request, self.key_prefix, 'GET', cache=self.cache)\r\n File \"/usr/lib/python3.6/site-packages/django/utils/cache.py\", line 362, in get_cache_key\r\n headerlist = cache.get(cache_key)\r\n File \"/usr/lib/python3.6/site-packages/ddtrace/contrib/django/cache.py\", line 67, in wrapped\r\n return method(*args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/django/core/cache/backends/memcached.py\", line 79, in get\r\n val = self._cache.get(key)\r\n File \"/usr/lib/python3.6/site-packages/ddtrace/contrib/pylibmc/client.py\", line 72, in get\r\n return self._trace_cmd(\"get\", *args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/ddtrace/contrib/pylibmc/client.py\", line 115, in _trace_cmd\r\n with self._span(method_name) as span:\r\nAttributeError: __enter__\r\n```\r\n\r\nMy lib versions:\r\n```\r\nddtrace==0.20.4\r\nDjango==1.11.12\r\ngunicorn==19.4.5\r\npylibmc==1.6.0\r\n```\r\n\r\nAnd my settings:\r\n```\r\nCACHES = {\r\n 'default': {\r\n 'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',\r\n 'LOCATION': os.environ.get('MEMCACHED_HOST', 'memcached:11211'),\r\n }\r\n}\r\n```\r\nAny idea?\r\n\r\nThanks!\n", "before_files": [{"content": "import random\n\n# 3p\nfrom ddtrace.vendor.wrapt import ObjectProxy\nimport pylibmc\n\n# project\nimport ddtrace\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...ext import memcached\nfrom ...ext import net\nfrom ...internal.logger import get_logger\nfrom ...settings import config\nfrom .addrs import parse_addresses\n\n\n# Original Client class\n_Client = pylibmc.Client\n\n\nlog = get_logger(__name__)\n\n\nclass TracedClient(ObjectProxy):\n \"\"\" TracedClient is a proxy for a pylibmc.Client that times it's network operations. \"\"\"\n\n def __init__(self, client=None, service=memcached.SERVICE, tracer=None, *args, **kwargs):\n \"\"\" Create a traced client that wraps the given memcached client.\n\n \"\"\"\n # The client instance/service/tracer attributes are kept for compatibility\n # with the old interface: TracedClient(client=pylibmc.Client(['localhost:11211']))\n # TODO(Benjamin): Remove these in favor of patching.\n if not isinstance(client, _Client):\n # We are in the patched situation, just pass down all arguments to the pylibmc.Client\n # Note that, in that case, client isn't a real client (just the first argument)\n client = _Client(client, *args, **kwargs)\n else:\n log.warning('TracedClient instantiation is deprecated and will be remove '\n 'in future versions (0.6.0). 
Use patching instead (see the docs).')\n\n super(TracedClient, self).__init__(client)\n\n pin = ddtrace.Pin(service=service, tracer=tracer)\n pin.onto(self)\n\n # attempt to collect the pool of urls this client talks to\n try:\n self._addresses = parse_addresses(client.addresses)\n except Exception:\n log.debug('error setting addresses', exc_info=True)\n\n def clone(self, *args, **kwargs):\n # rewrap new connections.\n cloned = self.__wrapped__.clone(*args, **kwargs)\n traced_client = TracedClient(cloned)\n pin = ddtrace.Pin.get_from(self)\n if pin:\n pin.clone().onto(traced_client)\n return traced_client\n\n def get(self, *args, **kwargs):\n return self._trace_cmd('get', *args, **kwargs)\n\n def set(self, *args, **kwargs):\n return self._trace_cmd('set', *args, **kwargs)\n\n def delete(self, *args, **kwargs):\n return self._trace_cmd('delete', *args, **kwargs)\n\n def gets(self, *args, **kwargs):\n return self._trace_cmd('gets', *args, **kwargs)\n\n def touch(self, *args, **kwargs):\n return self._trace_cmd('touch', *args, **kwargs)\n\n def cas(self, *args, **kwargs):\n return self._trace_cmd('cas', *args, **kwargs)\n\n def incr(self, *args, **kwargs):\n return self._trace_cmd('incr', *args, **kwargs)\n\n def decr(self, *args, **kwargs):\n return self._trace_cmd('decr', *args, **kwargs)\n\n def append(self, *args, **kwargs):\n return self._trace_cmd('append', *args, **kwargs)\n\n def prepend(self, *args, **kwargs):\n return self._trace_cmd('prepend', *args, **kwargs)\n\n def get_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('get_multi', *args, **kwargs)\n\n def set_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('set_multi', *args, **kwargs)\n\n def delete_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('delete_multi', *args, **kwargs)\n\n def _trace_cmd(self, method_name, *args, **kwargs):\n \"\"\" trace the execution of the method with the given name and will\n patch the first arg.\n \"\"\"\n method = getattr(self.__wrapped__, method_name)\n with self._span(method_name) as span:\n\n if span and args:\n span.set_tag(memcached.QUERY, '%s %s' % (method_name, args[0]))\n\n return method(*args, **kwargs)\n\n def _trace_multi_cmd(self, method_name, *args, **kwargs):\n \"\"\" trace the execution of the multi command with the given name. \"\"\"\n method = getattr(self.__wrapped__, method_name)\n with self._span(method_name) as span:\n\n pre = kwargs.get('key_prefix')\n if span and pre:\n span.set_tag(memcached.QUERY, '%s %s' % (method_name, pre))\n\n return method(*args, **kwargs)\n\n def _span(self, cmd_name):\n \"\"\" Return a span timing the given command. \"\"\"\n pin = ddtrace.Pin.get_from(self)\n if pin and pin.enabled():\n span = pin.tracer.trace(\n 'memcached.cmd',\n service=pin.service,\n resource=cmd_name,\n # TODO(Benjamin): set a better span type\n span_type='cache')\n\n try:\n self._tag_span(span)\n except Exception:\n log.debug('error tagging span', exc_info=True)\n return span\n\n def _tag_span(self, span):\n # FIXME[matt] the host selection is buried in c code. we can't tell what it's actually\n # using, so fallback to randomly choosing one. 
can we do better?\n if self._addresses:\n _, host, port, _ = random.choice(self._addresses)\n span.set_meta(net.TARGET_HOST, host)\n span.set_meta(net.TARGET_PORT, port)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.pylibmc.get_analytics_sample_rate()\n )\n", "path": "ddtrace/contrib/pylibmc/client.py"}], "after_files": [{"content": "from contextlib import contextmanager\nimport random\n\n# 3p\nfrom ddtrace.vendor.wrapt import ObjectProxy\nimport pylibmc\n\n# project\nimport ddtrace\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...ext import memcached\nfrom ...ext import net\nfrom ...internal.logger import get_logger\nfrom ...settings import config\nfrom .addrs import parse_addresses\n\n\n# Original Client class\n_Client = pylibmc.Client\n\n\nlog = get_logger(__name__)\n\n\nclass TracedClient(ObjectProxy):\n \"\"\" TracedClient is a proxy for a pylibmc.Client that times it's network operations. \"\"\"\n\n def __init__(self, client=None, service=memcached.SERVICE, tracer=None, *args, **kwargs):\n \"\"\" Create a traced client that wraps the given memcached client.\n\n \"\"\"\n # The client instance/service/tracer attributes are kept for compatibility\n # with the old interface: TracedClient(client=pylibmc.Client(['localhost:11211']))\n # TODO(Benjamin): Remove these in favor of patching.\n if not isinstance(client, _Client):\n # We are in the patched situation, just pass down all arguments to the pylibmc.Client\n # Note that, in that case, client isn't a real client (just the first argument)\n client = _Client(client, *args, **kwargs)\n else:\n log.warning('TracedClient instantiation is deprecated and will be remove '\n 'in future versions (0.6.0). Use patching instead (see the docs).')\n\n super(TracedClient, self).__init__(client)\n\n pin = ddtrace.Pin(service=service, tracer=tracer)\n pin.onto(self)\n\n # attempt to collect the pool of urls this client talks to\n try:\n self._addresses = parse_addresses(client.addresses)\n except Exception:\n log.debug('error setting addresses', exc_info=True)\n\n def clone(self, *args, **kwargs):\n # rewrap new connections.\n cloned = self.__wrapped__.clone(*args, **kwargs)\n traced_client = TracedClient(cloned)\n pin = ddtrace.Pin.get_from(self)\n if pin:\n pin.clone().onto(traced_client)\n return traced_client\n\n def get(self, *args, **kwargs):\n return self._trace_cmd('get', *args, **kwargs)\n\n def set(self, *args, **kwargs):\n return self._trace_cmd('set', *args, **kwargs)\n\n def delete(self, *args, **kwargs):\n return self._trace_cmd('delete', *args, **kwargs)\n\n def gets(self, *args, **kwargs):\n return self._trace_cmd('gets', *args, **kwargs)\n\n def touch(self, *args, **kwargs):\n return self._trace_cmd('touch', *args, **kwargs)\n\n def cas(self, *args, **kwargs):\n return self._trace_cmd('cas', *args, **kwargs)\n\n def incr(self, *args, **kwargs):\n return self._trace_cmd('incr', *args, **kwargs)\n\n def decr(self, *args, **kwargs):\n return self._trace_cmd('decr', *args, **kwargs)\n\n def append(self, *args, **kwargs):\n return self._trace_cmd('append', *args, **kwargs)\n\n def prepend(self, *args, **kwargs):\n return self._trace_cmd('prepend', *args, **kwargs)\n\n def get_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('get_multi', *args, **kwargs)\n\n def set_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('set_multi', *args, **kwargs)\n\n def delete_multi(self, *args, **kwargs):\n return self._trace_multi_cmd('delete_multi', *args, **kwargs)\n\n def 
_trace_cmd(self, method_name, *args, **kwargs):\n \"\"\" trace the execution of the method with the given name and will\n patch the first arg.\n \"\"\"\n method = getattr(self.__wrapped__, method_name)\n with self._span(method_name) as span:\n\n if span and args:\n span.set_tag(memcached.QUERY, '%s %s' % (method_name, args[0]))\n\n return method(*args, **kwargs)\n\n def _trace_multi_cmd(self, method_name, *args, **kwargs):\n \"\"\" trace the execution of the multi command with the given name. \"\"\"\n method = getattr(self.__wrapped__, method_name)\n with self._span(method_name) as span:\n\n pre = kwargs.get('key_prefix')\n if span and pre:\n span.set_tag(memcached.QUERY, '%s %s' % (method_name, pre))\n\n return method(*args, **kwargs)\n\n @contextmanager\n def _no_span(self):\n yield None\n\n def _span(self, cmd_name):\n \"\"\" Return a span timing the given command. \"\"\"\n pin = ddtrace.Pin.get_from(self)\n if not pin or not pin.enabled():\n return self._no_span()\n\n span = pin.tracer.trace(\n 'memcached.cmd',\n service=pin.service,\n resource=cmd_name,\n # TODO(Benjamin): set a better span type\n span_type='cache')\n\n try:\n self._tag_span(span)\n except Exception:\n log.debug('error tagging span', exc_info=True)\n return span\n\n def _tag_span(self, span):\n # FIXME[matt] the host selection is buried in c code. we can't tell what it's actually\n # using, so fallback to randomly choosing one. can we do better?\n if self._addresses:\n _, host, port, _ = random.choice(self._addresses)\n span.set_meta(net.TARGET_HOST, host)\n span.set_meta(net.TARGET_PORT, port)\n\n # set analytics sample rate\n span.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.pylibmc.get_analytics_sample_rate()\n )\n", "path": "ddtrace/contrib/pylibmc/client.py"}]} | 2,902 | 385 |
gh_patches_debug_26402 | rasdani/github-patches | git_diff | statsmodels__statsmodels-6654 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle pathlib paths with `get_file_obj`
#### Is your feature request related to a problem? Please describe
I'm trying to read a statsmodels.tsa.statespace.sarimax.SARIMAXResults pickle file I've saved. I'd like to pass a pathlib.Path object as I find it easier to handle (I work with different machines, Mac and Windows).
#### Describe the solution you'd like
The function `open` already supports pathlib objects. Why not using it?
You could replace `statsmodels.iolib.openfile.get_file_obj` with:
```python
if _is_string_like(fname):
return _open(fname, mode, encoding)
try: # try opening it with 'open'
return open(fname, mode, encoding)
except TypeError: # else, maybe return context manager (please check if it's always TypeError)
try:
# Make sure the object has the write methods
if 'r' in mode:
fname.read
if 'w' in mode or 'a' in mode:
fname.write
except AttributeError:
raise ValueError('fname must be a string or a file-like object')
return EmptyContextManager(fname)
```
This is just a suggestion though :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/iolib/openfile.py`
Content:
```
1 """
2 Handle file opening for read/write
3 """
4 from numpy.lib._iotools import _is_string_like
5
6
7 class EmptyContextManager(object):
8 """
9 This class is needed to allow file-like object to be used as
10 context manager, but without getting closed.
11 """
12 def __init__(self, obj):
13 self._obj = obj
14
15 def __enter__(self):
16 '''When entering, return the embedded object'''
17 return self._obj
18
19 def __exit__(self, *args):
20 '''Do not hide anything'''
21 return False
22
23 def __getattr__(self, name):
24 return getattr(self._obj, name)
25
26
27 def _open(fname, mode, encoding):
28 if fname.endswith('.gz'):
29 import gzip
30 return gzip.open(fname, mode, encoding=encoding)
31 else:
32 return open(fname, mode, encoding=encoding)
33
34
35 def get_file_obj(fname, mode='r', encoding=None):
36 """
37 Light wrapper to handle strings and let files (anything else) pass through.
38
39 It also handle '.gz' files.
40
41 Parameters
42 ----------
43 fname : str or file-like object
44 File to open / forward
45 mode : str
46 Argument passed to the 'open' or 'gzip.open' function
47 encoding : str
48 For Python 3 only, specify the encoding of the file
49
50 Returns
51 -------
52 A file-like object that is always a context-manager. If the `fname` was
53 already a file-like object, the returned context manager *will not
54 close the file*.
55 """
56 if _is_string_like(fname):
57 return _open(fname, mode, encoding)
58 try:
59 # Make sure the object has the write methods
60 if 'r' in mode:
61 fname.read
62 if 'w' in mode or 'a' in mode:
63 fname.write
64 except AttributeError:
65 raise ValueError('fname must be a string or a file-like object')
66 return EmptyContextManager(fname)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/statsmodels/iolib/openfile.py b/statsmodels/iolib/openfile.py
--- a/statsmodels/iolib/openfile.py
+++ b/statsmodels/iolib/openfile.py
@@ -34,13 +34,14 @@
def get_file_obj(fname, mode='r', encoding=None):
"""
- Light wrapper to handle strings and let files (anything else) pass through.
+ Light wrapper to handle strings, path objects and let files (anything else)
+ pass through.
It also handle '.gz' files.
Parameters
----------
- fname : str or file-like object
+ fname : str, path object or file-like object
File to open / forward
mode : str
Argument passed to the 'open' or 'gzip.open' function
@@ -56,11 +57,14 @@
if _is_string_like(fname):
return _open(fname, mode, encoding)
try:
- # Make sure the object has the write methods
- if 'r' in mode:
- fname.read
- if 'w' in mode or 'a' in mode:
- fname.write
- except AttributeError:
- raise ValueError('fname must be a string or a file-like object')
- return EmptyContextManager(fname)
+ return open(fname, mode, encoding=encoding) # handle pathlib-like objs
+ except TypeError:
+ try:
+ # Make sure the object has the write methods
+ if "r" in mode:
+ fname.read
+ if "w" in mode or "a" in mode:
+ fname.write
+ except AttributeError:
+ raise ValueError("fname must be a string or a file-like object")
+ return EmptyContextManager(fname)
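After this change `get_file_obj` keeps the string/`.gz` branch, then simply hands anything else to the built-in `open`, which accepts `pathlib.Path` objects; only when `open` raises `TypeError` does it fall back to treating `fname` as an already-open file-like object wrapped in `EmptyContextManager`. A short usage sketch (the file name is made up, and it assumes a statsmodels build that includes this patch):

```python
import pickle
import tempfile
from pathlib import Path

from statsmodels.iolib.openfile import get_file_obj

path = Path(tempfile.gettempdir()) / "sarimax_results_demo.pkl"  # illustrative path

# pathlib.Path objects now work the same way plain string paths always did.
with get_file_obj(path, "wb") as handle:
    pickle.dump({"params": [1.0, 2.0]}, handle)

# Already-open file objects are still passed through and are *not* closed.
with path.open("rb") as raw:
    with get_file_obj(raw, "rb") as handle:
        print(pickle.load(handle))
    assert not raw.closed
```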
| {"golden_diff": "diff --git a/statsmodels/iolib/openfile.py b/statsmodels/iolib/openfile.py\n--- a/statsmodels/iolib/openfile.py\n+++ b/statsmodels/iolib/openfile.py\n@@ -34,13 +34,14 @@\n \n def get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n- Light wrapper to handle strings and let files (anything else) pass through.\n+ Light wrapper to handle strings, path objects and let files (anything else)\n+ pass through.\n \n It also handle '.gz' files.\n \n Parameters\n ----------\n- fname : str or file-like object\n+ fname : str, path object or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n@@ -56,11 +57,14 @@\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n- # Make sure the object has the write methods\n- if 'r' in mode:\n- fname.read\n- if 'w' in mode or 'a' in mode:\n- fname.write\n- except AttributeError:\n- raise ValueError('fname must be a string or a file-like object')\n- return EmptyContextManager(fname)\n+ return open(fname, mode, encoding=encoding) # handle pathlib-like objs\n+ except TypeError:\n+ try:\n+ # Make sure the object has the write methods\n+ if \"r\" in mode:\n+ fname.read\n+ if \"w\" in mode or \"a\" in mode:\n+ fname.write\n+ except AttributeError:\n+ raise ValueError(\"fname must be a string or a file-like object\")\n+ return EmptyContextManager(fname)\n", "issue": "Handle pathlib paths with `get_file_obj`\n#### Is your feature request related to a problem? Please describe\r\nI'm trying to read a statsmodels.tsa.statespace.sarimax.SARIMAXResults pickle file I've saved. I'd like to pass a pathlib.Path object as I find it easier to handle (I work with different machines, Mac and Windows).\r\n\r\n#### Describe the solution you'd like\r\nThe function `open` already supports pathlib objects. 
Why not using it?\r\n\r\nYou could replace `statsmodels.iolib.openfile.get_file_obj` with:\r\n\r\n```python\r\nif _is_string_like(fname):\r\n return _open(fname, mode, encoding)\r\ntry: # try opening it with 'open'\r\n return open(fname, mode, encoding)\r\nexcept TypeError: # else, maybe return context manager (please check if it's always TypeError)\r\n try:\r\n # Make sure the object has the write methods\r\n if 'r' in mode:\r\n fname.read\r\n if 'w' in mode or 'a' in mode:\r\n fname.write\r\n except AttributeError:\r\n raise ValueError('fname must be a string or a file-like object')\r\n return EmptyContextManager(fname)\r\n```\r\n\r\nThis is just a suggestion though :)\n", "before_files": [{"content": "\"\"\"\nHandle file opening for read/write\n\"\"\"\nfrom numpy.lib._iotools import _is_string_like\n\n\nclass EmptyContextManager(object):\n \"\"\"\n This class is needed to allow file-like object to be used as\n context manager, but without getting closed.\n \"\"\"\n def __init__(self, obj):\n self._obj = obj\n\n def __enter__(self):\n '''When entering, return the embedded object'''\n return self._obj\n\n def __exit__(self, *args):\n '''Do not hide anything'''\n return False\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n\ndef _open(fname, mode, encoding):\n if fname.endswith('.gz'):\n import gzip\n return gzip.open(fname, mode, encoding=encoding)\n else:\n return open(fname, mode, encoding=encoding)\n\n\ndef get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n Light wrapper to handle strings and let files (anything else) pass through.\n\n It also handle '.gz' files.\n\n Parameters\n ----------\n fname : str or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n encoding : str\n For Python 3 only, specify the encoding of the file\n\n Returns\n -------\n A file-like object that is always a context-manager. 
If the `fname` was\n already a file-like object, the returned context manager *will not\n close the file*.\n \"\"\"\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n # Make sure the object has the write methods\n if 'r' in mode:\n fname.read\n if 'w' in mode or 'a' in mode:\n fname.write\n except AttributeError:\n raise ValueError('fname must be a string or a file-like object')\n return EmptyContextManager(fname)\n", "path": "statsmodels/iolib/openfile.py"}], "after_files": [{"content": "\"\"\"\nHandle file opening for read/write\n\"\"\"\nfrom numpy.lib._iotools import _is_string_like\n\n\nclass EmptyContextManager(object):\n \"\"\"\n This class is needed to allow file-like object to be used as\n context manager, but without getting closed.\n \"\"\"\n def __init__(self, obj):\n self._obj = obj\n\n def __enter__(self):\n '''When entering, return the embedded object'''\n return self._obj\n\n def __exit__(self, *args):\n '''Do not hide anything'''\n return False\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n\ndef _open(fname, mode, encoding):\n if fname.endswith('.gz'):\n import gzip\n return gzip.open(fname, mode, encoding=encoding)\n else:\n return open(fname, mode, encoding=encoding)\n\n\ndef get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n Light wrapper to handle strings, path objects and let files (anything else)\n pass through.\n\n It also handle '.gz' files.\n\n Parameters\n ----------\n fname : str, path object or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n encoding : str\n For Python 3 only, specify the encoding of the file\n\n Returns\n -------\n A file-like object that is always a context-manager. If the `fname` was\n already a file-like object, the returned context manager *will not\n close the file*.\n \"\"\"\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n return open(fname, mode, encoding=encoding) # handle pathlib-like objs\n except TypeError:\n try:\n # Make sure the object has the write methods\n if \"r\" in mode:\n fname.read\n if \"w\" in mode or \"a\" in mode:\n fname.write\n except AttributeError:\n raise ValueError(\"fname must be a string or a file-like object\")\n return EmptyContextManager(fname)\n", "path": "statsmodels/iolib/openfile.py"}]} | 1,067 | 388 |
gh_patches_debug_29404 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ID sequence isn't being created for tables
## Description
<!-- A clear and concise description of what the bug is. -->
I cannot add a new record to a new table because the ID does not have a default value. I set up this table by:
1. Importing [library_books.tsv.zip](https://github.com/centerofci/mathesar/files/10273401/library_books.tsv.zip) as "Items"
2. Pulling "Author First Name" and "Author First Name" into an "Authors" table
3. Moving "Author Website" to "Authors" table
4. Pulling "Publishers" into a "Publishers" table
5. Pulling "Title", "Publication Year", "ISBN", "Authors_id", and "Publishers_id" into a "Books" table
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
I should be able to add a new record!
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
~Try adding a new record on https://library.mathesar.org/mathesar_tables/4/tables/34/~
I needed to nuke this table since it was blocking my work. To reproduce, follow the steps under Description.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
This seems related to
- #2116
- #1356
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/tables/operations/split.py`
Content:
```
1 from sqlalchemy import exists, func, literal, select
2
3 from db import constants
4 from db.columns.base import MathesarColumn
5 from db.columns.operations.alter import batch_alter_table_drop_columns
6 from db.columns.operations.select import get_column_names_from_attnums
7 from db.links.operations.create import create_foreign_key_link
8 from db.tables.operations.create import create_mathesar_table
9 from db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid
10 from db.metadata import get_empty_metadata
11
12
13 def _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):
14 extracted_table = create_mathesar_table(
15 extracted_table_name,
16 schema,
17 extracted_columns,
18 engine,
19 )
20 fk_column_name = fk_column_name if fk_column_name else f"{extracted_table.name}_{constants.ID}"
21 remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)
22 extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)
23 create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)
24 # TODO reuse metadata
25 remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())
26 return extracted_table, remainder_table_with_fk_key, fk_column_name
27
28
29 def _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):
30 SPLIT_ID = f"{constants.MATHESAR_PREFIX}_split_column_alias"
31 extracted_column_names = [col.name for col in extracted_columns]
32 split_cte = select(
33 [
34 old_table,
35 func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)
36 ]
37 ).cte()
38 cte_extraction_columns = (
39 [split_cte.columns[SPLIT_ID]]
40 + [split_cte.columns[n] for n in extracted_column_names]
41 )
42 extract_sel = select(
43 cte_extraction_columns,
44 distinct=True
45 )
46 extract_ins_cte = (
47 extracted_table
48 .insert()
49 .from_select([constants.ID] + extracted_column_names, extract_sel)
50 .returning(literal(1))
51 .cte()
52 )
53 fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}
54 split_ins = (
55 old_table
56 .update().values(**fk_update_dict).
57 where(old_table.c[constants.ID] == split_cte.c[constants.ID],
58 exists(extract_ins_cte.select()))
59 )
60 return split_ins
61
62
63 def extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):
64 # TODO reuse metadata
65 old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())
66 old_table_name = old_table.name
67 old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)
68 old_non_default_columns = [
69 col for col in old_columns if not col.is_default
70 ]
71 # TODO reuse metadata
72 extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())
73 extracted_columns = [
74 col for col in old_non_default_columns if col.name in extracted_column_names
75 ]
76 with engine.begin() as conn:
77 extracted_table, remainder_table_with_fk_column, fk_column_name = _create_split_tables(
78 extracted_table_name,
79 extracted_columns,
80 old_table_name,
81 schema,
82 engine,
83 relationship_fk_column_name
84 )
85 split_ins = _create_split_insert_stmt(
86 remainder_table_with_fk_column,
87 extracted_table,
88 extracted_columns,
89 fk_column_name,
90 )
91 conn.execute(split_ins)
92 remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)
93 deletion_column_data = [
94 {'attnum': column_attnum, 'delete': True}
95 for column_attnum in extracted_column_attnums
96 ]
97 batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)
98 return extracted_table, remainder_table_with_fk_column, fk_column_name
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/tables/operations/split.py b/db/tables/operations/split.py
--- a/db/tables/operations/split.py
+++ b/db/tables/operations/split.py
@@ -89,6 +89,8 @@
fk_column_name,
)
conn.execute(split_ins)
+ update_pk_sequence_to_latest(conn, engine, extracted_table)
+
remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)
deletion_column_data = [
{'attnum': column_attnum, 'delete': True}
@@ -96,3 +98,28 @@
]
batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)
return extracted_table, remainder_table_with_fk_column, fk_column_name
+
+
+def update_pk_sequence_to_latest(conn, engine, extracted_table):
+ _preparer = engine.dialect.identifier_preparer
+ quoted_table_name = _preparer.quote(extracted_table.schema) + "." + _preparer.quote(extracted_table.name)
+ update_pk_sequence_stmt = func.setval(
+ # `pg_get_serial_sequence needs a string of the Table name
+ func.pg_get_serial_sequence(
+ quoted_table_name,
+ extracted_table.c[constants.ID].name
+ ),
+ # If the table can be empty, start from 1 instead of using Null
+ func.coalesce(
+ func.max(extracted_table.c[constants.ID]) + 1,
+ 1
+ ),
+ # Set the sequence to use the last value of the sequence
+ # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.
+ # We need to do it as our default coalesce value is 1 instead of 0
+ # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html
+ False
+ )
+ conn.execute(
+ select(update_pk_sequence_stmt)
+ )
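The new `update_pk_sequence_to_latest` step is what brings the extracted table's `id` sequence in line with the rows that were just bulk-inserted: because the insert supplies explicit `id` values, it never advances the serial sequence, so the patch calls PostgreSQL's `setval(pg_get_serial_sequence(...), coalesce(max(id) + 1, 1), false)` to make the next `nextval` start above the inserted rows. A rough SQLAlchemy sketch of that call, with made-up table and connection details (not Mathesar code):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, func, select

engine = create_engine("postgresql+psycopg2://user:pass@localhost/db")  # placeholder DSN

metadata = MetaData()
authors = Table(  # illustrative stand-in for an extracted table
    "Authors",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    schema="library",
)

with engine.begin() as conn:
    conn.execute(
        select(
            func.setval(
                # pg_get_serial_sequence takes the quoted table name as a string.
                func.pg_get_serial_sequence('"library"."Authors"', "id"),
                # Jump to max(id) + 1, or 1 if the table is empty.
                func.coalesce(func.max(authors.c.id) + 1, 1),
                # is_called=False: the next nextval() returns this value itself.
                False,
            )
        )
    )
```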
| {"golden_diff": "diff --git a/db/tables/operations/split.py b/db/tables/operations/split.py\n--- a/db/tables/operations/split.py\n+++ b/db/tables/operations/split.py\n@@ -89,6 +89,8 @@\n fk_column_name,\n )\n conn.execute(split_ins)\n+ update_pk_sequence_to_latest(conn, engine, extracted_table)\n+\n remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n@@ -96,3 +98,28 @@\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n+\n+\n+def update_pk_sequence_to_latest(conn, engine, extracted_table):\n+ _preparer = engine.dialect.identifier_preparer\n+ quoted_table_name = _preparer.quote(extracted_table.schema) + \".\" + _preparer.quote(extracted_table.name)\n+ update_pk_sequence_stmt = func.setval(\n+ # `pg_get_serial_sequence needs a string of the Table name\n+ func.pg_get_serial_sequence(\n+ quoted_table_name,\n+ extracted_table.c[constants.ID].name\n+ ),\n+ # If the table can be empty, start from 1 instead of using Null\n+ func.coalesce(\n+ func.max(extracted_table.c[constants.ID]) + 1,\n+ 1\n+ ),\n+ # Set the sequence to use the last value of the sequence\n+ # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.\n+ # We need to do it as our default coalesce value is 1 instead of 0\n+ # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html\n+ False\n+ )\n+ conn.execute(\n+ select(update_pk_sequence_stmt)\n+ )\n", "issue": "ID sequence isn't being created for tables\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\nI cannot add a new record to a new table because the ID does not have a default value. I set up this table by:\r\n1. Importing [library_books.tsv.zip](https://github.com/centerofci/mathesar/files/10273401/library_books.tsv.zip) as \"Items\"\r\n2. Pulling \"Author First Name\" and \"Author First Name\" into an \"Authors\" table\r\n3. Moving \"Author Website\" to \"Authors\" table\r\n4. Pulling \"Publishers\" into a \"Publishers\" table\r\n5. Pulling \"Title\", \"Publication Year\", \"ISBN\", \"Authors_id\", and \"Publishers_id\" into a \"Books\" table\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nI should be able to add a new record!\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n~Try adding a new record on https://library.mathesar.org/mathesar_tables/4/tables/34/~\r\n\r\nI needed to nuke this table since it was blocking my work. To reproduce, follow the steps under Description.\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. 
-->\r\nThis seems related to\r\n- #2116 \r\n- #1356 \r\n\n", "before_files": [{"content": "from sqlalchemy import exists, func, literal, select\n\nfrom db import constants\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.alter import batch_alter_table_drop_columns\nfrom db.columns.operations.select import get_column_names_from_attnums\nfrom db.links.operations.create import create_foreign_key_link\nfrom db.tables.operations.create import create_mathesar_table\nfrom db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid\nfrom db.metadata import get_empty_metadata\n\n\ndef _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):\n extracted_table = create_mathesar_table(\n extracted_table_name,\n schema,\n extracted_columns,\n engine,\n )\n fk_column_name = fk_column_name if fk_column_name else f\"{extracted_table.name}_{constants.ID}\"\n remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)\n extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)\n create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)\n # TODO reuse metadata\n remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())\n return extracted_table, remainder_table_with_fk_key, fk_column_name\n\n\ndef _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):\n SPLIT_ID = f\"{constants.MATHESAR_PREFIX}_split_column_alias\"\n extracted_column_names = [col.name for col in extracted_columns]\n split_cte = select(\n [\n old_table,\n func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)\n ]\n ).cte()\n cte_extraction_columns = (\n [split_cte.columns[SPLIT_ID]]\n + [split_cte.columns[n] for n in extracted_column_names]\n )\n extract_sel = select(\n cte_extraction_columns,\n distinct=True\n )\n extract_ins_cte = (\n extracted_table\n .insert()\n .from_select([constants.ID] + extracted_column_names, extract_sel)\n .returning(literal(1))\n .cte()\n )\n fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}\n split_ins = (\n old_table\n .update().values(**fk_update_dict).\n where(old_table.c[constants.ID] == split_cte.c[constants.ID],\n exists(extract_ins_cte.select()))\n )\n return split_ins\n\n\ndef extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):\n # TODO reuse metadata\n old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())\n old_table_name = old_table.name\n old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)\n old_non_default_columns = [\n col for col in old_columns if not col.is_default\n ]\n # TODO reuse metadata\n extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())\n extracted_columns = [\n col for col in old_non_default_columns if col.name in extracted_column_names\n ]\n with engine.begin() as conn:\n extracted_table, remainder_table_with_fk_column, fk_column_name = _create_split_tables(\n extracted_table_name,\n extracted_columns,\n old_table_name,\n schema,\n engine,\n relationship_fk_column_name\n )\n split_ins = _create_split_insert_stmt(\n remainder_table_with_fk_column,\n extracted_table,\n extracted_columns,\n fk_column_name,\n )\n conn.execute(split_ins)\n 
remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n for column_attnum in extracted_column_attnums\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n", "path": "db/tables/operations/split.py"}], "after_files": [{"content": "from sqlalchemy import exists, func, literal, select\n\nfrom db import constants\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.alter import batch_alter_table_drop_columns\nfrom db.columns.operations.select import get_column_names_from_attnums\nfrom db.links.operations.create import create_foreign_key_link\nfrom db.tables.operations.create import create_mathesar_table\nfrom db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid\nfrom db.metadata import get_empty_metadata\n\n\ndef _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):\n extracted_table = create_mathesar_table(\n extracted_table_name,\n schema,\n extracted_columns,\n engine,\n )\n fk_column_name = fk_column_name if fk_column_name else f\"{extracted_table.name}_{constants.ID}\"\n remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)\n extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)\n create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)\n # TODO reuse metadata\n remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())\n return extracted_table, remainder_table_with_fk_key, fk_column_name\n\n\ndef _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):\n SPLIT_ID = f\"{constants.MATHESAR_PREFIX}_split_column_alias\"\n extracted_column_names = [col.name for col in extracted_columns]\n split_cte = select(\n [\n old_table,\n func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)\n ]\n ).cte()\n cte_extraction_columns = (\n [split_cte.columns[SPLIT_ID]]\n + [split_cte.columns[n] for n in extracted_column_names]\n )\n extract_sel = select(\n cte_extraction_columns,\n distinct=True\n )\n extract_ins_cte = (\n extracted_table\n .insert()\n .from_select([constants.ID] + extracted_column_names, extract_sel)\n .returning(literal(1))\n .cte()\n )\n fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}\n split_ins = (\n old_table\n .update().values(**fk_update_dict).\n where(old_table.c[constants.ID] == split_cte.c[constants.ID],\n exists(extract_ins_cte.select()))\n )\n return split_ins\n\n\ndef extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):\n # TODO reuse metadata\n old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())\n old_table_name = old_table.name\n old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)\n old_non_default_columns = [\n col for col in old_columns if not col.is_default\n ]\n # TODO reuse metadata\n extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())\n extracted_columns = [\n col for col in old_non_default_columns if col.name in extracted_column_names\n ]\n with engine.begin() as conn:\n extracted_table, 
remainder_table_with_fk_column, fk_column_name = _create_split_tables(\n extracted_table_name,\n extracted_columns,\n old_table_name,\n schema,\n engine,\n relationship_fk_column_name\n )\n split_ins = _create_split_insert_stmt(\n remainder_table_with_fk_column,\n extracted_table,\n extracted_columns,\n fk_column_name,\n )\n conn.execute(split_ins)\n update_pk_sequence_to_latest(conn, engine, extracted_table)\n\n remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n for column_attnum in extracted_column_attnums\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n\n\ndef update_pk_sequence_to_latest(conn, engine, extracted_table):\n _preparer = engine.dialect.identifier_preparer\n quoted_table_name = _preparer.quote(extracted_table.schema) + \".\" + _preparer.quote(extracted_table.name)\n update_pk_sequence_stmt = func.setval(\n # `pg_get_serial_sequence needs a string of the Table name\n func.pg_get_serial_sequence(\n quoted_table_name,\n extracted_table.c[constants.ID].name\n ),\n # If the table can be empty, start from 1 instead of using Null\n func.coalesce(\n func.max(extracted_table.c[constants.ID]) + 1,\n 1\n ),\n # Set the sequence to use the last value of the sequence\n # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.\n # We need to do it as our default coalesce value is 1 instead of 0\n # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html\n False\n )\n conn.execute(\n select(update_pk_sequence_stmt)\n )\n", "path": "db/tables/operations/split.py"}]} | 1,674 | 443 |
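The `func.setval(...)` expression in the diff above is easier to read as the SQL it ultimately issues. The sketch below shows a roughly equivalent statement executed directly with SQLAlchemy; the "library"."Authors" table and the `id` column are illustrative placeholders, since the real names come from the extracted table at runtime.

```python
# Hedged sketch: the effect of update_pk_sequence_to_latest(), written as plain SQL.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mathesar")  # placeholder DSN
with engine.begin() as conn:
    conn.execute(text("""
        SELECT setval(
            pg_get_serial_sequence('"library"."Authors"', 'id'),
            coalesce(max(id) + 1, 1),  -- start from 1 when the table is empty
            false                      -- is_called = false: nextval() returns this value as-is
        ) FROM "library"."Authors"
    """))
```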
gh_patches_debug_30557 | rasdani/github-patches | git_diff | enthought__chaco-538 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use the latest EDM version (3.0.1)
A new major version of EDM is available and it would be good to update to the latest version - https://assets.enthought.com/downloads/edm/. This should be as simple as updating the environment variables in travis and appveyor config files.
https://github.com/enthought/chaco/blob/3de7780561fa29e79c887432d3ce408ea82d1614/.travis.yml#L23
https://github.com/enthought/chaco/blob/3de7780561fa29e79c887432d3ce408ea82d1614/appveyor.yml#L8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ci/edmtool.py`
Content:
```
1 #
2 # Copyright (c) 2017, Enthought, Inc.
3 # All rights reserved.
4 #
5 # This software is provided without warranty under the terms of the BSD
6 # license included in enthought/LICENSE.txt and may be redistributed only
7 # under the conditions described in the aforementioned license. The license
8 # is also available online at http://www.enthought.com/licenses/BSD.txt
9 #
10 # Thanks for using Enthought open source!
11 #
12 """
13 Tasks for Test Runs
14 ===================
15 This file is intended to be used with a python environment with the
16 click library to automate the process of setting up test environments
17 and running the test within them. This improves repeatability and
18 reliability of tests be removing many of the variables around the
19 developer's particular Python environment. Test environment setup and
20 package management is performed using `EDM http://docs.enthought.com/edm/`_
21
22 To use this to run your tests, you will need to install EDM and click
23 into your working environment. You will also need to have git
24 installed to access required source code from github repositories.
25
26 You can then do::
27 python edmtool.py install --runtime=... --toolkit=...
28 to create a test environment from the current codebase and::
29 python edmtool.py test --runtime=... --toolkit=...
30 to run tests in that environment. You can remove the environment with::
31 python edmtool.py cleanup --runtime=... --toolkit=...
32
33 If you make changes you will either need to remove and re-install the
34 environment or manually update the environment using ``edm``, as
35 the install performs a ``python setup.py install`` rather than a ``develop``,
36 so changes in your code will not be automatically mirrored in the test
37 environment. You can update with a command like::
38 edm run --environment ... -- python setup.py install
39 You can run all three tasks at once with::
40 python edmtool.py test_clean --runtime=... --toolkit=...
41 which will create, install, run tests, and then clean-up the environment. And
42 you can run tests in all supported runtimes and toolkits (with cleanup)
43 using::
44 python edmtool.py test_all
45
46 Currently supported runtime values are ``3.6``, and currently
47 supported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all
48 combinations of toolkits and runtimes will work, but the tasks will fail with
49 a clear error if that is the case. Tests can still be run via the usual means
50 in other environments if that suits a developer's purpose.
51
52 Changing This File
53 ------------------
54 To change the packages installed during a test run, change the dependencies
55 variable below. To install a package from github, or one which is not yet
56 available via EDM, add it to the `ci/requirements.txt` file (these will be
57 installed by `pip`).
58
59 Other changes to commands should be a straightforward change to the listed
60 commands for each task. See the EDM documentation for more information about
61 how to run commands within an EDM environment.
62 """
63 import glob
64 import os
65 import subprocess
66 import sys
67 from shutil import rmtree, copy as copyfile
68 from tempfile import mkdtemp
69 from contextlib import contextmanager
70
71 import click
72
73 supported_combinations = {
74 '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},
75 }
76
77 dependencies = {
78 "six",
79 "mock",
80 "numpy",
81 "pandas",
82 "pyface",
83 "pygments",
84 "pyparsing",
85 "traits",
86 "traitsui",
87 "cython",
88 "enable",
89 # Needed to install enable from source
90 "swig",
91 }
92
93 # Dependencies we install from source for cron tests
94 source_dependencies = {
95 "enable",
96 "pyface",
97 "traits",
98 "traitsui",
99 }
100
101 github_url_fmt = "git+http://github.com/enthought/{0}.git#egg={0}"
102
103 extra_dependencies = {
104 'pyside2': set(), # pyside2 is pip-installed during the install step
105 'pyqt': {'pyqt'},
106 'pyqt5': {'pyqt5'},
107 'null': set()
108 }
109
110 environment_vars = {
111 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},
112 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},
113 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},
114 'null': {'ETS_TOOLKIT': 'null.image'},
115 }
116
117
118 def normalize(name):
119 return name.replace("_", "-")
120
121
122 @click.group(context_settings={"token_normalize_func": normalize})
123 def cli():
124 pass
125
126
127 @cli.command()
128 @click.option('--runtime', default='3.6')
129 @click.option('--toolkit', default='null')
130 @click.option('--environment', default=None)
131 @click.option(
132 "--source/--no-source",
133 default=False,
134 help="Install ETS packages from source",
135 )
136 def install(runtime, toolkit, environment, source):
137 """ Install project and dependencies into a clean EDM environment.
138 """
139 parameters = get_parameters(runtime, toolkit, environment)
140 parameters['packages'] = ' '.join(
141 dependencies | extra_dependencies.get(toolkit, set()))
142 # edm commands to setup the development environment
143 commands = [
144 "edm environments create {environment} --force --version={runtime}",
145 "edm install -y -e {environment} {packages}",
146 ("edm run -e {environment} -- pip install -r ci/requirements.txt"
147 " --no-dependencies"),
148 "edm run -e {environment} -- pip install . --no-deps",
149 ]
150 # pip install pyside2, because we don't have it in EDM yet
151 if toolkit == 'pyside2':
152 commands.append(
153 "edm run -e {environment} -- pip install pyside2==5.11"
154 )
155
156 click.echo("Creating environment '{environment}'".format(**parameters))
157 execute(commands, parameters)
158
159 if source:
160 # Remove EDM ETS packages and install them from source
161 cmd_fmt = (
162 "edm plumbing remove-package "
163 "--environment {environment} --force "
164 )
165 commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]
166 execute(commands, parameters)
167 source_pkgs = [
168 github_url_fmt.format(pkg) for pkg in source_dependencies
169 ]
170 commands = [
171 "python -m pip install {pkg} --no-deps".format(pkg=pkg)
172 for pkg in source_pkgs
173 ]
174 commands = [
175 "edm run -e {environment} -- " + command for command in commands
176 ]
177 execute(commands, parameters)
178 click.echo('Done install')
179
180
181 @cli.command()
182 @click.option('--runtime', default='3.6')
183 @click.option('--toolkit', default='null')
184 @click.option('--environment', default=None)
185 def test(runtime, toolkit, environment):
186 """ Run the test suite in a given environment with the specified toolkit.
187 """
188 parameters = get_parameters(runtime, toolkit, environment)
189 environ = environment_vars.get(toolkit, {}).copy()
190
191 environ['PYTHONUNBUFFERED'] = "1"
192 commands = [
193 "edm run -e {environment} -- coverage run -m unittest discover -v chaco"
194 ]
195
196 cwd = os.getcwd()
197
198 # We run in a tempdir to avoid accidentally picking up wrong traitsui
199 # code from a local dir. We need to ensure a good .coveragerc is in
200 # that directory, plus coverage has a bug that means a non-local coverage
201 # file doesn't get populated correctly.
202 click.echo("Running tests in '{environment}'".format(**parameters))
203 with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):
204 os.environ.update(environ)
205 execute(commands, parameters)
206
207 click.echo('Done test')
208
209
210 @cli.command()
211 @click.option('--runtime', default='3.6')
212 @click.option('--toolkit', default='null')
213 @click.option('--environment', default=None)
214 def cleanup(runtime, toolkit, environment):
215 """ Remove a development environment.
216 """
217 parameters = get_parameters(runtime, toolkit, environment)
218 commands = [
219 "edm run -e {environment} -- python setup.py clean",
220 "edm environments remove {environment} --purge -y",
221 ]
222 click.echo("Cleaning up environment '{environment}'".format(**parameters))
223 execute(commands, parameters)
224 click.echo('Done cleanup')
225
226
227 @cli.command()
228 @click.option('--runtime', default='3.6')
229 @click.option('--toolkit', default='null')
230 def test_clean(runtime, toolkit):
231 """ Run tests in a clean environment, cleaning up afterwards
232 """
233 args = ['--toolkit={}'.format(toolkit),
234 '--runtime={}'.format(runtime)]
235 try:
236 install(args=args, standalone_mode=False)
237 test(args=args, standalone_mode=False)
238 finally:
239 cleanup(args=args, standalone_mode=False)
240
241
242 @cli.command()
243 @click.option('--runtime', default='3.6')
244 @click.option('--toolkit', default='null')
245 @click.option('--environment', default=None)
246 def update(runtime, toolkit, environment):
247 """ Update/Reinstall package into environment.
248 """
249 parameters = get_parameters(runtime, toolkit, environment)
250 commands = [
251 "edm run -e {environment} -- python setup.py install"]
252 click.echo("Re-installing in '{environment}'".format(**parameters))
253 execute(commands, parameters)
254 click.echo('Done update')
255
256
257 @cli.command()
258 def test_all():
259 """ Run test_clean across all supported environment combinations.
260 """
261 for runtime, toolkits in supported_combinations.items():
262 for toolkit in toolkits:
263 args = ['--toolkit={}'.format(toolkit),
264 '--runtime={}'.format(runtime)]
265 test_clean(args, standalone_mode=True)
266
267
268 # ----------------------------------------------------------------------------
269 # Utility routines
270 # ----------------------------------------------------------------------------
271
272 def get_parameters(runtime, toolkit, environment):
273 """Set up parameters dictionary for format() substitution
274 """
275 parameters = {'runtime': runtime, 'toolkit': toolkit,
276 'environment': environment}
277 if toolkit not in supported_combinations[runtime]:
278 msg = ("Python {runtime!r}, toolkit {toolkit!r}, "
279 "not supported by test environments ({available})")
280 available = ", ".join(
281 repr(tk) for tk in sorted(supported_combinations[runtime])
282 )
283 raise RuntimeError(msg.format(available=available, **parameters))
284 if environment is None:
285 tmpl = 'chaco-test-{runtime}-{toolkit}'
286 environment = tmpl.format(**parameters)
287 parameters['environment'] = environment
288 return parameters
289
290
291 @contextmanager
292 def do_in_tempdir(files=(), capture_files=()):
293 """ Create a temporary directory, cleaning up after done.
294 Creates the temporary directory, and changes into it. On exit returns to
295 original directory and removes temporary dir.
296 Parameters
297 ----------
298 files : sequence of filenames
299 Files to be copied across to temporary directory.
300 capture_files : sequence of filenames
301 Files to be copied back from temporary directory.
302 """
303 path = mkdtemp()
304 old_path = os.getcwd()
305
306 # send across any files we need
307 for filepath in files:
308 click.echo('copying file to tempdir: {}'.format(filepath))
309 copyfile(filepath, path)
310
311 os.chdir(path)
312 try:
313 yield path
314 # retrieve any result files we want
315 for pattern in capture_files:
316 for filepath in glob.iglob(pattern):
317 click.echo('copying file back: {}'.format(filepath))
318 copyfile(filepath, old_path)
319 finally:
320 os.chdir(old_path)
321 rmtree(path)
322
323
324 def execute(commands, parameters):
325 for command in commands:
326 print("[EXECUTING]", command.format(**parameters))
327 try:
328 subprocess.check_call(command.format(**parameters).split())
329 except subprocess.CalledProcessError:
330 sys.exit(1)
331
332
333 if __name__ == '__main__':
334 cli()
335
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ci/edmtool.py b/ci/edmtool.py
--- a/ci/edmtool.py
+++ b/ci/edmtool.py
@@ -101,7 +101,7 @@
github_url_fmt = "git+http://github.com/enthought/{0}.git#egg={0}"
extra_dependencies = {
- 'pyside2': set(), # pyside2 is pip-installed during the install step
+ 'pyside2': {'pyside2'},
'pyqt': {'pyqt'},
'pyqt5': {'pyqt5'},
'null': set()
@@ -139,19 +139,20 @@
parameters = get_parameters(runtime, toolkit, environment)
parameters['packages'] = ' '.join(
dependencies | extra_dependencies.get(toolkit, set()))
+
+ if toolkit == "pyside2":
+ additional_repositories = "--add-repository enthought/lgpl"
+ else:
+ additional_repositories = ""
+
# edm commands to setup the development environment
commands = [
"edm environments create {environment} --force --version={runtime}",
- "edm install -y -e {environment} {packages}",
+ "edm install -y -e {environment} {packages} " + additional_repositories,
("edm run -e {environment} -- pip install -r ci/requirements.txt"
" --no-dependencies"),
"edm run -e {environment} -- pip install . --no-deps",
]
- # pip install pyside2, because we don't have it in EDM yet
- if toolkit == 'pyside2':
- commands.append(
- "edm run -e {environment} -- pip install pyside2==5.11"
- )
click.echo("Creating environment '{environment}'".format(**parameters))
execute(commands, parameters)
| {"golden_diff": "diff --git a/ci/edmtool.py b/ci/edmtool.py\n--- a/ci/edmtool.py\n+++ b/ci/edmtool.py\n@@ -101,7 +101,7 @@\n github_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n \n extra_dependencies = {\n- 'pyside2': set(), # pyside2 is pip-installed during the install step\n+ 'pyside2': {'pyside2'},\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n@@ -139,19 +139,20 @@\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n+\n+ if toolkit == \"pyside2\":\n+ additional_repositories = \"--add-repository enthought/lgpl\"\n+ else:\n+ additional_repositories = \"\"\n+\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n- \"edm install -y -e {environment} {packages}\",\n+ \"edm install -y -e {environment} {packages} \" + additional_repositories,\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n \"edm run -e {environment} -- pip install . --no-deps\",\n ]\n- # pip install pyside2, because we don't have it in EDM yet\n- if toolkit == 'pyside2':\n- commands.append(\n- \"edm run -e {environment} -- pip install pyside2==5.11\"\n- )\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n", "issue": "Use the latest EDM version (3.0.1)\nA new major version of EDM is available and it would be good to update to the latest version - https://assets.enthought.com/downloads/edm/. This should be as simple as updating the environment variables in travis and appveyor config files.\r\n\r\nhttps://github.com/enthought/chaco/blob/3de7780561fa29e79c887432d3ce408ea82d1614/.travis.yml#L23\r\nhttps://github.com/enthought/chaco/blob/3de7780561fa29e79c887432d3ce408ea82d1614/appveyor.yml#L8\n", "before_files": [{"content": "#\n# Copyright (c) 2017, Enthought, Inc.\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in enthought/LICENSE.txt and may be redistributed only\n# under the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n#\n\"\"\"\nTasks for Test Runs\n===================\nThis file is intended to be used with a python environment with the\nclick library to automate the process of setting up test environments\nand running the test within them. This improves repeatability and\nreliability of tests be removing many of the variables around the\ndeveloper's particular Python environment. Test environment setup and\npackage management is performed using `EDM http://docs.enthought.com/edm/`_\n\nTo use this to run you tests, you will need to install EDM and click\ninto your working environment. You will also need to have git\ninstalled to access required source code from github repositories.\n\nYou can then do::\n python edmtool.py install --runtime=... --toolkit=...\nto create a test environment from the current codebase and::\n python edmtool.py test --runtime=... --toolkit=...\nto run tests in that environment. You can remove the environment with::\n python edmtool.py cleanup --runtime=... 
--toolkit=...\n\nIf you make changes you will either need to remove and re-install the\nenvironment or manually update the environment using ``edm``, as\nthe install performs a ``python setup.py install`` rather than a ``develop``,\nso changes in your code will not be automatically mirrored in the test\nenvironment. You can update with a command like::\n edm run --environment ... -- python setup.py install\nYou can run all three tasks at once with::\n python edmtool.py test_clean --runtime=... --toolkit=...\nwhich will create, install, run tests, and then clean-up the environment. And\nyou can run tests in all supported runtimes and toolkits (with cleanup)\nusing::\n python edmtool.py test_all\n\nCurrently supported runtime values are ``3.6``, and currently\nsupported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all\ncombinations of toolkits and runtimes will work, but the tasks will fail with\na clear error if that is the case. Tests can still be run via the usual means\nin other environments if that suits a developer's purpose.\n\nChanging This File\n------------------\nTo change the packages installed during a test run, change the dependencies\nvariable below. To install a package from github, or one which is not yet\navailable via EDM, add it to the `ci/requirements.txt` file (these will be\ninstalled by `pip`).\n\nOther changes to commands should be a straightforward change to the listed\ncommands for each task. See the EDM documentation for more information about\nhow to run commands within an EDM enviornment.\n\"\"\"\nimport glob\nimport os\nimport subprocess\nimport sys\nfrom shutil import rmtree, copy as copyfile\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\n\nimport click\n\nsupported_combinations = {\n '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},\n}\n\ndependencies = {\n \"six\",\n \"mock\",\n \"numpy\",\n \"pandas\",\n \"pyface\",\n \"pygments\",\n \"pyparsing\",\n \"traits\",\n \"traitsui\",\n \"cython\",\n \"enable\",\n # Needed to install enable from source\n \"swig\",\n}\n\n# Dependencies we install from source for cron tests\nsource_dependencies = {\n \"enable\",\n \"pyface\",\n \"traits\",\n \"traitsui\",\n}\n\ngithub_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n\nextra_dependencies = {\n 'pyside2': set(), # pyside2 is pip-installed during the install step\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n}\n\nenvironment_vars = {\n 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},\n 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},\n 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},\n 'null': {'ETS_TOOLKIT': 'null.image'},\n}\n\n\ndef normalize(name):\n return name.replace(\"_\", \"-\")\n\n\[email protected](context_settings={\"token_normalize_func\": normalize})\ndef cli():\n pass\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\[email protected](\n \"--source/--no-source\",\n default=False,\n help=\"Install ETS packages from source\",\n)\ndef install(runtime, toolkit, environment, source):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n \"edm install 
-y -e {environment} {packages}\",\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n \"edm run -e {environment} -- pip install . --no-deps\",\n ]\n # pip install pyside2, because we don't have it in EDM yet\n if toolkit == 'pyside2':\n commands.append(\n \"edm run -e {environment} -- pip install pyside2==5.11\"\n )\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n\n if source:\n # Remove EDM ETS packages and install them from source\n cmd_fmt = (\n \"edm plumbing remove-package \"\n \"--environment {environment} --force \"\n )\n commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]\n execute(commands, parameters)\n source_pkgs = [\n github_url_fmt.format(pkg) for pkg in source_dependencies\n ]\n commands = [\n \"python -m pip install {pkg} --no-deps\".format(pkg=pkg)\n for pkg in source_pkgs\n ]\n commands = [\n \"edm run -e {environment} -- \" + command for command in commands\n ]\n execute(commands, parameters)\n click.echo('Done install')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef test(runtime, toolkit, environment):\n \"\"\" Run the test suite in a given environment with the specified toolkit.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n environ = environment_vars.get(toolkit, {}).copy()\n\n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n \"edm run -e {environment} -- coverage run -m unittest discover -v chaco\"\n ]\n\n cwd = os.getcwd()\n\n # We run in a tempdir to avoid accidentally picking up wrong traitsui\n # code from a local dir. We need to ensure a good .coveragerc is in\n # that directory, plus coverage has a bug that means a non-local coverage\n # file doesn't get populated correctly.\n click.echo(\"Running tests in '{environment}'\".format(**parameters))\n with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):\n os.environ.update(environ)\n execute(commands, parameters)\n\n click.echo('Done test')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef cleanup(runtime, toolkit, environment):\n \"\"\" Remove a development environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py clean\",\n \"edm environments remove {environment} --purge -y\",\n ]\n click.echo(\"Cleaning up environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done cleanup')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\ndef test_clean(runtime, toolkit):\n \"\"\" Run tests in a clean environment, cleaning up afterwards\n \"\"\"\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n try:\n install(args=args, standalone_mode=False)\n test(args=args, standalone_mode=False)\n finally:\n cleanup(args=args, standalone_mode=False)\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef update(runtime, toolkit, environment):\n \"\"\" Update/Reinstall package into environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm 
run -e {environment} -- python setup.py install\"]\n click.echo(\"Re-installing in '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done update')\n\n\[email protected]()\ndef test_all():\n \"\"\" Run test_clean across all supported environment combinations.\n \"\"\"\n for runtime, toolkits in supported_combinations.items():\n for toolkit in toolkits:\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n test_clean(args, standalone_mode=True)\n\n\n# ----------------------------------------------------------------------------\n# Utility routines\n# ----------------------------------------------------------------------------\n\ndef get_parameters(runtime, toolkit, environment):\n \"\"\"Set up parameters dictionary for format() substitution\n \"\"\"\n parameters = {'runtime': runtime, 'toolkit': toolkit,\n 'environment': environment}\n if toolkit not in supported_combinations[runtime]:\n msg = (\"Python {runtime!r}, toolkit {toolkit!r}, \"\n \"not supported by test environments ({available})\")\n available = \", \".join(\n repr(tk) for tk in sorted(supported_combinations[runtime])\n )\n raise RuntimeError(msg.format(available=available, **parameters))\n if environment is None:\n tmpl = 'chaco-test-{runtime}-{toolkit}'\n environment = tmpl.format(**parameters)\n parameters['environment'] = environment\n return parameters\n\n\n@contextmanager\ndef do_in_tempdir(files=(), capture_files=()):\n \"\"\" Create a temporary directory, cleaning up after done.\n Creates the temporary directory, and changes into it. On exit returns to\n original directory and removes temporary dir.\n Parameters\n ----------\n files : sequence of filenames\n Files to be copied across to temporary directory.\n capture_files : sequence of filenames\n Files to be copied back from temporary directory.\n \"\"\"\n path = mkdtemp()\n old_path = os.getcwd()\n\n # send across any files we need\n for filepath in files:\n click.echo('copying file to tempdir: {}'.format(filepath))\n copyfile(filepath, path)\n\n os.chdir(path)\n try:\n yield path\n # retrieve any result files we want\n for pattern in capture_files:\n for filepath in glob.iglob(pattern):\n click.echo('copying file back: {}'.format(filepath))\n copyfile(filepath, old_path)\n finally:\n os.chdir(old_path)\n rmtree(path)\n\n\ndef execute(commands, parameters):\n for command in commands:\n print(\"[EXECUTING]\", command.format(**parameters))\n try:\n subprocess.check_call(command.format(**parameters).split())\n except subprocess.CalledProcessError:\n sys.exit(1)\n\n\nif __name__ == '__main__':\n cli()\n", "path": "ci/edmtool.py"}], "after_files": [{"content": "#\n# Copyright (c) 2017, Enthought, Inc.\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in enthought/LICENSE.txt and may be redistributed only\n# under the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n#\n\"\"\"\nTasks for Test Runs\n===================\nThis file is intended to be used with a python environment with the\nclick library to automate the process of setting up test environments\nand running the test within them. This improves repeatability and\nreliability of tests be removing many of the variables around the\ndeveloper's particular Python environment. 
Test environment setup and\npackage management is performed using `EDM http://docs.enthought.com/edm/`_\n\nTo use this to run you tests, you will need to install EDM and click\ninto your working environment. You will also need to have git\ninstalled to access required source code from github repositories.\n\nYou can then do::\n python edmtool.py install --runtime=... --toolkit=...\nto create a test environment from the current codebase and::\n python edmtool.py test --runtime=... --toolkit=...\nto run tests in that environment. You can remove the environment with::\n python edmtool.py cleanup --runtime=... --toolkit=...\n\nIf you make changes you will either need to remove and re-install the\nenvironment or manually update the environment using ``edm``, as\nthe install performs a ``python setup.py install`` rather than a ``develop``,\nso changes in your code will not be automatically mirrored in the test\nenvironment. You can update with a command like::\n edm run --environment ... -- python setup.py install\nYou can run all three tasks at once with::\n python edmtool.py test_clean --runtime=... --toolkit=...\nwhich will create, install, run tests, and then clean-up the environment. And\nyou can run tests in all supported runtimes and toolkits (with cleanup)\nusing::\n python edmtool.py test_all\n\nCurrently supported runtime values are ``3.6``, and currently\nsupported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all\ncombinations of toolkits and runtimes will work, but the tasks will fail with\na clear error if that is the case. Tests can still be run via the usual means\nin other environments if that suits a developer's purpose.\n\nChanging This File\n------------------\nTo change the packages installed during a test run, change the dependencies\nvariable below. To install a package from github, or one which is not yet\navailable via EDM, add it to the `ci/requirements.txt` file (these will be\ninstalled by `pip`).\n\nOther changes to commands should be a straightforward change to the listed\ncommands for each task. 
See the EDM documentation for more information about\nhow to run commands within an EDM enviornment.\n\"\"\"\nimport glob\nimport os\nimport subprocess\nimport sys\nfrom shutil import rmtree, copy as copyfile\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\n\nimport click\n\nsupported_combinations = {\n '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},\n}\n\ndependencies = {\n \"six\",\n \"mock\",\n \"numpy\",\n \"pandas\",\n \"pyface\",\n \"pygments\",\n \"pyparsing\",\n \"traits\",\n \"traitsui\",\n \"cython\",\n \"enable\",\n # Needed to install enable from source\n \"swig\",\n}\n\n# Dependencies we install from source for cron tests\nsource_dependencies = {\n \"enable\",\n \"pyface\",\n \"traits\",\n \"traitsui\",\n}\n\ngithub_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n\nextra_dependencies = {\n 'pyside2': {'pyside2'},\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n}\n\nenvironment_vars = {\n 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},\n 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},\n 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},\n 'null': {'ETS_TOOLKIT': 'null.image'},\n}\n\n\ndef normalize(name):\n return name.replace(\"_\", \"-\")\n\n\[email protected](context_settings={\"token_normalize_func\": normalize})\ndef cli():\n pass\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\[email protected](\n \"--source/--no-source\",\n default=False,\n help=\"Install ETS packages from source\",\n)\ndef install(runtime, toolkit, environment, source):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n\n if toolkit == \"pyside2\":\n additional_repositories = \"--add-repository enthought/lgpl\"\n else:\n additional_repositories = \"\"\n\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n \"edm install -y -e {environment} {packages} \" + additional_repositories,\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n \"edm run -e {environment} -- pip install . 
--no-deps\",\n ]\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n\n if source:\n # Remove EDM ETS packages and install them from source\n cmd_fmt = (\n \"edm plumbing remove-package \"\n \"--environment {environment} --force \"\n )\n commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]\n execute(commands, parameters)\n source_pkgs = [\n github_url_fmt.format(pkg) for pkg in source_dependencies\n ]\n commands = [\n \"python -m pip install {pkg} --no-deps\".format(pkg=pkg)\n for pkg in source_pkgs\n ]\n commands = [\n \"edm run -e {environment} -- \" + command for command in commands\n ]\n execute(commands, parameters)\n click.echo('Done install')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef test(runtime, toolkit, environment):\n \"\"\" Run the test suite in a given environment with the specified toolkit.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n environ = environment_vars.get(toolkit, {}).copy()\n\n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n \"edm run -e {environment} -- coverage run -m unittest discover -v chaco\"\n ]\n\n cwd = os.getcwd()\n\n # We run in a tempdir to avoid accidentally picking up wrong traitsui\n # code from a local dir. We need to ensure a good .coveragerc is in\n # that directory, plus coverage has a bug that means a non-local coverage\n # file doesn't get populated correctly.\n click.echo(\"Running tests in '{environment}'\".format(**parameters))\n with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):\n os.environ.update(environ)\n execute(commands, parameters)\n\n click.echo('Done test')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef cleanup(runtime, toolkit, environment):\n \"\"\" Remove a development environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py clean\",\n \"edm environments remove {environment} --purge -y\",\n ]\n click.echo(\"Cleaning up environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done cleanup')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\ndef test_clean(runtime, toolkit):\n \"\"\" Run tests in a clean environment, cleaning up afterwards\n \"\"\"\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n try:\n install(args=args, standalone_mode=False)\n test(args=args, standalone_mode=False)\n finally:\n cleanup(args=args, standalone_mode=False)\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef update(runtime, toolkit, environment):\n \"\"\" Update/Reinstall package into environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py install\"]\n click.echo(\"Re-installing in '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done update')\n\n\[email protected]()\ndef test_all():\n \"\"\" Run test_clean across all supported environment combinations.\n \"\"\"\n for runtime, toolkits in 
supported_combinations.items():\n for toolkit in toolkits:\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n test_clean(args, standalone_mode=True)\n\n\n# ----------------------------------------------------------------------------\n# Utility routines\n# ----------------------------------------------------------------------------\n\ndef get_parameters(runtime, toolkit, environment):\n \"\"\"Set up parameters dictionary for format() substitution\n \"\"\"\n parameters = {'runtime': runtime, 'toolkit': toolkit,\n 'environment': environment}\n if toolkit not in supported_combinations[runtime]:\n msg = (\"Python {runtime!r}, toolkit {toolkit!r}, \"\n \"not supported by test environments ({available})\")\n available = \", \".join(\n repr(tk) for tk in sorted(supported_combinations[runtime])\n )\n raise RuntimeError(msg.format(available=available, **parameters))\n if environment is None:\n tmpl = 'chaco-test-{runtime}-{toolkit}'\n environment = tmpl.format(**parameters)\n parameters['environment'] = environment\n return parameters\n\n\n@contextmanager\ndef do_in_tempdir(files=(), capture_files=()):\n \"\"\" Create a temporary directory, cleaning up after done.\n Creates the temporary directory, and changes into it. On exit returns to\n original directory and removes temporary dir.\n Parameters\n ----------\n files : sequence of filenames\n Files to be copied across to temporary directory.\n capture_files : sequence of filenames\n Files to be copied back from temporary directory.\n \"\"\"\n path = mkdtemp()\n old_path = os.getcwd()\n\n # send across any files we need\n for filepath in files:\n click.echo('copying file to tempdir: {}'.format(filepath))\n copyfile(filepath, path)\n\n os.chdir(path)\n try:\n yield path\n # retrieve any result files we want\n for pattern in capture_files:\n for filepath in glob.iglob(pattern):\n click.echo('copying file back: {}'.format(filepath))\n copyfile(filepath, old_path)\n finally:\n os.chdir(old_path)\n rmtree(path)\n\n\ndef execute(commands, parameters):\n for command in commands:\n print(\"[EXECUTING]\", command.format(**parameters))\n try:\n subprocess.check_call(command.format(**parameters).split())\n except subprocess.CalledProcessError:\n sys.exit(1)\n\n\nif __name__ == '__main__':\n cli()\n", "path": "ci/edmtool.py"}]} | 3,947 | 420 |
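To make the effect of the patch above concrete, the sketch below (not part of the repository) mirrors how the patched `install()` assembles the EDM command for the `pyside2` toolkit. The package set is abbreviated, and the environment name follows the `chaco-test-{runtime}-{toolkit}` template from `get_parameters()`.

```python
# Sketch only: reproduces the command-building logic added in the golden diff.
toolkit = "pyside2"
runtime = "3.6"
packages = " ".join(sorted({"numpy", "enable", "pyside2"}))  # abbreviated dependency set
additional_repositories = "--add-repository enthought/lgpl" if toolkit == "pyside2" else ""
command = "edm install -y -e chaco-test-{runtime}-{toolkit} {packages} ".format(
    runtime=runtime, toolkit=toolkit, packages=packages
) + additional_repositories
print(command)
# edm install -y -e chaco-test-3.6-pyside2 enable numpy pyside2 --add-repository enthought/lgpl
```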
gh_patches_debug_12548 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 VIEWs load error should be in debug mode only
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/planner/views/__init__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import orjson
14
15 from opteryx.planner.logical_planner import LogicalPlan
16
17
18 def _load_views():
19 try:
20 with open("views.json", "rb") as defs:
21 return orjson.loads(defs.read())
22 except Exception as err:
23 # DEBUG:: log (f"[OPTERYX] Unable to open views definition file. {err}")
24 return {}
25
26
27 VIEWS = _load_views()
28
29
30 def is_view(view_name: str) -> bool:
31 return view_name in VIEWS
32
33
34 def view_as_plan(view_name: str) -> LogicalPlan:
35 from opteryx.planner.logical_planner import do_logical_planning_phase
36 from opteryx.third_party import sqloxide
37 from opteryx.utils.sql import clean_statement
38 from opteryx.utils.sql import remove_comments
39
40 operation = VIEWS.get(view_name)["statement"]
41
42 clean_sql = clean_statement(remove_comments(operation))
43 parsed_statements = sqloxide.parse_sql(clean_sql, dialect="mysql")
44 logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))
45
46 return logical_plan
47
```
Path: `opteryx/__version__.py`
Content:
```
1 __build__ = 523
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 16
31 _revision = 0
32 _status = VersionStatus.ALPHA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 523
+__build__ = 526
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
diff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py
--- a/opteryx/planner/views/__init__.py
+++ b/opteryx/planner/views/__init__.py
@@ -20,7 +20,7 @@
with open("views.json", "rb") as defs:
return orjson.loads(defs.read())
except Exception as err:
- print(f"[OPTERYX] Unable to open views definition file. {err}")
+ # DEBUG:: log (f"[OPTERYX] Unable to open views definition file. {err}")
return {}
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 523\n+__build__ = 526\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\ndiff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py\n--- a/opteryx/planner/views/__init__.py\n+++ b/opteryx/planner/views/__init__.py\n@@ -20,7 +20,7 @@\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n- print(f\"[OPTERYX] Unable to open views definition file. {err}\")\n+ # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. {err}\")\n return {}\n", "issue": "\ud83e\udeb2 VIEWs load error should be in debug mode only\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport orjson\n\nfrom opteryx.planner.logical_planner import LogicalPlan\n\n\ndef _load_views():\n try:\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. 
{err}\")\n return {}\n\n\nVIEWS = _load_views()\n\n\ndef is_view(view_name: str) -> bool:\n return view_name in VIEWS\n\n\ndef view_as_plan(view_name: str) -> LogicalPlan:\n from opteryx.planner.logical_planner import do_logical_planning_phase\n from opteryx.third_party import sqloxide\n from opteryx.utils.sql import clean_statement\n from opteryx.utils.sql import remove_comments\n\n operation = VIEWS.get(view_name)[\"statement\"]\n\n clean_sql = clean_statement(remove_comments(operation))\n parsed_statements = sqloxide.parse_sql(clean_sql, dialect=\"mysql\")\n logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))\n\n return logical_plan\n", "path": "opteryx/planner/views/__init__.py"}, {"content": "__build__ = 523\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 16\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport orjson\n\nfrom opteryx.planner.logical_planner import LogicalPlan\n\n\ndef _load_views():\n try:\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. 
{err}\")\n return {}\n\n\nVIEWS = _load_views()\n\n\ndef is_view(view_name: str) -> bool:\n return view_name in VIEWS\n\n\ndef view_as_plan(view_name: str) -> LogicalPlan:\n from opteryx.planner.logical_planner import do_logical_planning_phase\n from opteryx.third_party import sqloxide\n from opteryx.utils.sql import clean_statement\n from opteryx.utils.sql import remove_comments\n\n operation = VIEWS.get(view_name)[\"statement\"]\n\n clean_sql = clean_statement(remove_comments(operation))\n parsed_statements = sqloxide.parse_sql(clean_sql, dialect=\"mysql\")\n logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))\n\n return logical_plan\n", "path": "opteryx/planner/views/__init__.py"}, {"content": "__build__ = 526\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 16\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 1,248 | 238 |
gh_patches_debug_20772 | rasdani/github-patches | git_diff | nvaccess__nvda-10144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in alpha with the built-in Audacity app module
### Steps to reproduce:
1. Open Audacity.
2. Select some audio; Ctrl+A works.
3. Press Alt+C, then press C until "Compressor" is found.
4. Press Enter.
5. Press Tab.
### Actual behavior:
```
ERROR - eventHandler.executeEvent (23:07:33.553):
error executing event: gainFocus on <NVDAObjects.IAccessible.IAccessible object at 0x03318FF0> with extra args of {}
Traceback (most recent call last):
File "eventHandler.pyc", line 150, in executeEvent
File "eventHandler.pyc", line 163, in doPreGainFocus
File "api.pyc", line 109, in setFocusObject
File "baseObject.pyc", line 46, in __get__
File "baseObject.pyc", line 149, in _getPropertyViaCache
File "NVDAObjects\IAccessible\__init__.pyc", line 1695, in _get_container
File "IAccessibleHandler.pyc", line 950, in findGroupboxObject
File "NVDAObjects\IAccessible\__init__.pyc", line 48, in getNVDAObjectFromEvent
File "NVDAObjects\__init__.pyc", line 137, in __call__
File "appModules\audacity.pyc", line 15, in event_NVDAObject_init
AttributeError: 'NoneType' object has no attribute 'replace'
```
### Expected behavior:
Items in the UI are read.
### System configuration
#### NVDA installed/portable/running from source:
installed
#### NVDA version:
Version: alpha-18399,26485a09
#### Windows version:
10
#### Name and version of other software in use when reproducing the issue:
Audacity: 2.3.1
--- END ISSUE ---
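The traceback above shows that `obj.name` can be `None` when `event_NVDAObject_init` calls `.replace` on it. Below is a minimal stand-alone sketch of that failure mode and the guard that avoids it; `FakeObj` is a hypothetical stand-in for the focused object, not an NVDA class.
```python
# Hypothetical stand-in for the focused object; not an NVDA class.
class FakeObj:
    windowClassName = "Button"
    role = None
    name = None  # some Audacity buttons expose no accessible name


def strip_ampersand(obj):
    # Guarded version: only touch obj.name when it is actually a string.
    if obj.windowClassName == "Button" and obj.name is not None:
        obj.name = obj.name.replace("&", "")


strip_ampersand(FakeObj())  # no AttributeError; name simply stays None
```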
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/audacity.py`
Content:
```
1 # -*- coding: UTF-8 -*-
2 #appModules/audacity.py
3 #A part of NonVisual Desktop Access (NVDA)
4 #Copyright (C) 2006-2018 NV Access Limited, Robert Hänggi
5 #This file is covered by the GNU General Public License.
6 #See the file COPYING for more details.
7
8 import appModuleHandler
9 import controlTypes
10
11 class AppModule(appModuleHandler.AppModule):
12
13 def event_NVDAObject_init(self,obj):
14 if obj.windowClassName=="Button" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:
15 obj.name=obj.name.replace('&','')
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/appModules/audacity.py b/source/appModules/audacity.py
--- a/source/appModules/audacity.py
+++ b/source/appModules/audacity.py
@@ -1,9 +1,8 @@
# -*- coding: UTF-8 -*-
-#appModules/audacity.py
-#A part of NonVisual Desktop Access (NVDA)
-#Copyright (C) 2006-2018 NV Access Limited, Robert Hänggi
-#This file is covered by the GNU General Public License.
-#See the file COPYING for more details.
+# A part of NonVisual Desktop Access (NVDA)
+# Copyright (C) 2006-2019 NV Access Limited, Robert Hänggi, Łukasz Golonka
+# This file is covered by the GNU General Public License.
+# See the file COPYING for more details.
import appModuleHandler
import controlTypes
@@ -11,5 +10,9 @@
class AppModule(appModuleHandler.AppModule):
def event_NVDAObject_init(self,obj):
- if obj.windowClassName=="Button" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:
+ if (
+ obj.windowClassName == "Button"
+ and obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]
+ and obj.name is not None
+ ):
obj.name=obj.name.replace('&','')
| {"golden_diff": "diff --git a/source/appModules/audacity.py b/source/appModules/audacity.py\n--- a/source/appModules/audacity.py\n+++ b/source/appModules/audacity.py\n@@ -1,9 +1,8 @@\n # -*- coding: UTF-8 -*-\r\n-#appModules/audacity.py\r\n-#A part of NonVisual Desktop Access (NVDA)\r\n-#Copyright (C) 2006-2018 NV Access Limited, Robert H\u00e4nggi\r\n-#This file is covered by the GNU General Public License.\r\n-#See the file COPYING for more details.\r\n+# A part of NonVisual Desktop Access (NVDA)\r\n+# Copyright (C) 2006-2019 NV Access Limited, Robert H\u00e4nggi, \u0141ukasz Golonka\r\n+# This file is covered by the GNU General Public License.\r\n+# See the file COPYING for more details.\r\n \r\n import appModuleHandler\r\n import controlTypes\r\n@@ -11,5 +10,9 @@\n class AppModule(appModuleHandler.AppModule):\r\n \r\n \tdef event_NVDAObject_init(self,obj):\r\n-\t\tif obj.windowClassName==\"Button\" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:\r\n+\t\tif (\r\n+\t\t\tobj.windowClassName == \"Button\"\r\n+\t\t\tand obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]\r\n+\t\t\tand obj.name is not None\r\n+\t\t):\r\n \t\t\tobj.name=obj.name.replace('&','')\n", "issue": "Error in alpha with built-in audacity appmod\n### Steps to reproduce:\r\n\r\n1. open audacity.\r\n2. select audio. ctrl+a works.\r\n3. press alt+c, and press c until \"compresser\" is found.\r\n4. press enter.\r\n5. press tab.\r\n\r\n### Actual behavior:\r\n\r\n```\r\nERROR - eventHandler.executeEvent (23:07:33.553):\r\nerror executing event: gainFocus on <NVDAObjects.IAccessible.IAccessible object at 0x03318FF0> with extra args of {}\r\nTraceback (most recent call last):\r\n File \"eventHandler.pyc\", line 150, in executeEvent\r\n File \"eventHandler.pyc\", line 163, in doPreGainFocus\r\n File \"api.pyc\", line 109, in setFocusObject\r\n File \"baseObject.pyc\", line 46, in __get__\r\n File \"baseObject.pyc\", line 149, in _getPropertyViaCache\r\n File \"NVDAObjects\\IAccessible\\__init__.pyc\", line 1695, in _get_container\r\n File \"IAccessibleHandler.pyc\", line 950, in findGroupboxObject\r\n File \"NVDAObjects\\IAccessible\\__init__.pyc\", line 48, in getNVDAObjectFromEvent\r\n File \"NVDAObjects\\__init__.pyc\", line 137, in __call__\r\n File \"appModules\\audacity.pyc\", line 15, in event_NVDAObject_init\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n```\r\n\r\n### Expected behavior:\r\n\r\nItems in the UI read.\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\ninstalled\r\n\r\n#### NVDA version:\r\nVersion: alpha-18399,26485a09\r\n\r\n\r\n#### Windows version:\r\n^10\r\n\r\n#### Name and version of other software in use when reproducing the issue:\r\nAudacity: 2.3.1\r\n\r\n\n", "before_files": [{"content": "# -*- coding: UTF-8 -*-\r\n#appModules/audacity.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2018 NV Access Limited, Robert H\u00e4nggi\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\nimport appModuleHandler\r\nimport controlTypes\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif obj.windowClassName==\"Button\" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:\r\n\t\t\tobj.name=obj.name.replace('&','')\r\n", "path": 
"source/appModules/audacity.py"}], "after_files": [{"content": "# -*- coding: UTF-8 -*-\r\n# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2006-2019 NV Access Limited, Robert H\u00e4nggi, \u0141ukasz Golonka\r\n# This file is covered by the GNU General Public License.\r\n# See the file COPYING for more details.\r\n\r\nimport appModuleHandler\r\nimport controlTypes\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif (\r\n\t\t\tobj.windowClassName == \"Button\"\r\n\t\t\tand obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]\r\n\t\t\tand obj.name is not None\r\n\t\t):\r\n\t\t\tobj.name=obj.name.replace('&','')\r\n", "path": "source/appModules/audacity.py"}]} | 881 | 339 |
gh_patches_debug_28192 | rasdani/github-patches | git_diff | PennyLaneAI__pennylane-5422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Sampling an observable with more than one wire is not compatible with JAX-JIT
### Expected behavior
That I can JIT a circuit that returns `qml.sample(obs)`, where `obs` is some observable with more than one wire.
### Actual behavior
A shape mismatch causes the circuit to fail when executed with JAX-JIT.
### Additional information
_No response_
### Source code
```shell
import jax
jax.config.update("jax_enable_x64", True)
dev = qml.device("default.qubit", wires=2, shots=100)
@qml.qnode(dev, interface="jax")
def circuit(x):
qml.RX(x, wires=0)
return qml.sample(qml.PauliX(0) @ qml.PauliY(1))
results = jax.jit(circuit)(jax.numpy.array(0.123, dtype=jax.numpy.float64))
```
### Tracebacks
```shell
---------------------------------------------------------------------------
XlaRuntimeError Traceback (most recent call last)
Cell In[131], line 12
9 qml.RX(x, wires=0)
10 return qml.sample(qml.PauliX(0) @ qml.PauliY(1))
---> 12 results = jax.jit(circuit)(jax.numpy.array(0.123, dtype=jax.numpy.float64))
[... skipping hidden 10 frame]
File /opt/homebrew/Caskroom/miniforge/base/envs/pennylane/lib/python3.10/site-packages/jax/_src/interpreters/pxla.py:1229, in ExecuteReplicated.__call__(self, *args)
1224 self._handle_token_bufs(
1225 results.disassemble_prefix_into_single_device_arrays(
1226 len(self.ordered_effects)),
1227 results.consume_token())
1228 else:
-> 1229 results = self.xla_executable.execute_sharded(input_bufs)
1230 if dispatch.needs_check_special():
1231 out_arrays = results.disassemble_into_single_device_arrays()
XlaRuntimeError: INTERNAL: Generated function failed: CpuCallback error: RuntimeError: Incorrect output shape for return value 0: Expected: (100, 2), Actual: (100,)
```
### System information
```shell
pl-dev
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
--- END ISSUE ---
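The shape complaint in the traceback comes from the declared result shape: sampling an observable returns one eigenvalue per shot, no matter how many wires the observable acts on, so 100 shots should yield shape `(100,)` rather than `(100, 2)`. A small NumPy-only sketch of the two shapes (an illustration of the expectation, not PennyLane code):
```python
import numpy as np

shots = 100
rng = np.random.default_rng(0)

# Sampling PauliX(0) @ PauliY(1) yields a single +/-1 eigenvalue per shot.
eigenvalue_samples = rng.choice([-1.0, 1.0], size=shots)
# Sampling the computational basis on two wires yields one bit per wire.
basis_samples = rng.integers(0, 2, size=(shots, 2))

print(eigenvalue_samples.shape)  # (100,)  what execution actually returns
print(basis_samples.shape)       # (100, 2) what the faulty shape promised
```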
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pennylane/measurements/sample.py`
Content:
```
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This module contains the qml.sample measurement.
16 """
17 import functools
18 import warnings
19 from typing import Sequence, Tuple, Optional, Union
20
21 import pennylane as qml
22 from pennylane.operation import Operator
23 from pennylane.wires import Wires
24
25 from .measurements import MeasurementShapeError, Sample, SampleMeasurement
26 from .mid_measure import MeasurementValue
27
28
29 def sample(op: Optional[Union[Operator, MeasurementValue]] = None, wires=None) -> "SampleMP":
30 r"""Sample from the supplied observable, with the number of shots
31 determined from the ``dev.shots`` attribute of the corresponding device,
32 returning raw samples. If no observable is provided then basis state samples are returned
33 directly from the device.
34
35 Note that the output shape of this measurement process depends on the shots
36 specified on the device.
37
38 Args:
39 op (Observable or MeasurementValue): a quantum observable object. To get samples
40 for mid-circuit measurements, ``op`` should be a``MeasurementValue``.
41 wires (Sequence[int] or int or None): the wires we wish to sample from; ONLY set wires if
42 op is ``None``
43
44 Returns:
45 SampleMP: Measurement process instance
46
47 Raises:
48 ValueError: Cannot set wires if an observable is provided
49
50 The samples are drawn from the eigenvalues :math:`\{\lambda_i\}` of the observable.
51 The probability of drawing eigenvalue :math:`\lambda_i` is given by
52 :math:`p(\lambda_i) = |\langle \xi_i | \psi \rangle|^2`, where :math:`| \xi_i \rangle`
53 is the corresponding basis state from the observable's eigenbasis.
54
55 .. note::
56
57 QNodes that return samples cannot, in general, be differentiated, since the derivative
58 with respect to a sample --- a stochastic process --- is ill-defined. An alternative
59 approach would be to use single-shot expectation values. For example, instead of this:
60
61 .. code-block:: python
62
63 dev = qml.device("default.qubit", shots=10)
64
65 @qml.qnode(dev, diff_method="parameter-shift")
66 def circuit(angle):
67 qml.RX(angle, wires=0)
68 return qml.sample(qml.PauliX(0))
69
70 angle = qml.numpy.array(0.1)
71 res = qml.jacobian(circuit)(angle)
72
73 Consider using :func:`~pennylane.expval` and a sequence of single shots, like this:
74
75 .. code-block:: python
76
77 dev = qml.device("default.qubit", shots=[(1, 10)])
78
79 @qml.qnode(dev, diff_method="parameter-shift")
80 def circuit(angle):
81 qml.RX(angle, wires=0)
82 return qml.expval(qml.PauliX(0))
83
84 def cost(angle):
85 return qml.math.hstack(circuit(angle))
86
87 angle = qml.numpy.array(0.1)
88 res = qml.jacobian(cost)(angle)
89
90 **Example**
91
92 .. code-block:: python3
93
94 dev = qml.device("default.qubit", wires=2, shots=4)
95
96 @qml.qnode(dev)
97 def circuit(x):
98 qml.RX(x, wires=0)
99 qml.Hadamard(wires=1)
100 qml.CNOT(wires=[0, 1])
101 return qml.sample(qml.Y(0))
102
103 Executing this QNode:
104
105 >>> circuit(0.5)
106 array([ 1., 1., 1., -1.])
107
108 If no observable is provided, then the raw basis state samples obtained
109 from device are returned (e.g., for a qubit device, samples from the
110 computational device are returned). In this case, ``wires`` can be specified
111 so that sample results only include measurement results of the qubits of interest.
112
113 .. code-block:: python3
114
115 dev = qml.device("default.qubit", wires=2, shots=4)
116
117 @qml.qnode(dev)
118 def circuit(x):
119 qml.RX(x, wires=0)
120 qml.Hadamard(wires=1)
121 qml.CNOT(wires=[0, 1])
122 return qml.sample()
123
124 Executing this QNode:
125
126 >>> circuit(0.5)
127 array([[0, 1],
128 [0, 0],
129 [1, 1],
130 [0, 0]])
131
132 """
133 if isinstance(op, MeasurementValue):
134 return SampleMP(obs=op)
135
136 if isinstance(op, Sequence):
137 if not all(isinstance(o, MeasurementValue) and len(o.measurements) == 1 for o in op):
138 raise qml.QuantumFunctionError(
139 "Only sequences of single MeasurementValues can be passed with the op argument. "
140 "MeasurementValues manipulated using arithmetic operators cannot be used when "
141 "collecting statistics for a sequence of mid-circuit measurements."
142 )
143
144 return SampleMP(obs=op)
145
146 if op is not None and not op.is_hermitian: # None type is also allowed for op
147 warnings.warn(f"{op.name} might not be hermitian.")
148
149 if wires is not None:
150 if op is not None:
151 raise ValueError(
152 "Cannot specify the wires to sample if an observable is "
153 "provided. The wires to sample will be determined directly from the observable."
154 )
155 wires = Wires(wires)
156
157 return SampleMP(obs=op, wires=wires)
158
159
160 class SampleMP(SampleMeasurement):
161 """Measurement process that returns the samples of a given observable. If no observable is
162 provided then basis state samples are returned directly from the device.
163
164 Please refer to :func:`sample` for detailed documentation.
165
166 Args:
167 obs (Union[.Operator, .MeasurementValue]): The observable that is to be measured
168 as part of the measurement process. Not all measurement processes require observables
169 (for example ``Probability``); this argument is optional.
170 wires (.Wires): The wires the measurement process applies to.
171 This can only be specified if an observable was not provided.
172 eigvals (array): A flat array representing the eigenvalues of the measurement.
173 This can only be specified if an observable was not provided.
174 id (str): custom label given to a measurement instance, can be useful for some applications
175 where the instance has to be identified
176 """
177
178 @property
179 def return_type(self):
180 return Sample
181
182 @property
183 @functools.lru_cache()
184 def numeric_type(self):
185 # Note: we only assume an integer numeric type if the observable is a
186 # built-in observable with integer eigenvalues or a tensor product thereof
187 if self.obs is None:
188 # Computational basis samples
189 return int
190 int_eigval_obs = {qml.X, qml.Y, qml.Z, qml.Hadamard, qml.Identity}
191 tensor_terms = self.obs.obs if hasattr(self.obs, "obs") else [self.obs]
192 every_term_standard = all(o.__class__ in int_eigval_obs for o in tensor_terms)
193 return int if every_term_standard else float
194
195 def shape(self, device, shots):
196 if not shots:
197 raise MeasurementShapeError(
198 "Shots are required to obtain the shape of the measurement "
199 f"{self.__class__.__name__}."
200 )
201 len_wires = len(self.wires) if len(self.wires) > 0 else len(device.wires)
202
203 def _single_int_shape(shot_val, num_wires):
204 # singleton dimensions, whether in shot val or num_wires are squeezed away
205 inner_shape = []
206 if shot_val != 1:
207 inner_shape.append(shot_val)
208 if num_wires != 1:
209 inner_shape.append(num_wires)
210 return tuple(inner_shape)
211
212 if not shots.has_partitioned_shots:
213 return _single_int_shape(shots.total_shots, len_wires)
214
215 shape = []
216 for s in shots.shot_vector:
217 for _ in range(s.copies):
218 shape.append(_single_int_shape(s.shots, len_wires))
219
220 return tuple(shape)
221
222 def process_samples(
223 self,
224 samples: Sequence[complex],
225 wire_order: Wires,
226 shot_range: Tuple[int] = None,
227 bin_size: int = None,
228 ):
229 wire_map = dict(zip(wire_order, range(len(wire_order))))
230 mapped_wires = [wire_map[w] for w in self.wires]
231 name = self.obs.name if self.obs is not None else None
232 # Select the samples from samples that correspond to ``shot_range`` if provided
233 if shot_range is not None:
234 # Indexing corresponds to: (potential broadcasting, shots, wires). Note that the last
235 # colon (:) is required because shots is the second-to-last axis and the
236 # Ellipsis (...) otherwise would take up broadcasting and shots axes.
237 samples = samples[..., slice(*shot_range), :]
238
239 if mapped_wires:
240 # if wires are provided, then we only return samples from those wires
241 samples = samples[..., mapped_wires]
242
243 num_wires = samples.shape[-1] # wires is the last dimension
244
245 # If we're sampling wires or a list of mid-circuit measurements
246 if self.obs is None and not isinstance(self.mv, MeasurementValue):
247 # if no observable was provided then return the raw samples
248 return samples if bin_size is None else samples.T.reshape(num_wires, bin_size, -1)
249
250 # If we're sampling observables
251 if str(name) in {"PauliX", "PauliY", "PauliZ", "Hadamard"}:
252 # Process samples for observables with eigenvalues {1, -1}
253 samples = 1 - 2 * qml.math.squeeze(samples, axis=-1)
254 else:
255 # Replace the basis state in the computational basis with the correct eigenvalue.
256 # Extract only the columns of the basis samples required based on ``wires``.
257 powers_of_two = 2 ** qml.math.arange(num_wires)[::-1]
258 indices = samples @ powers_of_two
259 indices = qml.math.array(indices) # Add np.array here for Jax support.
260 try:
261 # This also covers statistics for mid-circuit measurements manipulated using
262 # arithmetic operators
263 samples = self.eigvals()[indices]
264 except qml.operation.EigvalsUndefinedError as e:
265 # if observable has no info on eigenvalues, we cannot return this measurement
266 raise qml.operation.EigvalsUndefinedError(
267 f"Cannot compute samples of {self.obs.name}."
268 ) from e
269
270 return samples if bin_size is None else samples.reshape((bin_size, -1))
271
```
--- END FILES ---
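For reference, the shape rule implied by the issue can be written as a standalone helper: one value per shot when an observable is measured, one value per wire otherwise, with singleton dimensions squeezed away. This is only an illustrative sketch mirroring `SampleMP.shape` above, not the repository's code.
```python
def sample_shape(total_shots, num_wires, has_observable):
    # One eigenvalue per shot for an observable; one value per wire otherwise.
    num_values = 1 if has_observable else num_wires
    shape = []
    if total_shots != 1:
        shape.append(total_shots)
    if num_values != 1:
        shape.append(num_values)
    return tuple(shape)


assert sample_shape(100, 2, has_observable=True) == (100,)    # observable sampling
assert sample_shape(100, 2, has_observable=False) == (100, 2)  # basis sampling
```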
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pennylane/measurements/sample.py b/pennylane/measurements/sample.py
--- a/pennylane/measurements/sample.py
+++ b/pennylane/measurements/sample.py
@@ -198,24 +198,28 @@
"Shots are required to obtain the shape of the measurement "
f"{self.__class__.__name__}."
)
- len_wires = len(self.wires) if len(self.wires) > 0 else len(device.wires)
+ if self.obs:
+ num_values_per_shot = 1 # one single eigenvalue
+ else:
+ # one value per wire
+ num_values_per_shot = len(self.wires) if len(self.wires) > 0 else len(device.wires)
- def _single_int_shape(shot_val, num_wires):
+ def _single_int_shape(shot_val, num_values):
# singleton dimensions, whether in shot val or num_wires are squeezed away
inner_shape = []
if shot_val != 1:
inner_shape.append(shot_val)
- if num_wires != 1:
- inner_shape.append(num_wires)
+ if num_values != 1:
+ inner_shape.append(num_values)
return tuple(inner_shape)
if not shots.has_partitioned_shots:
- return _single_int_shape(shots.total_shots, len_wires)
+ return _single_int_shape(shots.total_shots, num_values_per_shot)
shape = []
for s in shots.shot_vector:
for _ in range(s.copies):
- shape.append(_single_int_shape(s.shots, len_wires))
+ shape.append(_single_int_shape(s.shots, num_values_per_shot))
return tuple(shape)
| {"golden_diff": "diff --git a/pennylane/measurements/sample.py b/pennylane/measurements/sample.py\n--- a/pennylane/measurements/sample.py\n+++ b/pennylane/measurements/sample.py\n@@ -198,24 +198,28 @@\n \"Shots are required to obtain the shape of the measurement \"\n f\"{self.__class__.__name__}.\"\n )\n- len_wires = len(self.wires) if len(self.wires) > 0 else len(device.wires)\n+ if self.obs:\n+ num_values_per_shot = 1 # one single eigenvalue\n+ else:\n+ # one value per wire\n+ num_values_per_shot = len(self.wires) if len(self.wires) > 0 else len(device.wires)\n \n- def _single_int_shape(shot_val, num_wires):\n+ def _single_int_shape(shot_val, num_values):\n # singleton dimensions, whether in shot val or num_wires are squeezed away\n inner_shape = []\n if shot_val != 1:\n inner_shape.append(shot_val)\n- if num_wires != 1:\n- inner_shape.append(num_wires)\n+ if num_values != 1:\n+ inner_shape.append(num_values)\n return tuple(inner_shape)\n \n if not shots.has_partitioned_shots:\n- return _single_int_shape(shots.total_shots, len_wires)\n+ return _single_int_shape(shots.total_shots, num_values_per_shot)\n \n shape = []\n for s in shots.shot_vector:\n for _ in range(s.copies):\n- shape.append(_single_int_shape(s.shots, len_wires))\n+ shape.append(_single_int_shape(s.shots, num_values_per_shot))\n \n return tuple(shape)\n", "issue": "[BUG] Sampling an observable with more than one wire is not compatible with JAX-JIT\n### Expected behavior\n\nThat I can JIT a circuit that returns `qml.sample(obs)`, where `obs` is some observable with more than one wire.\n\n### Actual behavior\n\nA shape mismatch causes the circuit to fail if executing with JAX-JIT\n\n### Additional information\n\n_No response_\n\n### Source code\n\n```shell\nimport jax\r\n\r\njax.config.update(\"jax_enable_x64\", True)\r\n\r\ndev = qml.device(\"default.qubit\", wires=2, shots=100)\r\n\r\[email protected](dev, interface=\"jax\")\r\ndef circuit(x):\r\n qml.RX(x, wires=0)\r\n return qml.sample(qml.PauliX(0) @ qml.PauliY(1))\r\n\r\nresults = jax.jit(circuit)(jax.numpy.array(0.123, dtype=jax.numpy.float64))\n```\n\n\n### Tracebacks\n\n```shell\n---------------------------------------------------------------------------\r\nXlaRuntimeError Traceback (most recent call last)\r\nCell In[131], line 12\r\n 9 qml.RX(x, wires=0)\r\n 10 return qml.sample(qml.PauliX(0) @ qml.PauliY(1))\r\n---> 12 results = jax.jit(circuit)(jax.numpy.array(0.123, dtype=jax.numpy.float64))\r\n\r\n [... 
skipping hidden 10 frame]\r\n\r\nFile /opt/homebrew/Caskroom/miniforge/base/envs/pennylane/lib/python3.10/site-packages/jax/_src/interpreters/pxla.py:1229, in ExecuteReplicated.__call__(self, *args)\r\n 1224 self._handle_token_bufs(\r\n 1225 results.disassemble_prefix_into_single_device_arrays(\r\n 1226 len(self.ordered_effects)),\r\n 1227 results.consume_token())\r\n 1228 else:\r\n-> 1229 results = self.xla_executable.execute_sharded(input_bufs)\r\n 1230 if dispatch.needs_check_special():\r\n 1231 out_arrays = results.disassemble_into_single_device_arrays()\r\n\r\nXlaRuntimeError: INTERNAL: Generated function failed: CpuCallback error: RuntimeError: Incorrect output shape for return value 0: Expected: (100, 2), Actual: (100,)\n```\n\n\n### System information\n\n```shell\npl-dev\n```\n\n\n### Existing GitHub issues\n\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis module contains the qml.sample measurement.\n\"\"\"\nimport functools\nimport warnings\nfrom typing import Sequence, Tuple, Optional, Union\n\nimport pennylane as qml\nfrom pennylane.operation import Operator\nfrom pennylane.wires import Wires\n\nfrom .measurements import MeasurementShapeError, Sample, SampleMeasurement\nfrom .mid_measure import MeasurementValue\n\n\ndef sample(op: Optional[Union[Operator, MeasurementValue]] = None, wires=None) -> \"SampleMP\":\n r\"\"\"Sample from the supplied observable, with the number of shots\n determined from the ``dev.shots`` attribute of the corresponding device,\n returning raw samples. If no observable is provided then basis state samples are returned\n directly from the device.\n\n Note that the output shape of this measurement process depends on the shots\n specified on the device.\n\n Args:\n op (Observable or MeasurementValue): a quantum observable object. To get samples\n for mid-circuit measurements, ``op`` should be a``MeasurementValue``.\n wires (Sequence[int] or int or None): the wires we wish to sample from; ONLY set wires if\n op is ``None``\n\n Returns:\n SampleMP: Measurement process instance\n\n Raises:\n ValueError: Cannot set wires if an observable is provided\n\n The samples are drawn from the eigenvalues :math:`\\{\\lambda_i\\}` of the observable.\n The probability of drawing eigenvalue :math:`\\lambda_i` is given by\n :math:`p(\\lambda_i) = |\\langle \\xi_i | \\psi \\rangle|^2`, where :math:`| \\xi_i \\rangle`\n is the corresponding basis state from the observable's eigenbasis.\n\n .. note::\n\n QNodes that return samples cannot, in general, be differentiated, since the derivative\n with respect to a sample --- a stochastic process --- is ill-defined. An alternative\n approach would be to use single-shot expectation values. For example, instead of this:\n\n .. 
code-block:: python\n\n dev = qml.device(\"default.qubit\", shots=10)\n\n @qml.qnode(dev, diff_method=\"parameter-shift\")\n def circuit(angle):\n qml.RX(angle, wires=0)\n return qml.sample(qml.PauliX(0))\n\n angle = qml.numpy.array(0.1)\n res = qml.jacobian(circuit)(angle)\n\n Consider using :func:`~pennylane.expval` and a sequence of single shots, like this:\n\n .. code-block:: python\n\n dev = qml.device(\"default.qubit\", shots=[(1, 10)])\n\n @qml.qnode(dev, diff_method=\"parameter-shift\")\n def circuit(angle):\n qml.RX(angle, wires=0)\n return qml.expval(qml.PauliX(0))\n\n def cost(angle):\n return qml.math.hstack(circuit(angle))\n\n angle = qml.numpy.array(0.1)\n res = qml.jacobian(cost)(angle)\n\n **Example**\n\n .. code-block:: python3\n\n dev = qml.device(\"default.qubit\", wires=2, shots=4)\n\n @qml.qnode(dev)\n def circuit(x):\n qml.RX(x, wires=0)\n qml.Hadamard(wires=1)\n qml.CNOT(wires=[0, 1])\n return qml.sample(qml.Y(0))\n\n Executing this QNode:\n\n >>> circuit(0.5)\n array([ 1., 1., 1., -1.])\n\n If no observable is provided, then the raw basis state samples obtained\n from device are returned (e.g., for a qubit device, samples from the\n computational device are returned). In this case, ``wires`` can be specified\n so that sample results only include measurement results of the qubits of interest.\n\n .. code-block:: python3\n\n dev = qml.device(\"default.qubit\", wires=2, shots=4)\n\n @qml.qnode(dev)\n def circuit(x):\n qml.RX(x, wires=0)\n qml.Hadamard(wires=1)\n qml.CNOT(wires=[0, 1])\n return qml.sample()\n\n Executing this QNode:\n\n >>> circuit(0.5)\n array([[0, 1],\n [0, 0],\n [1, 1],\n [0, 0]])\n\n \"\"\"\n if isinstance(op, MeasurementValue):\n return SampleMP(obs=op)\n\n if isinstance(op, Sequence):\n if not all(isinstance(o, MeasurementValue) and len(o.measurements) == 1 for o in op):\n raise qml.QuantumFunctionError(\n \"Only sequences of single MeasurementValues can be passed with the op argument. \"\n \"MeasurementValues manipulated using arithmetic operators cannot be used when \"\n \"collecting statistics for a sequence of mid-circuit measurements.\"\n )\n\n return SampleMP(obs=op)\n\n if op is not None and not op.is_hermitian: # None type is also allowed for op\n warnings.warn(f\"{op.name} might not be hermitian.\")\n\n if wires is not None:\n if op is not None:\n raise ValueError(\n \"Cannot specify the wires to sample if an observable is \"\n \"provided. The wires to sample will be determined directly from the observable.\"\n )\n wires = Wires(wires)\n\n return SampleMP(obs=op, wires=wires)\n\n\nclass SampleMP(SampleMeasurement):\n \"\"\"Measurement process that returns the samples of a given observable. If no observable is\n provided then basis state samples are returned directly from the device.\n\n Please refer to :func:`sample` for detailed documentation.\n\n Args:\n obs (Union[.Operator, .MeasurementValue]): The observable that is to be measured\n as part of the measurement process. 
Not all measurement processes require observables\n (for example ``Probability``); this argument is optional.\n wires (.Wires): The wires the measurement process applies to.\n This can only be specified if an observable was not provided.\n eigvals (array): A flat array representing the eigenvalues of the measurement.\n This can only be specified if an observable was not provided.\n id (str): custom label given to a measurement instance, can be useful for some applications\n where the instance has to be identified\n \"\"\"\n\n @property\n def return_type(self):\n return Sample\n\n @property\n @functools.lru_cache()\n def numeric_type(self):\n # Note: we only assume an integer numeric type if the observable is a\n # built-in observable with integer eigenvalues or a tensor product thereof\n if self.obs is None:\n # Computational basis samples\n return int\n int_eigval_obs = {qml.X, qml.Y, qml.Z, qml.Hadamard, qml.Identity}\n tensor_terms = self.obs.obs if hasattr(self.obs, \"obs\") else [self.obs]\n every_term_standard = all(o.__class__ in int_eigval_obs for o in tensor_terms)\n return int if every_term_standard else float\n\n def shape(self, device, shots):\n if not shots:\n raise MeasurementShapeError(\n \"Shots are required to obtain the shape of the measurement \"\n f\"{self.__class__.__name__}.\"\n )\n len_wires = len(self.wires) if len(self.wires) > 0 else len(device.wires)\n\n def _single_int_shape(shot_val, num_wires):\n # singleton dimensions, whether in shot val or num_wires are squeezed away\n inner_shape = []\n if shot_val != 1:\n inner_shape.append(shot_val)\n if num_wires != 1:\n inner_shape.append(num_wires)\n return tuple(inner_shape)\n\n if not shots.has_partitioned_shots:\n return _single_int_shape(shots.total_shots, len_wires)\n\n shape = []\n for s in shots.shot_vector:\n for _ in range(s.copies):\n shape.append(_single_int_shape(s.shots, len_wires))\n\n return tuple(shape)\n\n def process_samples(\n self,\n samples: Sequence[complex],\n wire_order: Wires,\n shot_range: Tuple[int] = None,\n bin_size: int = None,\n ):\n wire_map = dict(zip(wire_order, range(len(wire_order))))\n mapped_wires = [wire_map[w] for w in self.wires]\n name = self.obs.name if self.obs is not None else None\n # Select the samples from samples that correspond to ``shot_range`` if provided\n if shot_range is not None:\n # Indexing corresponds to: (potential broadcasting, shots, wires). Note that the last\n # colon (:) is required because shots is the second-to-last axis and the\n # Ellipsis (...) 
otherwise would take up broadcasting and shots axes.\n samples = samples[..., slice(*shot_range), :]\n\n if mapped_wires:\n # if wires are provided, then we only return samples from those wires\n samples = samples[..., mapped_wires]\n\n num_wires = samples.shape[-1] # wires is the last dimension\n\n # If we're sampling wires or a list of mid-circuit measurements\n if self.obs is None and not isinstance(self.mv, MeasurementValue):\n # if no observable was provided then return the raw samples\n return samples if bin_size is None else samples.T.reshape(num_wires, bin_size, -1)\n\n # If we're sampling observables\n if str(name) in {\"PauliX\", \"PauliY\", \"PauliZ\", \"Hadamard\"}:\n # Process samples for observables with eigenvalues {1, -1}\n samples = 1 - 2 * qml.math.squeeze(samples, axis=-1)\n else:\n # Replace the basis state in the computational basis with the correct eigenvalue.\n # Extract only the columns of the basis samples required based on ``wires``.\n powers_of_two = 2 ** qml.math.arange(num_wires)[::-1]\n indices = samples @ powers_of_two\n indices = qml.math.array(indices) # Add np.array here for Jax support.\n try:\n # This also covers statistics for mid-circuit measurements manipulated using\n # arithmetic operators\n samples = self.eigvals()[indices]\n except qml.operation.EigvalsUndefinedError as e:\n # if observable has no info on eigenvalues, we cannot return this measurement\n raise qml.operation.EigvalsUndefinedError(\n f\"Cannot compute samples of {self.obs.name}.\"\n ) from e\n\n return samples if bin_size is None else samples.reshape((bin_size, -1))\n", "path": "pennylane/measurements/sample.py"}], "after_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nThis module contains the qml.sample measurement.\n\"\"\"\nimport functools\nimport warnings\nfrom typing import Sequence, Tuple, Optional, Union\n\nimport pennylane as qml\nfrom pennylane.operation import Operator\nfrom pennylane.wires import Wires\n\nfrom .measurements import MeasurementShapeError, Sample, SampleMeasurement\nfrom .mid_measure import MeasurementValue\n\n\ndef sample(op: Optional[Union[Operator, MeasurementValue]] = None, wires=None) -> \"SampleMP\":\n r\"\"\"Sample from the supplied observable, with the number of shots\n determined from the ``dev.shots`` attribute of the corresponding device,\n returning raw samples. If no observable is provided then basis state samples are returned\n directly from the device.\n\n Note that the output shape of this measurement process depends on the shots\n specified on the device.\n\n Args:\n op (Observable or MeasurementValue): a quantum observable object. 
To get samples\n for mid-circuit measurements, ``op`` should be a``MeasurementValue``.\n wires (Sequence[int] or int or None): the wires we wish to sample from; ONLY set wires if\n op is ``None``\n\n Returns:\n SampleMP: Measurement process instance\n\n Raises:\n ValueError: Cannot set wires if an observable is provided\n\n The samples are drawn from the eigenvalues :math:`\\{\\lambda_i\\}` of the observable.\n The probability of drawing eigenvalue :math:`\\lambda_i` is given by\n :math:`p(\\lambda_i) = |\\langle \\xi_i | \\psi \\rangle|^2`, where :math:`| \\xi_i \\rangle`\n is the corresponding basis state from the observable's eigenbasis.\n\n .. note::\n\n QNodes that return samples cannot, in general, be differentiated, since the derivative\n with respect to a sample --- a stochastic process --- is ill-defined. An alternative\n approach would be to use single-shot expectation values. For example, instead of this:\n\n .. code-block:: python\n\n dev = qml.device(\"default.qubit\", shots=10)\n\n @qml.qnode(dev, diff_method=\"parameter-shift\")\n def circuit(angle):\n qml.RX(angle, wires=0)\n return qml.sample(qml.PauliX(0))\n\n angle = qml.numpy.array(0.1)\n res = qml.jacobian(circuit)(angle)\n\n Consider using :func:`~pennylane.expval` and a sequence of single shots, like this:\n\n .. code-block:: python\n\n dev = qml.device(\"default.qubit\", shots=[(1, 10)])\n\n @qml.qnode(dev, diff_method=\"parameter-shift\")\n def circuit(angle):\n qml.RX(angle, wires=0)\n return qml.expval(qml.PauliX(0))\n\n def cost(angle):\n return qml.math.hstack(circuit(angle))\n\n angle = qml.numpy.array(0.1)\n res = qml.jacobian(cost)(angle)\n\n **Example**\n\n .. code-block:: python3\n\n dev = qml.device(\"default.qubit\", wires=2, shots=4)\n\n @qml.qnode(dev)\n def circuit(x):\n qml.RX(x, wires=0)\n qml.Hadamard(wires=1)\n qml.CNOT(wires=[0, 1])\n return qml.sample(qml.Y(0))\n\n Executing this QNode:\n\n >>> circuit(0.5)\n array([ 1., 1., 1., -1.])\n\n If no observable is provided, then the raw basis state samples obtained\n from device are returned (e.g., for a qubit device, samples from the\n computational device are returned). In this case, ``wires`` can be specified\n so that sample results only include measurement results of the qubits of interest.\n\n .. code-block:: python3\n\n dev = qml.device(\"default.qubit\", wires=2, shots=4)\n\n @qml.qnode(dev)\n def circuit(x):\n qml.RX(x, wires=0)\n qml.Hadamard(wires=1)\n qml.CNOT(wires=[0, 1])\n return qml.sample()\n\n Executing this QNode:\n\n >>> circuit(0.5)\n array([[0, 1],\n [0, 0],\n [1, 1],\n [0, 0]])\n\n \"\"\"\n if isinstance(op, MeasurementValue):\n return SampleMP(obs=op)\n\n if isinstance(op, Sequence):\n if not all(isinstance(o, MeasurementValue) and len(o.measurements) == 1 for o in op):\n raise qml.QuantumFunctionError(\n \"Only sequences of single MeasurementValues can be passed with the op argument. \"\n \"MeasurementValues manipulated using arithmetic operators cannot be used when \"\n \"collecting statistics for a sequence of mid-circuit measurements.\"\n )\n\n return SampleMP(obs=op)\n\n if op is not None and not op.is_hermitian: # None type is also allowed for op\n warnings.warn(f\"{op.name} might not be hermitian.\")\n\n if wires is not None:\n if op is not None:\n raise ValueError(\n \"Cannot specify the wires to sample if an observable is \"\n \"provided. 
The wires to sample will be determined directly from the observable.\"\n )\n wires = Wires(wires)\n\n return SampleMP(obs=op, wires=wires)\n\n\nclass SampleMP(SampleMeasurement):\n \"\"\"Measurement process that returns the samples of a given observable. If no observable is\n provided then basis state samples are returned directly from the device.\n\n Please refer to :func:`sample` for detailed documentation.\n\n Args:\n obs (Union[.Operator, .MeasurementValue]): The observable that is to be measured\n as part of the measurement process. Not all measurement processes require observables\n (for example ``Probability``); this argument is optional.\n wires (.Wires): The wires the measurement process applies to.\n This can only be specified if an observable was not provided.\n eigvals (array): A flat array representing the eigenvalues of the measurement.\n This can only be specified if an observable was not provided.\n id (str): custom label given to a measurement instance, can be useful for some applications\n where the instance has to be identified\n \"\"\"\n\n @property\n def return_type(self):\n return Sample\n\n @property\n @functools.lru_cache()\n def numeric_type(self):\n # Note: we only assume an integer numeric type if the observable is a\n # built-in observable with integer eigenvalues or a tensor product thereof\n if self.obs is None:\n # Computational basis samples\n return int\n int_eigval_obs = {qml.X, qml.Y, qml.Z, qml.Hadamard, qml.Identity}\n tensor_terms = self.obs.obs if hasattr(self.obs, \"obs\") else [self.obs]\n every_term_standard = all(o.__class__ in int_eigval_obs for o in tensor_terms)\n return int if every_term_standard else float\n\n def shape(self, device, shots):\n if not shots:\n raise MeasurementShapeError(\n \"Shots are required to obtain the shape of the measurement \"\n f\"{self.__class__.__name__}.\"\n )\n if self.obs:\n num_values_per_shot = 1 # one single eigenvalue\n else:\n # one value per wire\n num_values_per_shot = len(self.wires) if len(self.wires) > 0 else len(device.wires)\n\n def _single_int_shape(shot_val, num_values):\n # singleton dimensions, whether in shot val or num_wires are squeezed away\n inner_shape = []\n if shot_val != 1:\n inner_shape.append(shot_val)\n if num_values != 1:\n inner_shape.append(num_values)\n return tuple(inner_shape)\n\n if not shots.has_partitioned_shots:\n return _single_int_shape(shots.total_shots, num_values_per_shot)\n\n shape = []\n for s in shots.shot_vector:\n for _ in range(s.copies):\n shape.append(_single_int_shape(s.shots, num_values_per_shot))\n\n return tuple(shape)\n\n def process_samples(\n self,\n samples: Sequence[complex],\n wire_order: Wires,\n shot_range: Tuple[int] = None,\n bin_size: int = None,\n ):\n wire_map = dict(zip(wire_order, range(len(wire_order))))\n mapped_wires = [wire_map[w] for w in self.wires]\n name = self.obs.name if self.obs is not None else None\n # Select the samples from samples that correspond to ``shot_range`` if provided\n if shot_range is not None:\n # Indexing corresponds to: (potential broadcasting, shots, wires). Note that the last\n # colon (:) is required because shots is the second-to-last axis and the\n # Ellipsis (...) 
otherwise would take up broadcasting and shots axes.\n samples = samples[..., slice(*shot_range), :]\n\n if mapped_wires:\n # if wires are provided, then we only return samples from those wires\n samples = samples[..., mapped_wires]\n\n num_wires = samples.shape[-1] # wires is the last dimension\n\n # If we're sampling wires or a list of mid-circuit measurements\n if self.obs is None and not isinstance(self.mv, MeasurementValue):\n # if no observable was provided then return the raw samples\n return samples if bin_size is None else samples.T.reshape(num_wires, bin_size, -1)\n\n # If we're sampling observables\n if str(name) in {\"PauliX\", \"PauliY\", \"PauliZ\", \"Hadamard\"}:\n # Process samples for observables with eigenvalues {1, -1}\n samples = 1 - 2 * qml.math.squeeze(samples, axis=-1)\n else:\n # Replace the basis state in the computational basis with the correct eigenvalue.\n # Extract only the columns of the basis samples required based on ``wires``.\n powers_of_two = 2 ** qml.math.arange(num_wires)[::-1]\n indices = samples @ powers_of_two\n indices = qml.math.array(indices) # Add np.array here for Jax support.\n try:\n # This also covers statistics for mid-circuit measurements manipulated using\n # arithmetic operators\n samples = self.eigvals()[indices]\n except qml.operation.EigvalsUndefinedError as e:\n # if observable has no info on eigenvalues, we cannot return this measurement\n raise qml.operation.EigvalsUndefinedError(\n f\"Cannot compute samples of {self.obs.name}.\"\n ) from e\n\n return samples if bin_size is None else samples.reshape((bin_size, -1))\n", "path": "pennylane/measurements/sample.py"}]} | 4,031 | 396 |
gh_patches_debug_11675 | rasdani/github-patches | git_diff | pallets__werkzeug-1620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot import `secure_filename`
```python
In [1]: import werkzeug
In [2]: werkzeug.secure_filename
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-1002393a11fd> in <module>
----> 1 werkzeug.secure_filename
~/pypath/werkzeug/__init__.py in __getattr__(self, name)
186 module = __import__(object_origins[name], None, None, [name])
187 for extra_name in all_by_module[module.__name__]:
--> 188 setattr(self, extra_name, getattr(module, extra_name))
189 return getattr(module, name)
190 elif name in attribute_modules:
AttributeError: module 'werkzeug.utils' has no attribute 'dump_cookie'
```
My checkout is up-to-date as of today, 3454460b6231f9b690144f7706a246f78b485d9b
It looks like "dump_cookie" got moved to ".http" but the mapping in the package's `__init__.py` wasn't updated? Not sure.
--- END ISSUE ---
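Because the lazy-loading `__getattr__` copies every name mapped to `werkzeug.utils` in one loop, the stale `dump_cookie` entry breaks access to `secure_filename` as well. A workaround on the user side (an assumption about intended usage, not a library change) is to import directly from the implementing module, which bypasses the lazy loader:
```python
# Import from the module that actually defines the helper instead of the
# lazily populated top-level package.
from werkzeug.utils import secure_filename

print(secure_filename("My cool movie.mov"))  # My_cool_movie.mov
```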
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/werkzeug/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 werkzeug
4 ~~~~~~~~
5
6 Werkzeug is the Swiss Army knife of Python web development.
7
8 It provides useful classes and functions for any WSGI application to make
9 the life of a python web developer much easier. All of the provided
10 classes are independent from each other so you can mix it with any other
11 library.
12
13
14 :copyright: 2007 Pallets
15 :license: BSD-3-Clause
16 """
17 import sys
18 from types import ModuleType
19
20 __version__ = "1.0.0.dev0"
21
22 # This import magic raises concerns quite often which is why the implementation
23 # and motivation is explained here in detail now.
24 #
25 # The majority of the functions and classes provided by Werkzeug work on the
26 # HTTP and WSGI layer. There is no useful grouping for those which is why
27 # they are all importable from "werkzeug" instead of the modules where they are
28 # implemented. The downside of that is, that now everything would be loaded at
29 # once, even if unused.
30 #
31 # The implementation of a lazy-loading module in this file replaces the
32 # werkzeug package when imported from within. Attribute access to the werkzeug
33 # module will then lazily import from the modules that implement the objects.
34
35 # import mapping to objects in other modules
36 all_by_module = {
37 "werkzeug.debug": ["DebuggedApplication"],
38 "werkzeug.local": [
39 "Local",
40 "LocalManager",
41 "LocalProxy",
42 "LocalStack",
43 "release_local",
44 ],
45 "werkzeug.serving": ["run_simple"],
46 "werkzeug.test": ["Client", "EnvironBuilder", "create_environ", "run_wsgi_app"],
47 "werkzeug.testapp": ["test_app"],
48 "werkzeug.exceptions": ["abort", "Aborter"],
49 "werkzeug.urls": [
50 "url_decode",
51 "url_encode",
52 "url_quote",
53 "url_quote_plus",
54 "url_unquote",
55 "url_unquote_plus",
56 "url_fix",
57 "Href",
58 "iri_to_uri",
59 "uri_to_iri",
60 ],
61 "werkzeug.formparser": ["parse_form_data"],
62 "werkzeug.utils": [
63 "escape",
64 "environ_property",
65 "append_slash_redirect",
66 "redirect",
67 "cached_property",
68 "import_string",
69 "dump_cookie",
70 "parse_cookie",
71 "unescape",
72 "format_string",
73 "find_modules",
74 "header_property",
75 "html",
76 "xhtml",
77 "HTMLBuilder",
78 "validate_arguments",
79 "ArgumentValidationError",
80 "bind_arguments",
81 "secure_filename",
82 ],
83 "werkzeug.wsgi": [
84 "get_current_url",
85 "get_host",
86 "pop_path_info",
87 "peek_path_info",
88 "ClosingIterator",
89 "FileWrapper",
90 "make_line_iter",
91 "LimitedStream",
92 "responder",
93 "wrap_file",
94 "extract_path_info",
95 ],
96 "werkzeug.datastructures": [
97 "MultiDict",
98 "CombinedMultiDict",
99 "Headers",
100 "EnvironHeaders",
101 "ImmutableList",
102 "ImmutableDict",
103 "ImmutableMultiDict",
104 "TypeConversionDict",
105 "ImmutableTypeConversionDict",
106 "Accept",
107 "MIMEAccept",
108 "CharsetAccept",
109 "LanguageAccept",
110 "RequestCacheControl",
111 "ResponseCacheControl",
112 "ETags",
113 "HeaderSet",
114 "WWWAuthenticate",
115 "Authorization",
116 "FileMultiDict",
117 "CallbackDict",
118 "FileStorage",
119 "OrderedMultiDict",
120 "ImmutableOrderedMultiDict",
121 ],
122 "werkzeug.useragents": ["UserAgent"],
123 "werkzeug.http": [
124 "parse_etags",
125 "parse_date",
126 "http_date",
127 "cookie_date",
128 "parse_cache_control_header",
129 "is_resource_modified",
130 "parse_accept_header",
131 "parse_set_header",
132 "quote_etag",
133 "unquote_etag",
134 "generate_etag",
135 "dump_header",
136 "parse_list_header",
137 "parse_dict_header",
138 "parse_authorization_header",
139 "parse_www_authenticate_header",
140 "remove_entity_headers",
141 "is_entity_header",
142 "remove_hop_by_hop_headers",
143 "parse_options_header",
144 "dump_options_header",
145 "is_hop_by_hop_header",
146 "unquote_header_value",
147 "quote_header_value",
148 "HTTP_STATUS_CODES",
149 ],
150 "werkzeug.wrappers": [
151 "BaseResponse",
152 "BaseRequest",
153 "Request",
154 "Response",
155 "AcceptMixin",
156 "ETagRequestMixin",
157 "ETagResponseMixin",
158 "ResponseStreamMixin",
159 "CommonResponseDescriptorsMixin",
160 "UserAgentMixin",
161 "AuthorizationMixin",
162 "WWWAuthenticateMixin",
163 "CommonRequestDescriptorsMixin",
164 ],
165 "werkzeug.middleware.dispatcher": ["DispatcherMiddleware"],
166 "werkzeug.middleware.shared_data": ["SharedDataMiddleware"],
167 "werkzeug.security": ["generate_password_hash", "check_password_hash"],
168 # the undocumented easteregg ;-)
169 "werkzeug._internal": ["_easteregg"],
170 }
171
172 # modules that should be imported when accessed as attributes of werkzeug
173 attribute_modules = frozenset(["exceptions", "routing"])
174
175 object_origins = {}
176 for module, items in all_by_module.items():
177 for item in items:
178 object_origins[item] = module
179
180
181 class module(ModuleType):
182 """Automatically import objects from the modules."""
183
184 def __getattr__(self, name):
185 if name in object_origins:
186 module = __import__(object_origins[name], None, None, [name])
187 for extra_name in all_by_module[module.__name__]:
188 setattr(self, extra_name, getattr(module, extra_name))
189 return getattr(module, name)
190 elif name in attribute_modules:
191 __import__("werkzeug." + name)
192 return ModuleType.__getattribute__(self, name)
193
194 def __dir__(self):
195 """Just show what we want to show."""
196 result = list(new_module.__all__)
197 result.extend(
198 (
199 "__file__",
200 "__doc__",
201 "__all__",
202 "__docformat__",
203 "__name__",
204 "__path__",
205 "__package__",
206 "__version__",
207 )
208 )
209 return result
210
211
212 # keep a reference to this module so that it's not garbage collected
213 old_module = sys.modules["werkzeug"]
214
215
216 # setup the new module and patch it into the dict of loaded modules
217 new_module = sys.modules["werkzeug"] = module("werkzeug")
218 new_module.__dict__.update(
219 {
220 "__file__": __file__,
221 "__package__": "werkzeug",
222 "__path__": __path__,
223 "__doc__": __doc__,
224 "__version__": __version__,
225 "__all__": tuple(object_origins) + tuple(attribute_modules),
226 "__docformat__": "restructuredtext en",
227 }
228 )
229
230
231 # Due to bootstrapping issues we need to import exceptions here.
232 # Don't ask :-(
233 __import__("werkzeug.exceptions")
234
```
--- END FILES ---
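The failure mode of the lazy loader above can be reproduced in isolation: the first attribute access imports the target module and then eagerly copies every name listed for it, so a single stale entry such as `dump_cookie` breaks unrelated names like `secure_filename`. The snippet below is a standalone sketch with hypothetical stand-ins, not werkzeug code.
```python
import types

# Stand-in for werkzeug.utils: it no longer defines dump_cookie.
fake_utils = types.SimpleNamespace(secure_filename=lambda s: s, escape=lambda s: s)
names_for_module = ["secure_filename", "escape", "dump_cookie"]  # stale entry

target = types.SimpleNamespace()
try:
    for name in names_for_module:
        setattr(target, name, getattr(fake_utils, name))
except AttributeError as exc:
    print(exc)  # ... object has no attribute 'dump_cookie'
```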
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/werkzeug/__init__.py b/src/werkzeug/__init__.py
--- a/src/werkzeug/__init__.py
+++ b/src/werkzeug/__init__.py
@@ -66,8 +66,6 @@
"redirect",
"cached_property",
"import_string",
- "dump_cookie",
- "parse_cookie",
"unescape",
"format_string",
"find_modules",
@@ -146,6 +144,8 @@
"unquote_header_value",
"quote_header_value",
"HTTP_STATUS_CODES",
+ "dump_cookie",
+ "parse_cookie",
],
"werkzeug.wrappers": [
"BaseResponse",
| {"golden_diff": "diff --git a/src/werkzeug/__init__.py b/src/werkzeug/__init__.py\n--- a/src/werkzeug/__init__.py\n+++ b/src/werkzeug/__init__.py\n@@ -66,8 +66,6 @@\n \"redirect\",\n \"cached_property\",\n \"import_string\",\n- \"dump_cookie\",\n- \"parse_cookie\",\n \"unescape\",\n \"format_string\",\n \"find_modules\",\n@@ -146,6 +144,8 @@\n \"unquote_header_value\",\n \"quote_header_value\",\n \"HTTP_STATUS_CODES\",\n+ \"dump_cookie\",\n+ \"parse_cookie\",\n ],\n \"werkzeug.wrappers\": [\n \"BaseResponse\",\n", "issue": "Cannot import `secure_filename`\n```python\r\nIn [1]: import werkzeug \r\n\r\nIn [2]: werkzeug.secure_filename \r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-2-1002393a11fd> in <module>\r\n----> 1 werkzeug.secure_filename\r\n\r\n~/pypath/werkzeug/__init__.py in __getattr__(self, name)\r\n 186 module = __import__(object_origins[name], None, None, [name])\r\n 187 for extra_name in all_by_module[module.__name__]:\r\n--> 188 setattr(self, extra_name, getattr(module, extra_name))\r\n 189 return getattr(module, name)\r\n 190 elif name in attribute_modules:\r\n\r\nAttributeError: module 'werkzeug.utils' has no attribute 'dump_cookie'\r\n```\r\n\r\nMy checkout is up-to-date as of today, 3454460b6231f9b690144f7706a246f78b485d9b\r\n\r\nIt looks like \"dump_cookie\" got moved to \".http\" but wasn't updated in `__init__.py` of the module? Not sure.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug\n ~~~~~~~~\n\n Werkzeug is the Swiss Army knife of Python web development.\n\n It provides useful classes and functions for any WSGI application to make\n the life of a python web developer much easier. All of the provided\n classes are independent from each other so you can mix it with any other\n library.\n\n\n :copyright: 2007 Pallets\n :license: BSD-3-Clause\n\"\"\"\nimport sys\nfrom types import ModuleType\n\n__version__ = \"1.0.0.dev0\"\n\n# This import magic raises concerns quite often which is why the implementation\n# and motivation is explained here in detail now.\n#\n# The majority of the functions and classes provided by Werkzeug work on the\n# HTTP and WSGI layer. There is no useful grouping for those which is why\n# they are all importable from \"werkzeug\" instead of the modules where they are\n# implemented. The downside of that is, that now everything would be loaded at\n# once, even if unused.\n#\n# The implementation of a lazy-loading module in this file replaces the\n# werkzeug package when imported from within. 
Attribute access to the werkzeug\n# module will then lazily import from the modules that implement the objects.\n\n# import mapping to objects in other modules\nall_by_module = {\n \"werkzeug.debug\": [\"DebuggedApplication\"],\n \"werkzeug.local\": [\n \"Local\",\n \"LocalManager\",\n \"LocalProxy\",\n \"LocalStack\",\n \"release_local\",\n ],\n \"werkzeug.serving\": [\"run_simple\"],\n \"werkzeug.test\": [\"Client\", \"EnvironBuilder\", \"create_environ\", \"run_wsgi_app\"],\n \"werkzeug.testapp\": [\"test_app\"],\n \"werkzeug.exceptions\": [\"abort\", \"Aborter\"],\n \"werkzeug.urls\": [\n \"url_decode\",\n \"url_encode\",\n \"url_quote\",\n \"url_quote_plus\",\n \"url_unquote\",\n \"url_unquote_plus\",\n \"url_fix\",\n \"Href\",\n \"iri_to_uri\",\n \"uri_to_iri\",\n ],\n \"werkzeug.formparser\": [\"parse_form_data\"],\n \"werkzeug.utils\": [\n \"escape\",\n \"environ_property\",\n \"append_slash_redirect\",\n \"redirect\",\n \"cached_property\",\n \"import_string\",\n \"dump_cookie\",\n \"parse_cookie\",\n \"unescape\",\n \"format_string\",\n \"find_modules\",\n \"header_property\",\n \"html\",\n \"xhtml\",\n \"HTMLBuilder\",\n \"validate_arguments\",\n \"ArgumentValidationError\",\n \"bind_arguments\",\n \"secure_filename\",\n ],\n \"werkzeug.wsgi\": [\n \"get_current_url\",\n \"get_host\",\n \"pop_path_info\",\n \"peek_path_info\",\n \"ClosingIterator\",\n \"FileWrapper\",\n \"make_line_iter\",\n \"LimitedStream\",\n \"responder\",\n \"wrap_file\",\n \"extract_path_info\",\n ],\n \"werkzeug.datastructures\": [\n \"MultiDict\",\n \"CombinedMultiDict\",\n \"Headers\",\n \"EnvironHeaders\",\n \"ImmutableList\",\n \"ImmutableDict\",\n \"ImmutableMultiDict\",\n \"TypeConversionDict\",\n \"ImmutableTypeConversionDict\",\n \"Accept\",\n \"MIMEAccept\",\n \"CharsetAccept\",\n \"LanguageAccept\",\n \"RequestCacheControl\",\n \"ResponseCacheControl\",\n \"ETags\",\n \"HeaderSet\",\n \"WWWAuthenticate\",\n \"Authorization\",\n \"FileMultiDict\",\n \"CallbackDict\",\n \"FileStorage\",\n \"OrderedMultiDict\",\n \"ImmutableOrderedMultiDict\",\n ],\n \"werkzeug.useragents\": [\"UserAgent\"],\n \"werkzeug.http\": [\n \"parse_etags\",\n \"parse_date\",\n \"http_date\",\n \"cookie_date\",\n \"parse_cache_control_header\",\n \"is_resource_modified\",\n \"parse_accept_header\",\n \"parse_set_header\",\n \"quote_etag\",\n \"unquote_etag\",\n \"generate_etag\",\n \"dump_header\",\n \"parse_list_header\",\n \"parse_dict_header\",\n \"parse_authorization_header\",\n \"parse_www_authenticate_header\",\n \"remove_entity_headers\",\n \"is_entity_header\",\n \"remove_hop_by_hop_headers\",\n \"parse_options_header\",\n \"dump_options_header\",\n \"is_hop_by_hop_header\",\n \"unquote_header_value\",\n \"quote_header_value\",\n \"HTTP_STATUS_CODES\",\n ],\n \"werkzeug.wrappers\": [\n \"BaseResponse\",\n \"BaseRequest\",\n \"Request\",\n \"Response\",\n \"AcceptMixin\",\n \"ETagRequestMixin\",\n \"ETagResponseMixin\",\n \"ResponseStreamMixin\",\n \"CommonResponseDescriptorsMixin\",\n \"UserAgentMixin\",\n \"AuthorizationMixin\",\n \"WWWAuthenticateMixin\",\n \"CommonRequestDescriptorsMixin\",\n ],\n \"werkzeug.middleware.dispatcher\": [\"DispatcherMiddleware\"],\n \"werkzeug.middleware.shared_data\": [\"SharedDataMiddleware\"],\n \"werkzeug.security\": [\"generate_password_hash\", \"check_password_hash\"],\n # the undocumented easteregg ;-)\n \"werkzeug._internal\": [\"_easteregg\"],\n}\n\n# modules that should be imported when accessed as attributes of werkzeug\nattribute_modules = frozenset([\"exceptions\", 
\"routing\"])\n\nobject_origins = {}\nfor module, items in all_by_module.items():\n for item in items:\n object_origins[item] = module\n\n\nclass module(ModuleType):\n \"\"\"Automatically import objects from the modules.\"\"\"\n\n def __getattr__(self, name):\n if name in object_origins:\n module = __import__(object_origins[name], None, None, [name])\n for extra_name in all_by_module[module.__name__]:\n setattr(self, extra_name, getattr(module, extra_name))\n return getattr(module, name)\n elif name in attribute_modules:\n __import__(\"werkzeug.\" + name)\n return ModuleType.__getattribute__(self, name)\n\n def __dir__(self):\n \"\"\"Just show what we want to show.\"\"\"\n result = list(new_module.__all__)\n result.extend(\n (\n \"__file__\",\n \"__doc__\",\n \"__all__\",\n \"__docformat__\",\n \"__name__\",\n \"__path__\",\n \"__package__\",\n \"__version__\",\n )\n )\n return result\n\n\n# keep a reference to this module so that it's not garbage collected\nold_module = sys.modules[\"werkzeug\"]\n\n\n# setup the new module and patch it into the dict of loaded modules\nnew_module = sys.modules[\"werkzeug\"] = module(\"werkzeug\")\nnew_module.__dict__.update(\n {\n \"__file__\": __file__,\n \"__package__\": \"werkzeug\",\n \"__path__\": __path__,\n \"__doc__\": __doc__,\n \"__version__\": __version__,\n \"__all__\": tuple(object_origins) + tuple(attribute_modules),\n \"__docformat__\": \"restructuredtext en\",\n }\n)\n\n\n# Due to bootstrapping issues we need to import exceptions here.\n# Don't ask :-(\n__import__(\"werkzeug.exceptions\")\n", "path": "src/werkzeug/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug\n ~~~~~~~~\n\n Werkzeug is the Swiss Army knife of Python web development.\n\n It provides useful classes and functions for any WSGI application to make\n the life of a python web developer much easier. All of the provided\n classes are independent from each other so you can mix it with any other\n library.\n\n\n :copyright: 2007 Pallets\n :license: BSD-3-Clause\n\"\"\"\nimport sys\nfrom types import ModuleType\n\n__version__ = \"1.0.0.dev0\"\n\n# This import magic raises concerns quite often which is why the implementation\n# and motivation is explained here in detail now.\n#\n# The majority of the functions and classes provided by Werkzeug work on the\n# HTTP and WSGI layer. There is no useful grouping for those which is why\n# they are all importable from \"werkzeug\" instead of the modules where they are\n# implemented. The downside of that is, that now everything would be loaded at\n# once, even if unused.\n#\n# The implementation of a lazy-loading module in this file replaces the\n# werkzeug package when imported from within. 
Attribute access to the werkzeug\n# module will then lazily import from the modules that implement the objects.\n\n# import mapping to objects in other modules\nall_by_module = {\n \"werkzeug.debug\": [\"DebuggedApplication\"],\n \"werkzeug.local\": [\n \"Local\",\n \"LocalManager\",\n \"LocalProxy\",\n \"LocalStack\",\n \"release_local\",\n ],\n \"werkzeug.serving\": [\"run_simple\"],\n \"werkzeug.test\": [\"Client\", \"EnvironBuilder\", \"create_environ\", \"run_wsgi_app\"],\n \"werkzeug.testapp\": [\"test_app\"],\n \"werkzeug.exceptions\": [\"abort\", \"Aborter\"],\n \"werkzeug.urls\": [\n \"url_decode\",\n \"url_encode\",\n \"url_quote\",\n \"url_quote_plus\",\n \"url_unquote\",\n \"url_unquote_plus\",\n \"url_fix\",\n \"Href\",\n \"iri_to_uri\",\n \"uri_to_iri\",\n ],\n \"werkzeug.formparser\": [\"parse_form_data\"],\n \"werkzeug.utils\": [\n \"escape\",\n \"environ_property\",\n \"append_slash_redirect\",\n \"redirect\",\n \"cached_property\",\n \"import_string\",\n \"unescape\",\n \"format_string\",\n \"find_modules\",\n \"header_property\",\n \"html\",\n \"xhtml\",\n \"HTMLBuilder\",\n \"validate_arguments\",\n \"ArgumentValidationError\",\n \"bind_arguments\",\n \"secure_filename\",\n ],\n \"werkzeug.wsgi\": [\n \"get_current_url\",\n \"get_host\",\n \"pop_path_info\",\n \"peek_path_info\",\n \"ClosingIterator\",\n \"FileWrapper\",\n \"make_line_iter\",\n \"LimitedStream\",\n \"responder\",\n \"wrap_file\",\n \"extract_path_info\",\n ],\n \"werkzeug.datastructures\": [\n \"MultiDict\",\n \"CombinedMultiDict\",\n \"Headers\",\n \"EnvironHeaders\",\n \"ImmutableList\",\n \"ImmutableDict\",\n \"ImmutableMultiDict\",\n \"TypeConversionDict\",\n \"ImmutableTypeConversionDict\",\n \"Accept\",\n \"MIMEAccept\",\n \"CharsetAccept\",\n \"LanguageAccept\",\n \"RequestCacheControl\",\n \"ResponseCacheControl\",\n \"ETags\",\n \"HeaderSet\",\n \"WWWAuthenticate\",\n \"Authorization\",\n \"FileMultiDict\",\n \"CallbackDict\",\n \"FileStorage\",\n \"OrderedMultiDict\",\n \"ImmutableOrderedMultiDict\",\n ],\n \"werkzeug.useragents\": [\"UserAgent\"],\n \"werkzeug.http\": [\n \"parse_etags\",\n \"parse_date\",\n \"http_date\",\n \"cookie_date\",\n \"parse_cache_control_header\",\n \"is_resource_modified\",\n \"parse_accept_header\",\n \"parse_set_header\",\n \"quote_etag\",\n \"unquote_etag\",\n \"generate_etag\",\n \"dump_header\",\n \"parse_list_header\",\n \"parse_dict_header\",\n \"parse_authorization_header\",\n \"parse_www_authenticate_header\",\n \"remove_entity_headers\",\n \"is_entity_header\",\n \"remove_hop_by_hop_headers\",\n \"parse_options_header\",\n \"dump_options_header\",\n \"is_hop_by_hop_header\",\n \"unquote_header_value\",\n \"quote_header_value\",\n \"HTTP_STATUS_CODES\",\n \"dump_cookie\",\n \"parse_cookie\",\n ],\n \"werkzeug.wrappers\": [\n \"BaseResponse\",\n \"BaseRequest\",\n \"Request\",\n \"Response\",\n \"AcceptMixin\",\n \"ETagRequestMixin\",\n \"ETagResponseMixin\",\n \"ResponseStreamMixin\",\n \"CommonResponseDescriptorsMixin\",\n \"UserAgentMixin\",\n \"AuthorizationMixin\",\n \"WWWAuthenticateMixin\",\n \"CommonRequestDescriptorsMixin\",\n ],\n \"werkzeug.middleware.dispatcher\": [\"DispatcherMiddleware\"],\n \"werkzeug.middleware.shared_data\": [\"SharedDataMiddleware\"],\n \"werkzeug.security\": [\"generate_password_hash\", \"check_password_hash\"],\n # the undocumented easteregg ;-)\n \"werkzeug._internal\": [\"_easteregg\"],\n}\n\n# modules that should be imported when accessed as attributes of werkzeug\nattribute_modules = frozenset([\"exceptions\", 
\"routing\"])\n\nobject_origins = {}\nfor module, items in all_by_module.items():\n for item in items:\n object_origins[item] = module\n\n\nclass module(ModuleType):\n \"\"\"Automatically import objects from the modules.\"\"\"\n\n def __getattr__(self, name):\n if name in object_origins:\n module = __import__(object_origins[name], None, None, [name])\n for extra_name in all_by_module[module.__name__]:\n setattr(self, extra_name, getattr(module, extra_name))\n return getattr(module, name)\n elif name in attribute_modules:\n __import__(\"werkzeug.\" + name)\n return ModuleType.__getattribute__(self, name)\n\n def __dir__(self):\n \"\"\"Just show what we want to show.\"\"\"\n result = list(new_module.__all__)\n result.extend(\n (\n \"__file__\",\n \"__doc__\",\n \"__all__\",\n \"__docformat__\",\n \"__name__\",\n \"__path__\",\n \"__package__\",\n \"__version__\",\n )\n )\n return result\n\n\n# keep a reference to this module so that it's not garbage collected\nold_module = sys.modules[\"werkzeug\"]\n\n\n# setup the new module and patch it into the dict of loaded modules\nnew_module = sys.modules[\"werkzeug\"] = module(\"werkzeug\")\nnew_module.__dict__.update(\n {\n \"__file__\": __file__,\n \"__package__\": \"werkzeug\",\n \"__path__\": __path__,\n \"__doc__\": __doc__,\n \"__version__\": __version__,\n \"__all__\": tuple(object_origins) + tuple(attribute_modules),\n \"__docformat__\": \"restructuredtext en\",\n }\n)\n\n\n# Due to bootstrapping issues we need to import exceptions here.\n# Don't ask :-(\n__import__(\"werkzeug.exceptions\")\n", "path": "src/werkzeug/__init__.py"}]} | 2,681 | 157 |
gh_patches_debug_10777 | rasdani/github-patches | git_diff | ansible__ansible-36176 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
python3 asa_command cannot use a string pattern
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and devel branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest -->
- Bug Report
##### COMPONENT NAME
<!---
Name of the module, plugin, task or feature
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path
-->
asa_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.5.0b1
config file = /mnt/hgfs/Share/HCS/ansible.cfg
configured module search path = ['/home/ddean1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ddean1/ansible/lib/python3.5/site-packages/ansible
executable location = /home/ddean1/ansible/bin/ansible
python version = 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!---
If using Ansible 2.4 or above, paste the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ANSIBLE_FORCE_COLOR(/mnt/hgfs/Share/HCS/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/mnt/hgfs/Share/HCS/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30s
DEFAULT_GATHERING(/mnt/hgfs/Share/HCS/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/mnt/hgfs/Share/HCS/ansible.cfg) = ['/mnt/hgfs/Share/HCS/hosts']
HOST_KEY_CHECKING(/mnt/hgfs/Share/HCS/ansible.cfg) = False
RETRY_FILES_ENABLED(/mnt/hgfs/Share/HCS/ansible.cfg) = False
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.
-->
Cisco ASA version = 9.2(2)4
##### SUMMARY
<!--- Explain the problem briefly -->
Unable to execute commands using asa_command on a Cisco ASA.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
Any basic playbook using asa_command should reproduce the issue.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "ASA Health Check"
hosts: lab_asa
connection: local
tasks:
- name: "Get Running config"
asa_command:
commands:
- show running-config
context: system
register: output
- debug: msg="{{ output.stdout_lines[0] }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Return `show running-config` as `output` and display it in `stdout` using the debug module
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
command fails and returns `fatal: [hcs_asa_core]: FAILED! => {"failed": true, "msg": "cannot use a string pattern on a bytes-like object"}`
<!--- Paste verbatim command output between quotes below -->
```
(ansible) ddean1@DevBox ~/Share/HCS[master*]$ ansible-playbook ASA_Health_Check.yml -k -vvvv
ansible-playbook 2.5.0b1
config file = /mnt/hgfs/Share/HCS/ansible.cfg
configured module search path = ['/home/ddean1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ddean1/ansible/lib/python3.5/site-packages/ansible
executable location = /home/ddean1/ansible/bin/ansible-playbook
python version = 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]
Using /mnt/hgfs/Share/HCS/ansible.cfg as config file
SSH password:
setting up inventory plugins
Parsed /mnt/hgfs/Share/HCS/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/callback/default.py
PLAYBOOK: ASA_Health_Check.yml **********************************************************************************************************************************************************************************************************
1 plays in ASA_Health_Check.yml
PLAY [ASA Health Check] *****************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Get Runninf config] ***************************************************************************************************************************************************************************************************************
task path: /mnt/hgfs/Share/HCS/ASA_Health_Check.yml:6
<10.204.81.101> using connection plugin network_cli (was local)
<10.204.81.101> starting connection from persistent connection plugin
<10.204.81.101> local domain socket does not exist, starting it
<10.204.81.101> control socket path is /home/ddean1/.ansible/pc/70c4561b1e
<10.204.81.101>
The full traceback is:
Traceback (most recent call last):
File "/home/ddean1/ansible/bin/ansible-connection", line 70, in start
self.connection._connect()
File "/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py", line 315, in _connect
self.receive()
File "/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py", line 410, in receive
if self._find_prompt(window):
File "/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py", line 481, in _find_prompt
if regex.search(response):
TypeError: cannot use a string pattern on a bytes-like object
fatal: [hcs_asa_core]: FAILED! => {
"failed": true,
"msg": "cannot use a string pattern on a bytes-like object"
}
PLAY RECAP ******************************************************************************************************************************************************************************************************************************
hcs_asa_core : ok=0 changed=0 unreachable=0 failed=1
```
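For reference, the failure shown in the traceback is generic Python 3 `re` behaviour rather than anything ASA-specific: a pattern compiled from a `str` cannot be used to search a `bytes` buffer, and `network_cli` hands the terminal plugin raw bytes read from the device (as the call to `regex.search(response)` in `_find_prompt` shows). A minimal sketch (plain `re`, no Ansible imports) reproducing the error and showing the byte-pattern form that does work on byte buffers:

```python
# Minimal reproduction of the reported TypeError, independent of Ansible.
import re

response = b"ciscoasa# "  # the connection plugin delivers bytes, not str

text_pattern = re.compile(r"error:", re.I)
try:
    text_pattern.search(response)  # str pattern vs bytes input
except TypeError as exc:
    print(exc)  # cannot use a string pattern on a bytes-like object

byte_pattern = re.compile(br"error:", re.I)
print(byte_pattern.search(response))  # None -> searching bytes now works
```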
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/plugins/terminal/asa.py`
Content:
```
1 #
2 # (c) 2016 Red Hat Inc.
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 #
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 import re
23 import json
24
25 from ansible.errors import AnsibleConnectionFailure
26 from ansible.module_utils._text import to_text, to_bytes
27 from ansible.plugins.terminal import TerminalBase
28
29
30 class TerminalModule(TerminalBase):
31
32 terminal_stdout_re = [
33 re.compile(r"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
34 re.compile(r"\[\w+\@[\w\-\.]+(?: [^\]])\] ?[>#\$] ?$")
35 ]
36
37 terminal_stderr_re = [
38 re.compile(r"error:", re.I),
39 re.compile(br"Removing.* not allowed, it is being used")
40 ]
41
42 def on_open_shell(self):
43 if self._get_prompt().strip().endswith(b'#'):
44 self.disable_pager()
45
46 def disable_pager(self):
47 cmd = {u'command': u'no terminal pager'}
48 try:
49 self._exec_cli_command(u'no terminal pager')
50 except AnsibleConnectionFailure:
51 raise AnsibleConnectionFailure('unable to disable terminal pager')
52
53 def on_become(self, passwd=None):
54 if self._get_prompt().strip().endswith(b'#'):
55 return
56
57 cmd = {u'command': u'enable'}
58 if passwd:
59 # Note: python-3.5 cannot combine u"" and r"" together. Thus make
60 # an r string and use to_text to ensure it's text on both py2 and py3.
61 cmd[u'prompt'] = to_text(r"[\r\n]?password: $", errors='surrogate_or_strict')
62 cmd[u'answer'] = passwd
63
64 try:
65 self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
66 except AnsibleConnectionFailure:
67 raise AnsibleConnectionFailure('unable to elevate privilege to enable mode')
68
69 self.disable_pager()
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/plugins/terminal/asa.py b/lib/ansible/plugins/terminal/asa.py
--- a/lib/ansible/plugins/terminal/asa.py
+++ b/lib/ansible/plugins/terminal/asa.py
@@ -30,12 +30,12 @@
class TerminalModule(TerminalBase):
terminal_stdout_re = [
- re.compile(r"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
- re.compile(r"\[\w+\@[\w\-\.]+(?: [^\]])\] ?[>#\$] ?$")
+ re.compile(br"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
+ re.compile(br"\[\w+\@[\w\-\.]+(?: [^\]])\] ?[>#\$] ?$")
]
terminal_stderr_re = [
- re.compile(r"error:", re.I),
+ re.compile(br"error:", re.I),
re.compile(br"Removing.* not allowed, it is being used")
]
| {"golden_diff": "diff --git a/lib/ansible/plugins/terminal/asa.py b/lib/ansible/plugins/terminal/asa.py\n--- a/lib/ansible/plugins/terminal/asa.py\n+++ b/lib/ansible/plugins/terminal/asa.py\n@@ -30,12 +30,12 @@\n class TerminalModule(TerminalBase):\n \n terminal_stdout_re = [\n- re.compile(r\"[\\r\\n]?[\\w+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){,3}(?:>|#) ?$\"),\n- re.compile(r\"\\[\\w+\\@[\\w\\-\\.]+(?: [^\\]])\\] ?[>#\\$] ?$\")\n+ re.compile(br\"[\\r\\n]?[\\w+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){,3}(?:>|#) ?$\"),\n+ re.compile(br\"\\[\\w+\\@[\\w\\-\\.]+(?: [^\\]])\\] ?[>#\\$] ?$\")\n ]\n \n terminal_stderr_re = [\n- re.compile(r\"error:\", re.I),\n+ re.compile(br\"error:\", re.I),\n re.compile(br\"Removing.* not allowed, it is being used\")\n ]\n", "issue": "python3 asa_command cannot use a string pattern \n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and devel branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest -->\r\n - Bug Report\r\n \r\n\r\n##### COMPONENT NAME\r\n<!---\r\nName of the module, plugin, task or feature\r\nDo not include extra details here, e.g. \"vyos_command\" not \"the network module vyos_command\" or the full path\r\n-->\r\nasa_command\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.5.0b1\r\n config file = /mnt/hgfs/Share/HCS/ansible.cfg\r\n configured module search path = ['/home/ddean1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/ddean1/ansible/lib/python3.5/site-packages/ansible\r\n executable location = /home/ddean1/ansible/bin/ansible\r\n python version = 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nIf using Ansible 2.4 or above, paste the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n-->\r\nANSIBLE_FORCE_COLOR(/mnt/hgfs/Share/HCS/ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/mnt/hgfs/Share/HCS/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30s\r\nDEFAULT_GATHERING(/mnt/hgfs/Share/HCS/ansible.cfg) = explicit\r\nDEFAULT_HOST_LIST(/mnt/hgfs/Share/HCS/ansible.cfg) = ['/mnt/hgfs/Share/HCS/hosts']\r\nHOST_KEY_CHECKING(/mnt/hgfs/Share/HCS/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/mnt/hgfs/Share/HCS/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. 
if this is a network bug the version of firmware on the network device.\r\n-->\r\nCisco ASA version = 9.2(2)4\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nunable to execute commands using asa_command on cisco asa\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\nany basic playbook using asa_command should reproduce the issue \r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: \"ASA Health Check\"\r\n hosts: lab_asa\r\n connection: local\r\n\r\n tasks:\r\n - name: \"Get Running config\"\r\n asa_command:\r\n commands:\r\n - show running-config \r\n context: system\r\n register: output\r\n\r\n - debug: msg=\"{{ output.stdout_lines[0] }}\"\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nReturn `show running-config` as `output` and display it in `stdout` using the debug module\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\ncommand fails and returns `fatal: [hcs_asa_core]: FAILED! => {\"failed\": true, \"msg\": \"cannot use a string pattern on a bytes-like object\"}`\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n(ansible) ddean1@DevBox ~/Share/HCS[master*]$ ansible-playbook ASA_Health_Check.yml -k -vvvv\r\nansible-playbook 2.5.0b1\r\n config file = /mnt/hgfs/Share/HCS/ansible.cfg\r\n configured module search path = ['/home/ddean1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/ddean1/ansible/lib/python3.5/site-packages/ansible\r\n executable location = /home/ddean1/ansible/bin/ansible-playbook\r\n python version = 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]\r\nUsing /mnt/hgfs/Share/HCS/ansible.cfg as config file\r\nSSH password: \r\nsetting up inventory plugins\r\nParsed /mnt/hgfs/Share/HCS/hosts inventory source with ini plugin\r\nLoading callback plugin default of type stdout, v2.0 from /home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/callback/default.py\r\n\r\nPLAYBOOK: ASA_Health_Check.yml **********************************************************************************************************************************************************************************************************\r\n1 plays in ASA_Health_Check.yml\r\n\r\nPLAY [ASA Health Check] *****************************************************************************************************************************************************************************************************************\r\nMETA: ran handlers\r\n\r\nTASK [Get Runninf config] ***************************************************************************************************************************************************************************************************************\r\ntask path: /mnt/hgfs/Share/HCS/ASA_Health_Check.yml:6\r\n<10.204.81.101> using connection plugin network_cli (was local)\r\n<10.204.81.101> starting connection from persistent connection plugin\r\n<10.204.81.101> local domain socket does not exist, starting it\r\n<10.204.81.101> control socket path is /home/ddean1/.ansible/pc/70c4561b1e\r\n<10.204.81.101> \r\nThe full traceback is:\r\nTraceback (most recent call last):\r\n File 
\"/home/ddean1/ansible/bin/ansible-connection\", line 70, in start\r\n self.connection._connect()\r\n File \"/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py\", line 315, in _connect\r\n self.receive()\r\n File \"/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py\", line 410, in receive\r\n if self._find_prompt(window):\r\n File \"/home/ddean1/ansible/lib/python3.5/site-packages/ansible/plugins/connection/network_cli.py\", line 481, in _find_prompt\r\n if regex.search(response):\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n\r\nfatal: [hcs_asa_core]: FAILED! => {\r\n \"failed\": true,\r\n \"msg\": \"cannot use a string pattern on a bytes-like object\"\r\n}\r\n\r\nPLAY RECAP ******************************************************************************************************************************************************************************************************************************\r\nhcs_asa_core : ok=0 changed=0 unreachable=0 failed=1 \r\n\r\n```\r\n\n", "before_files": [{"content": "#\n# (c) 2016 Red Hat Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport re\nimport json\n\nfrom ansible.errors import AnsibleConnectionFailure\nfrom ansible.module_utils._text import to_text, to_bytes\nfrom ansible.plugins.terminal import TerminalBase\n\n\nclass TerminalModule(TerminalBase):\n\n terminal_stdout_re = [\n re.compile(r\"[\\r\\n]?[\\w+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){,3}(?:>|#) ?$\"),\n re.compile(r\"\\[\\w+\\@[\\w\\-\\.]+(?: [^\\]])\\] ?[>#\\$] ?$\")\n ]\n\n terminal_stderr_re = [\n re.compile(r\"error:\", re.I),\n re.compile(br\"Removing.* not allowed, it is being used\")\n ]\n\n def on_open_shell(self):\n if self._get_prompt().strip().endswith(b'#'):\n self.disable_pager()\n\n def disable_pager(self):\n cmd = {u'command': u'no terminal pager'}\n try:\n self._exec_cli_command(u'no terminal pager')\n except AnsibleConnectionFailure:\n raise AnsibleConnectionFailure('unable to disable terminal pager')\n\n def on_become(self, passwd=None):\n if self._get_prompt().strip().endswith(b'#'):\n return\n\n cmd = {u'command': u'enable'}\n if passwd:\n # Note: python-3.5 cannot combine u\"\" and r\"\" together. 
Thus make\n # an r string and use to_text to ensure it's text on both py2 and py3.\n cmd[u'prompt'] = to_text(r\"[\\r\\n]?password: $\", errors='surrogate_or_strict')\n cmd[u'answer'] = passwd\n\n try:\n self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))\n except AnsibleConnectionFailure:\n raise AnsibleConnectionFailure('unable to elevate privilege to enable mode')\n\n self.disable_pager()\n", "path": "lib/ansible/plugins/terminal/asa.py"}], "after_files": [{"content": "#\n# (c) 2016 Red Hat Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport re\nimport json\n\nfrom ansible.errors import AnsibleConnectionFailure\nfrom ansible.module_utils._text import to_text, to_bytes\nfrom ansible.plugins.terminal import TerminalBase\n\n\nclass TerminalModule(TerminalBase):\n\n terminal_stdout_re = [\n re.compile(br\"[\\r\\n]?[\\w+\\-\\.:\\/\\[\\]]+(?:\\([^\\)]+\\)){,3}(?:>|#) ?$\"),\n re.compile(br\"\\[\\w+\\@[\\w\\-\\.]+(?: [^\\]])\\] ?[>#\\$] ?$\")\n ]\n\n terminal_stderr_re = [\n re.compile(br\"error:\", re.I),\n re.compile(br\"Removing.* not allowed, it is being used\")\n ]\n\n def on_open_shell(self):\n if self._get_prompt().strip().endswith(b'#'):\n self.disable_pager()\n\n def disable_pager(self):\n cmd = {u'command': u'no terminal pager'}\n try:\n self._exec_cli_command(u'no terminal pager')\n except AnsibleConnectionFailure:\n raise AnsibleConnectionFailure('unable to disable terminal pager')\n\n def on_become(self, passwd=None):\n if self._get_prompt().strip().endswith(b'#'):\n return\n\n cmd = {u'command': u'enable'}\n if passwd:\n # Note: python-3.5 cannot combine u\"\" and r\"\" together. Thus make\n # an r string and use to_text to ensure it's text on both py2 and py3.\n cmd[u'prompt'] = to_text(r\"[\\r\\n]?password: $\", errors='surrogate_or_strict')\n cmd[u'answer'] = passwd\n\n try:\n self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))\n except AnsibleConnectionFailure:\n raise AnsibleConnectionFailure('unable to elevate privilege to enable mode')\n\n self.disable_pager()\n", "path": "lib/ansible/plugins/terminal/asa.py"}]} | 2,565 | 255 |
gh_patches_debug_27630 | rasdani/github-patches | git_diff | explosion__spaCy-1961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
importlib module_from_spec error in Python 3.4
I download models like this
python -m spacy download en
python -m spacy download en_core_web_md
The above commands create a link like this:
Linking successful
/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/en_core_web_md
-->
/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/data/en_core_web_md
You can now load the model via spacy.load('en_core_web_md')
But when I actually load the model, it throws the following error:
>>> import spacy
>>> nlp= spacy.load('en_core_web_md')
Traceback (most recent call last):
File "/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py", line 126, in load_model_from_link
cls = import_file(name, path)
File "/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/compat.py", line 118, in import_file
module = importlib.util.module_from_spec(spec)
AttributeError: 'module' object has no attribute 'module_from_spec'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/__init__.py", line 19, in load
return util.load_model(name, **overrides)
File "/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py", line 112, in load_model
return load_model_from_link(name, **overrides)
File "/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py", line 130, in load_model_from_link
"points to a valid package (not just a data directory)." % name)
OSError: Cant' load 'en_core_web_md'. If you're using a shortcut link, make sure it points to a valid package (not just a data directory).
>>>
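For context: `importlib.util.module_from_spec` was only added in Python 3.5, so on Python 3.4 the attribute lookup itself fails and the loader falls back to the misleading `OSError` above. A rough sketch of a version-gated loader that sidesteps the missing API on pre-3.5 interpreters — illustrative only, not spaCy's exact implementation:

```python
# Illustrative sketch: pick a module-loading API based on the interpreter version.
import sys


def import_file(name, loc):
    loc = str(loc)
    if sys.version_info < (3, 5):
        # importlib.util.module_from_spec does not exist before Python 3.5
        import imp
        return imp.load_source(name, loc)
    import importlib.util
    spec = importlib.util.spec_from_file_location(name, loc)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```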
## Your Environment
<!-- Include details of your environment. If you're using spaCy 1.7+, you can also type
`python -m spacy info --markdown` and copy-paste the result here. -->
* **Python version:** 3.4.5
* **Platform:** Linux-4.4.0-101-generic-x86_64-with-debian-jessie-sid
* **Models:** __cache__, en_core_web_md, en, en-1.1.0
* **spaCy version:** 2.0.5
* Operating System: Ubuntu 14.04
* Python Version Used: 3.4.5
* spaCy Version Used: 2.0.5
* Environment Information: Conda Virtual Environment
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spacy/compat.py`
Content:
```
1 # coding: utf8
2 from __future__ import unicode_literals
3
4 import six
5 import ftfy
6 import sys
7 import ujson
8 import itertools
9 import locale
10
11 from thinc.neural.util import copy_array
12
13 try:
14 import cPickle as pickle
15 except ImportError:
16 import pickle
17
18 try:
19 import copy_reg
20 except ImportError:
21 import copyreg as copy_reg
22
23 try:
24 from cupy.cuda.stream import Stream as CudaStream
25 except ImportError:
26 CudaStream = None
27
28 try:
29 import cupy
30 except ImportError:
31 cupy = None
32
33 try:
34 from thinc.neural.optimizers import Optimizer
35 except ImportError:
36 from thinc.neural.optimizers import Adam as Optimizer
37
38 pickle = pickle
39 copy_reg = copy_reg
40 CudaStream = CudaStream
41 cupy = cupy
42 fix_text = ftfy.fix_text
43 copy_array = copy_array
44 izip = getattr(itertools, 'izip', zip)
45
46 is_python2 = six.PY2
47 is_python3 = six.PY3
48 is_windows = sys.platform.startswith('win')
49 is_linux = sys.platform.startswith('linux')
50 is_osx = sys.platform == 'darwin'
51
52
53 if is_python2:
54 import imp
55 bytes_ = str
56 unicode_ = unicode # noqa: F821
57 basestring_ = basestring # noqa: F821
58 input_ = raw_input # noqa: F821
59 json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False).decode('utf8')
60 path2str = lambda path: str(path).decode('utf8')
61
62 elif is_python3:
63 import importlib.util
64 bytes_ = bytes
65 unicode_ = str
66 basestring_ = str
67 input_ = input
68 json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False)
69 path2str = lambda path: str(path)
70
71
72 def b_to_str(b_str):
73 if is_python2:
74 return b_str
75 # important: if no encoding is set, string becomes "b'...'"
76 return str(b_str, encoding='utf8')
77
78
79 def getattr_(obj, name, *default):
80 if is_python3 and isinstance(name, bytes):
81 name = name.decode('utf8')
82 return getattr(obj, name, *default)
83
84
85 def symlink_to(orig, dest):
86 if is_python2 and is_windows:
87 import subprocess
88 subprocess.call(['mklink', '/d', path2str(orig), path2str(dest)], shell=True)
89 else:
90 orig.symlink_to(dest)
91
92
93 def is_config(python2=None, python3=None, windows=None, linux=None, osx=None):
94 return ((python2 is None or python2 == is_python2) and
95 (python3 is None or python3 == is_python3) and
96 (windows is None or windows == is_windows) and
97 (linux is None or linux == is_linux) and
98 (osx is None or osx == is_osx))
99
100
101 def normalize_string_keys(old):
102 """Given a dictionary, make sure keys are unicode strings, not bytes."""
103 new = {}
104 for key, value in old.items():
105 if isinstance(key, bytes_):
106 new[key.decode('utf8')] = value
107 else:
108 new[key] = value
109 return new
110
111
112 def import_file(name, loc):
113 loc = str(loc)
114 if is_python2:
115 return imp.load_source(name, loc)
116 else:
117 spec = importlib.util.spec_from_file_location(name, str(loc))
118 module = importlib.util.module_from_spec(spec)
119 spec.loader.exec_module(module)
120 return module
121
122
123 def locale_escape(string, errors='replace'):
124 '''
125 Mangle non-supported characters, for savages with ascii terminals.
126 '''
127 encoding = locale.getpreferredencoding()
128 string = string.encode(encoding, errors).decode('utf8')
129 return string
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spacy/compat.py b/spacy/compat.py
--- a/spacy/compat.py
+++ b/spacy/compat.py
@@ -43,15 +43,15 @@
copy_array = copy_array
izip = getattr(itertools, 'izip', zip)
-is_python2 = six.PY2
-is_python3 = six.PY3
is_windows = sys.platform.startswith('win')
is_linux = sys.platform.startswith('linux')
is_osx = sys.platform == 'darwin'
+is_python2 = six.PY2
+is_python3 = six.PY3
+is_python_pre_3_5 = is_python2 or (is_python3 and sys.version_info[1]<5)
if is_python2:
- import imp
bytes_ = str
unicode_ = unicode # noqa: F821
basestring_ = basestring # noqa: F821
@@ -60,7 +60,6 @@
path2str = lambda path: str(path).decode('utf8')
elif is_python3:
- import importlib.util
bytes_ = bytes
unicode_ = str
basestring_ = str
@@ -111,9 +110,11 @@
def import_file(name, loc):
loc = str(loc)
- if is_python2:
+ if is_python_pre_3_5:
+ import imp
return imp.load_source(name, loc)
else:
+ import importlib.util
spec = importlib.util.spec_from_file_location(name, str(loc))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
| {"golden_diff": "diff --git a/spacy/compat.py b/spacy/compat.py\n--- a/spacy/compat.py\n+++ b/spacy/compat.py\n@@ -43,15 +43,15 @@\n copy_array = copy_array\n izip = getattr(itertools, 'izip', zip)\n \n-is_python2 = six.PY2\n-is_python3 = six.PY3\n is_windows = sys.platform.startswith('win')\n is_linux = sys.platform.startswith('linux')\n is_osx = sys.platform == 'darwin'\n \n+is_python2 = six.PY2\n+is_python3 = six.PY3\n+is_python_pre_3_5 = is_python2 or (is_python3 and sys.version_info[1]<5)\n \n if is_python2:\n- import imp\n bytes_ = str\n unicode_ = unicode # noqa: F821\n basestring_ = basestring # noqa: F821\n@@ -60,7 +60,6 @@\n path2str = lambda path: str(path).decode('utf8')\n \n elif is_python3:\n- import importlib.util\n bytes_ = bytes\n unicode_ = str\n basestring_ = str\n@@ -111,9 +110,11 @@\n \n def import_file(name, loc):\n loc = str(loc)\n- if is_python2:\n+ if is_python_pre_3_5:\n+ import imp\n return imp.load_source(name, loc)\n else:\n+ import importlib.util\n spec = importlib.util.spec_from_file_location(name, str(loc))\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n", "issue": "importlib module_from_spec error in Python 3.4\nI download models like this\r\npython -m spacy download en\r\npython -m spacy download en_core_web_md\r\n\r\nAbove commands create a link like this \r\n\r\n\r\n\r\n Linking successful\r\n /home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/en_core_web_md\r\n -->\r\n /home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/data/en_core_web_md\r\n\r\n You can now load the model via spacy.load('en_core_web_md')\r\n\r\n\r\n\r\n\r\nBut when I actually load the model ,it throws the following error \r\n\r\n\r\n>>> import spacy\r\n>>> nlp= spacy.load('en_core_web_md')\r\nTraceback (most recent call last):\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 126, in load_model_from_link\r\n cls = import_file(name, path)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/compat.py\", line 118, in import_file\r\n module = importlib.util.module_from_spec(spec)\r\nAttributeError: 'module' object has no attribute 'module_from_spec'\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/__init__.py\", line 19, in load\r\n return util.load_model(name, **overrides)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 112, in load_model\r\n return load_model_from_link(name, **overrides)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 130, in load_model_from_link\r\n \"points to a valid package (not just a data directory).\" % name)\r\nOSError: Cant' load 'en_core_web_md'. If you're using a shortcut link, make sure it points to a valid package (not just a data directory).\r\n>>> \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n## Your Environment\r\n<!-- Include details of your environment. 
If you're using spaCy 1.7+, you can also type \r\n`python -m spacy info --markdown` and copy-paste the result here.-->* **Python version:** 3.4.5\r\n* **Platform:** Linux-4.4.0-101-generic-x86_64-with-debian-jessie-sid\r\n* **Models:** __cache__, en_core_web_md, en, en-1.1.0\r\n* **spaCy version:** 2.0.5\r\n\r\n* Operating System: Ububtu 14.04\r\n* Python Version Used: 3.4.5\r\n* spaCy Version Used: 2.0.5\r\n* Environment Information: Conda Virtual Environment\r\n\nimportlib module_from_spec error in Python 3.4\nI download models like this\r\npython -m spacy download en\r\npython -m spacy download en_core_web_md\r\n\r\nAbove commands create a link like this \r\n\r\n\r\n\r\n Linking successful\r\n /home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/en_core_web_md\r\n -->\r\n /home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/data/en_core_web_md\r\n\r\n You can now load the model via spacy.load('en_core_web_md')\r\n\r\n\r\n\r\n\r\nBut when I actually load the model ,it throws the following error \r\n\r\n\r\n>>> import spacy\r\n>>> nlp= spacy.load('en_core_web_md')\r\nTraceback (most recent call last):\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 126, in load_model_from_link\r\n cls = import_file(name, path)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/compat.py\", line 118, in import_file\r\n module = importlib.util.module_from_spec(spec)\r\nAttributeError: 'module' object has no attribute 'module_from_spec'\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/__init__.py\", line 19, in load\r\n return util.load_model(name, **overrides)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 112, in load_model\r\n return load_model_from_link(name, **overrides)\r\n File \"/home/sameer/anaconda2/envs/py34/lib/python3.4/site-packages/spacy/util.py\", line 130, in load_model_from_link\r\n \"points to a valid package (not just a data directory).\" % name)\r\nOSError: Cant' load 'en_core_web_md'. If you're using a shortcut link, make sure it points to a valid package (not just a data directory).\r\n>>> \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n## Your Environment\r\n<!-- Include details of your environment. 
If you're using spaCy 1.7+, you can also type \r\n`python -m spacy info --markdown` and copy-paste the result here.-->* **Python version:** 3.4.5\r\n* **Platform:** Linux-4.4.0-101-generic-x86_64-with-debian-jessie-sid\r\n* **Models:** __cache__, en_core_web_md, en, en-1.1.0\r\n* **spaCy version:** 2.0.5\r\n\r\n* Operating System: Ububtu 14.04\r\n* Python Version Used: 3.4.5\r\n* spaCy Version Used: 2.0.5\r\n* Environment Information: Conda Virtual Environment\r\n\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport six\nimport ftfy\nimport sys\nimport ujson\nimport itertools\nimport locale\n\nfrom thinc.neural.util import copy_array\n\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle\n\ntry:\n import copy_reg\nexcept ImportError:\n import copyreg as copy_reg\n\ntry:\n from cupy.cuda.stream import Stream as CudaStream\nexcept ImportError:\n CudaStream = None\n\ntry:\n import cupy\nexcept ImportError:\n cupy = None\n\ntry:\n from thinc.neural.optimizers import Optimizer\nexcept ImportError:\n from thinc.neural.optimizers import Adam as Optimizer\n\npickle = pickle\ncopy_reg = copy_reg\nCudaStream = CudaStream\ncupy = cupy\nfix_text = ftfy.fix_text\ncopy_array = copy_array\nizip = getattr(itertools, 'izip', zip)\n\nis_python2 = six.PY2\nis_python3 = six.PY3\nis_windows = sys.platform.startswith('win')\nis_linux = sys.platform.startswith('linux')\nis_osx = sys.platform == 'darwin'\n\n\nif is_python2:\n import imp\n bytes_ = str\n unicode_ = unicode # noqa: F821\n basestring_ = basestring # noqa: F821\n input_ = raw_input # noqa: F821\n json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False).decode('utf8')\n path2str = lambda path: str(path).decode('utf8')\n\nelif is_python3:\n import importlib.util\n bytes_ = bytes\n unicode_ = str\n basestring_ = str\n input_ = input\n json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False)\n path2str = lambda path: str(path)\n\n\ndef b_to_str(b_str):\n if is_python2:\n return b_str\n # important: if no encoding is set, string becomes \"b'...'\"\n return str(b_str, encoding='utf8')\n\n\ndef getattr_(obj, name, *default):\n if is_python3 and isinstance(name, bytes):\n name = name.decode('utf8')\n return getattr(obj, name, *default)\n\n\ndef symlink_to(orig, dest):\n if is_python2 and is_windows:\n import subprocess\n subprocess.call(['mklink', '/d', path2str(orig), path2str(dest)], shell=True)\n else:\n orig.symlink_to(dest)\n\n\ndef is_config(python2=None, python3=None, windows=None, linux=None, osx=None):\n return ((python2 is None or python2 == is_python2) and\n (python3 is None or python3 == is_python3) and\n (windows is None or windows == is_windows) and\n (linux is None or linux == is_linux) and\n (osx is None or osx == is_osx))\n\n\ndef normalize_string_keys(old):\n \"\"\"Given a dictionary, make sure keys are unicode strings, not bytes.\"\"\"\n new = {}\n for key, value in old.items():\n if isinstance(key, bytes_):\n new[key.decode('utf8')] = value\n else:\n new[key] = value\n return new\n\n\ndef import_file(name, loc):\n loc = str(loc)\n if is_python2:\n return imp.load_source(name, loc)\n else:\n spec = importlib.util.spec_from_file_location(name, str(loc))\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n\n\ndef locale_escape(string, errors='replace'):\n '''\n Mangle non-supported characters, for savages with ascii terminals.\n '''\n encoding = locale.getpreferredencoding()\n 
string = string.encode(encoding, errors).decode('utf8')\n return string\n", "path": "spacy/compat.py"}], "after_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nimport six\nimport ftfy\nimport sys\nimport ujson\nimport itertools\nimport locale\n\nfrom thinc.neural.util import copy_array\n\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle\n\ntry:\n import copy_reg\nexcept ImportError:\n import copyreg as copy_reg\n\ntry:\n from cupy.cuda.stream import Stream as CudaStream\nexcept ImportError:\n CudaStream = None\n\ntry:\n import cupy\nexcept ImportError:\n cupy = None\n\ntry:\n from thinc.neural.optimizers import Optimizer\nexcept ImportError:\n from thinc.neural.optimizers import Adam as Optimizer\n\npickle = pickle\ncopy_reg = copy_reg\nCudaStream = CudaStream\ncupy = cupy\nfix_text = ftfy.fix_text\ncopy_array = copy_array\nizip = getattr(itertools, 'izip', zip)\n\nis_windows = sys.platform.startswith('win')\nis_linux = sys.platform.startswith('linux')\nis_osx = sys.platform == 'darwin'\n\nis_python2 = six.PY2\nis_python3 = six.PY3\nis_python_pre_3_5 = is_python2 or (is_python3 and sys.version_info[1]<5)\n\nif is_python2:\n bytes_ = str\n unicode_ = unicode # noqa: F821\n basestring_ = basestring # noqa: F821\n input_ = raw_input # noqa: F821\n json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False).decode('utf8')\n path2str = lambda path: str(path).decode('utf8')\n\nelif is_python3:\n bytes_ = bytes\n unicode_ = str\n basestring_ = str\n input_ = input\n json_dumps = lambda data: ujson.dumps(data, indent=2, escape_forward_slashes=False)\n path2str = lambda path: str(path)\n\n\ndef b_to_str(b_str):\n if is_python2:\n return b_str\n # important: if no encoding is set, string becomes \"b'...'\"\n return str(b_str, encoding='utf8')\n\n\ndef getattr_(obj, name, *default):\n if is_python3 and isinstance(name, bytes):\n name = name.decode('utf8')\n return getattr(obj, name, *default)\n\n\ndef symlink_to(orig, dest):\n if is_python2 and is_windows:\n import subprocess\n subprocess.call(['mklink', '/d', path2str(orig), path2str(dest)], shell=True)\n else:\n orig.symlink_to(dest)\n\n\ndef is_config(python2=None, python3=None, windows=None, linux=None, osx=None):\n return ((python2 is None or python2 == is_python2) and\n (python3 is None or python3 == is_python3) and\n (windows is None or windows == is_windows) and\n (linux is None or linux == is_linux) and\n (osx is None or osx == is_osx))\n\n\ndef normalize_string_keys(old):\n \"\"\"Given a dictionary, make sure keys are unicode strings, not bytes.\"\"\"\n new = {}\n for key, value in old.items():\n if isinstance(key, bytes_):\n new[key.decode('utf8')] = value\n else:\n new[key] = value\n return new\n\n\ndef import_file(name, loc):\n loc = str(loc)\n if is_python_pre_3_5:\n import imp\n return imp.load_source(name, loc)\n else:\n import importlib.util\n spec = importlib.util.spec_from_file_location(name, str(loc))\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n return module\n\n\ndef locale_escape(string, errors='replace'):\n '''\n Mangle non-supported characters, for savages with ascii terminals.\n '''\n encoding = locale.getpreferredencoding()\n string = string.encode(encoding, errors).decode('utf8')\n return string\n", "path": "spacy/compat.py"}]} | 2,773 | 370 |
gh_patches_debug_28175 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-449 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings
# Brief Description of Fix
Currently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation by the user, which behavior applies in each function.
In the interim, I propose to explicitly clarify this behavior in each function's docstring so the user has a clear idea regarding the function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:
"""
Clean column names.
Takes all column names, converts them to lowercase, then replaces all
spaces with underscores. <b>Does not mutate original dataframe.</b>
"""
Happy to add this line somewhere else in the docstring if inappropriate here.
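To make the two behaviors concrete, here is a minimal, self-contained sketch showing how a caller can tell which behavior a function follows. The `mutating_clean` and `copying_clean` helpers below are hypothetical stand-ins for the two patterns, not actual pyjanitor functions:

```python
import pandas as pd


def mutating_clean(df: pd.DataFrame) -> pd.DataFrame:
    # Modifies the caller's dataframe in place, then returns it.
    df.columns = [c.lower().replace(" ", "_") for c in df.columns]
    return df


def copying_clean(df: pd.DataFrame) -> pd.DataFrame:
    # Returns a cleaned copy; the caller's dataframe is left untouched.
    out = df.copy()
    out.columns = [c.lower().replace(" ", "_") for c in out.columns]
    return out


df = pd.DataFrame({"A Col": [1, 2]})
print(mutating_clean(df) is df)   # True  -> the original dataframe was mutated
print(copying_clean(df) is df)    # False -> a new dataframe was returned
```

An identity check such as `result is df` is enough to confirm which pattern a given function follows, which is exactly the ambiguity the proposed docstring note would resolve.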
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `janitor/chemistry.py`
Content:
```
1 """
2 Chemistry and cheminformatics-oriented data cleaning functions.
3 """
4
5 from typing import Union
6
7 import numpy as np
8 import pandas as pd
9 import pandas_flavor as pf
10
11 from .utils import deprecated_alias, import_message
12
13 try:
14 from rdkit import Chem, DataStructs
15 from rdkit.Chem.rdMolDescriptors import (
16 GetHashedMorganFingerprint,
17 GetMorganFingerprintAsBitVect,
18 CalcChi0n,
19 CalcChi0v,
20 CalcChi1n,
21 CalcChi1v,
22 CalcChi2n,
23 CalcChi2v,
24 CalcChi3n,
25 CalcChi3v,
26 CalcChi4n,
27 CalcChi4v,
28 CalcExactMolWt,
29 CalcFractionCSP3,
30 CalcHallKierAlpha,
31 CalcKappa1,
32 CalcKappa2,
33 CalcKappa3,
34 CalcLabuteASA,
35 CalcNumAliphaticCarbocycles,
36 CalcNumAliphaticHeterocycles,
37 CalcNumAliphaticRings,
38 CalcNumAmideBonds,
39 CalcNumAromaticCarbocycles,
40 CalcNumAromaticHeterocycles,
41 CalcNumAromaticRings,
42 CalcNumAtomStereoCenters,
43 CalcNumBridgeheadAtoms,
44 CalcNumHBA,
45 CalcNumHBD,
46 CalcNumHeteroatoms,
47 CalcNumHeterocycles,
48 CalcNumLipinskiHBA,
49 CalcNumLipinskiHBD,
50 CalcNumRings,
51 CalcNumSaturatedCarbocycles,
52 CalcNumSaturatedHeterocycles,
53 CalcNumSaturatedRings,
54 CalcNumSpiroAtoms,
55 CalcNumUnspecifiedAtomStereoCenters,
56 CalcTPSA,
57 GetMACCSKeysFingerprint,
58 )
59 except ImportError:
60 import_message("chemistry", "rdkit", "conda install -c rdkit rdkit")
61
62 try:
63 from tqdm import tqdm
64 from tqdm import tqdm_notebook as tqdmn
65 except ImportError:
66 import_message("chemistry", "tqdm", "conda install -c conda-forge tqdm")
67
68
69 @pf.register_dataframe_method
70 @deprecated_alias(smiles_col="smiles_column_name", mols_col="mols_column_name")
71 def smiles2mol(
72 df: pd.DataFrame,
73 smiles_column_name,
74 mols_column_name,
75 drop_nulls: bool = True,
76 progressbar: Union[None, str] = None,
77 ) -> pd.DataFrame:
78 """
79 Convert a column of SMILES strings into RDKit Mol objects.
80
81 Automatically drops invalid SMILES, as determined by RDKIT.
82
83 Method chaining usage:
84
85 .. code-block:: python
86
87 df = (
88 pd.DataFrame(...)
89 .smiles2mol(smiles_column_name='smiles', mols_column_name='mols')
90 )
91
92 A progressbar can be optionally used.
93
94 - Pass in "notebook" to show a tqdm notebook progressbar. (ipywidgets must
95 be enabled with your Jupyter installation.)
96 - Pass in "terminal" to show a tqdm progressbar. Better suited for use
97 with scripts.
98 - "none" is the default value - progress bar will be not be shown.
99
100 :param df: pandas DataFrame.
101 :param smiles_column_name: Name of column that holds the SMILES strings.
102 :param mols_column_name: Name to be given to the new mols column.
103 :param drop_nulls: Whether to drop rows whose mols failed to be
104 constructed.
105 :param progressbar: Whether to show a progressbar or not.
106 """
107 valid_progress = ["notebook", "terminal", None]
108 if progressbar not in valid_progress:
109 raise ValueError(f"progressbar kwarg must be one of {valid_progress}")
110
111 if progressbar is None:
112 df[mols_column_name] = df[smiles_column_name].apply(
113 lambda x: Chem.MolFromSmiles(x)
114 )
115 else:
116 if progressbar == "notebook":
117 tqdmn().pandas(desc="mols")
118 elif progressbar == "terminal":
119 tqdm.pandas(desc="mols")
120 df[mols_column_name] = df[smiles_column_name].progress_apply(
121 lambda x: Chem.MolFromSmiles(x)
122 )
123
124 if drop_nulls:
125 df.dropna(subset=[mols_column_name], inplace=True)
126 df.reset_index(inplace=True, drop=True)
127 return df
128
129
130 @pf.register_dataframe_method
131 @deprecated_alias(mols_col="mols_column_name")
132 def morgan_fingerprint(
133 df: pd.DataFrame,
134 mols_column_name,
135 radius: int = 3,
136 nbits: int = 2048,
137 kind: str = "counts",
138 ) -> pd.DataFrame:
139 """
140 Convert a column of RDKIT Mol objects into Morgan Fingerprints.
141
142 Returns a new dataframe without any of the original data. This is
143 intentional, as Morgan fingerprints are usually high-dimensional
144 features.
145
146 Method chaining usage:
147
148 .. code-block:: python
149
150 df = pd.DataFrame(...)
151 morgans = df.morgan_fingerprint(mols_column_name='mols', radius=3,
152 nbits=2048)
153
154 If you wish to join the Morgans back into the original dataframe, this
155 can be accomplished by doing a `join`, becuase the indices are
156 preserved:
157
158 .. code-block:: python
159
160 joined = df.join(morgans)
161
162 :param df: A pandas DataFrame.
163 :param mols_column_name: The name of the column that has the RDKIT
164 mol objects
165 :param radius: Radius of Morgan fingerprints. Defaults to 3.
166 :param nbits: The length of the fingerprints. Defaults to 2048.
167 :param kind: Whether to return counts or bits. Defaults to counts.
168 :returns: A pandas DataFrame
169 """
170 acceptable_kinds = ["counts", "bits"]
171 if kind not in acceptable_kinds:
172 raise ValueError(f"`kind` must be one of {acceptable_kinds}")
173
174 if kind == "bits":
175 fps = [
176 GetMorganFingerprintAsBitVect(m, radius, nbits)
177 for m in df[mols_column_name]
178 ]
179 elif kind == "counts":
180 fps = [
181 GetHashedMorganFingerprint(m, radius, nbits)
182 for m in df[mols_column_name]
183 ]
184
185 np_fps = []
186 for fp in fps:
187 arr = np.zeros((1,))
188 DataStructs.ConvertToNumpyArray(fp, arr)
189 np_fps.append(arr)
190 np_fps = np.vstack(np_fps)
191 fpdf = pd.DataFrame(np_fps)
192 fpdf.index = df.index
193 return fpdf
194
195
196 @pf.register_dataframe_method
197 @deprecated_alias(mols_col="mols_column_name")
198 def molecular_descriptors(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:
199 """
200 Convert a column of RDKIT mol objects into a Pandas DataFrame
201 of molecular descriptors.
202
203 Returns a new dataframe without any of the original data. This is
204 intentional to leave the user only with the data requested.
205
206 The molecular descriptors are from the rdkit.Chem.rdMolDescriptors:
207
208 Chi0n, Chi0v, Chi1n, Chi1v, Chi2n, Chi2v, Chi3n, Chi3v,
209 Chi4n, Chi4v, ExactMolWt, FractionCSP3, HallKierAlpha, Kappa1,
210 Kappa2, Kappa3, LabuteASA, NumAliphaticCarbocycles,
211 NumAliphaticHeterocycles, NumAliphaticRings, NumAmideBonds,
212 NumAromaticCarbocycles, NumAromaticHeterocycles, NumAromaticRings,
213 NumAtomStereoCenters, NumBridgeheadAtoms, NumHBA, NumHBD,
214 NumHeteroatoms, NumHeterocycles, NumLipinskiHBA, NumLipinskiHBD,
215 NumRings, NumSaturatedCarbocycles, NumSaturatedHeterocycles,
216 NumSaturatedRings, NumSpiroAtoms, NumUnspecifiedAtomStereoCenters,
217 TPSA.
218
219 Method chaining usage:
220
221 .. code-block:: python
222
223 df = pd.DataFrame(...)
224 mol_desc = df.molecular_descriptors(mols_column_name='mols')
225
226 If you wish to join the molecular descriptors back into the original
227 dataframe, this can be accomplished by doing a `join`,
228 because the indices are preserved:
229
230 .. code-block:: python
231
232 joined = df.join(mol_desc)
233
234 :param df: A pandas DataFrame.
235 :param mols_column_name: The name of the column that has the RDKIT mol
236 objects.
237 :returns: A pandas DataFrame
238 """
239 descriptors = [
240 CalcChi0n,
241 CalcChi0v,
242 CalcChi1n,
243 CalcChi1v,
244 CalcChi2n,
245 CalcChi2v,
246 CalcChi3n,
247 CalcChi3v,
248 CalcChi4n,
249 CalcChi4v,
250 CalcExactMolWt,
251 CalcFractionCSP3,
252 CalcHallKierAlpha,
253 CalcKappa1,
254 CalcKappa2,
255 CalcKappa3,
256 CalcLabuteASA,
257 CalcNumAliphaticCarbocycles,
258 CalcNumAliphaticHeterocycles,
259 CalcNumAliphaticRings,
260 CalcNumAmideBonds,
261 CalcNumAromaticCarbocycles,
262 CalcNumAromaticHeterocycles,
263 CalcNumAromaticRings,
264 CalcNumAtomStereoCenters,
265 CalcNumBridgeheadAtoms,
266 CalcNumHBA,
267 CalcNumHBD,
268 CalcNumHeteroatoms,
269 CalcNumHeterocycles,
270 CalcNumLipinskiHBA,
271 CalcNumLipinskiHBD,
272 CalcNumRings,
273 CalcNumSaturatedCarbocycles,
274 CalcNumSaturatedHeterocycles,
275 CalcNumSaturatedRings,
276 CalcNumSpiroAtoms,
277 CalcNumUnspecifiedAtomStereoCenters,
278 CalcTPSA,
279 ]
280 descriptors_mapping = {f.__name__.strip("Calc"): f for f in descriptors}
281
282 feats = dict()
283 for name, func in descriptors_mapping.items():
284 feats[name] = [func(m) for m in df[mols_column_name]]
285 return pd.DataFrame(feats)
286
287
288 @pf.register_dataframe_method
289 @deprecated_alias(mols_col="mols_column_name")
290 def maccs_keys_fingerprint(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:
291 """
292 Convert a column of RDKIT mol objects into MACCS Keys Fingeprints.
293
294 Returns a new dataframe without any of the original data.
295 This is intentional to leave the user with the data requested.
296
297 Method chaining usage:
298
299 .. code-block:: python
300
301 df = pd.DataFrame(...)
302 maccs = df.maccs_keys_fingerprint(mols_column_name='mols')
303
304 If you wish to join the molecular descriptors back into the
305 original dataframe, this can be accomplished by doing a `join`,
306 because the indices are preserved:
307
308 .. code-block:: python
309
310 joined = df.join(maccs_keys_fingerprint)
311
312
313 :param df: A pandas DataFrame.
314 :param mols_column_name: The name of the column that has the RDKIT mol
315 objects.
316 :returns: A pandas DataFrame
317 """
318
319 maccs = [GetMACCSKeysFingerprint(m) for m in df[mols_column_name]]
320
321 np_maccs = []
322
323 for macc in maccs:
324 arr = np.zeros((1,))
325 DataStructs.ConvertToNumpyArray(macc, arr)
326 np_maccs.append(arr)
327 np_maccs = np.vstack(np_maccs)
328 fmaccs = pd.DataFrame(np_maccs)
329 fmaccs.index = df.index
330 return fmaccs
331
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/janitor/chemistry.py b/janitor/chemistry.py
--- a/janitor/chemistry.py
+++ b/janitor/chemistry.py
@@ -80,6 +80,8 @@
Automatically drops invalid SMILES, as determined by RDKIT.
+ This method mutates the original DataFrame.
+
Method chaining usage:
.. code-block:: python
@@ -143,6 +145,8 @@
intentional, as Morgan fingerprints are usually high-dimensional
features.
+ This method does not mutate the original DataFrame.
+
Method chaining usage:
.. code-block:: python
@@ -203,6 +207,8 @@
Returns a new dataframe without any of the original data. This is
intentional to leave the user only with the data requested.
+ This method does not mutate the original DataFrame.
+
The molecular descriptors are from the rdkit.Chem.rdMolDescriptors:
Chi0n, Chi0v, Chi1n, Chi1v, Chi2n, Chi2v, Chi3n, Chi3v,
@@ -294,6 +300,8 @@
Returns a new dataframe without any of the original data.
This is intentional to leave the user with the data requested.
+ This method does not mutate the original DataFrame.
+
Method chaining usage:
.. code-block:: python
| {"golden_diff": "diff --git a/janitor/chemistry.py b/janitor/chemistry.py\n--- a/janitor/chemistry.py\n+++ b/janitor/chemistry.py\n@@ -80,6 +80,8 @@\n \n Automatically drops invalid SMILES, as determined by RDKIT.\n \n+ This method mutates the original DataFrame.\n+\n Method chaining usage:\n \n .. code-block:: python\n@@ -143,6 +145,8 @@\n intentional, as Morgan fingerprints are usually high-dimensional\n features.\n \n+ This method does not mutate the original DataFrame.\n+\n Method chaining usage:\n \n .. code-block:: python\n@@ -203,6 +207,8 @@\n Returns a new dataframe without any of the original data. This is\n intentional to leave the user only with the data requested.\n \n+ This method does not mutate the original DataFrame.\n+\n The molecular descriptors are from the rdkit.Chem.rdMolDescriptors:\n \n Chi0n, Chi0v, Chi1n, Chi1v, Chi2n, Chi2v, Chi3n, Chi3v,\n@@ -294,6 +300,8 @@\n Returns a new dataframe without any of the original data.\n This is intentional to leave the user with the data requested.\n \n+ This method does not mutate the original DataFrame.\n+\n Method chaining usage:\n \n .. code-block:: python\n", "issue": "[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings\n# Brief Description of Fix\r\nCurrently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation from the user, which behavior applies in each function. \r\n\r\nIn the interim, I propose to explicitly clarify this behavior in each function's docstring so the user has a clear idea regarding the function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:\r\n\r\n\"\"\"\r\nClean column names.\r\n Takes all column names, converts them to lowercase, then replaces all\r\n spaces with underscores. <b>Does not mutate original dataframe.</b>\r\n\"\"\"\r\n\r\nHappy to add this line somewhere else in the docstring if inappropriate here. 
\r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)\r\n- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)\r\n\n", "before_files": [{"content": "\"\"\"\nChemistry and cheminformatics-oriented data cleaning functions.\n\"\"\"\n\nfrom typing import Union\n\nimport numpy as np\nimport pandas as pd\nimport pandas_flavor as pf\n\nfrom .utils import deprecated_alias, import_message\n\ntry:\n from rdkit import Chem, DataStructs\n from rdkit.Chem.rdMolDescriptors import (\n GetHashedMorganFingerprint,\n GetMorganFingerprintAsBitVect,\n CalcChi0n,\n CalcChi0v,\n CalcChi1n,\n CalcChi1v,\n CalcChi2n,\n CalcChi2v,\n CalcChi3n,\n CalcChi3v,\n CalcChi4n,\n CalcChi4v,\n CalcExactMolWt,\n CalcFractionCSP3,\n CalcHallKierAlpha,\n CalcKappa1,\n CalcKappa2,\n CalcKappa3,\n CalcLabuteASA,\n CalcNumAliphaticCarbocycles,\n CalcNumAliphaticHeterocycles,\n CalcNumAliphaticRings,\n CalcNumAmideBonds,\n CalcNumAromaticCarbocycles,\n CalcNumAromaticHeterocycles,\n CalcNumAromaticRings,\n CalcNumAtomStereoCenters,\n CalcNumBridgeheadAtoms,\n CalcNumHBA,\n CalcNumHBD,\n CalcNumHeteroatoms,\n CalcNumHeterocycles,\n CalcNumLipinskiHBA,\n CalcNumLipinskiHBD,\n CalcNumRings,\n CalcNumSaturatedCarbocycles,\n CalcNumSaturatedHeterocycles,\n CalcNumSaturatedRings,\n CalcNumSpiroAtoms,\n CalcNumUnspecifiedAtomStereoCenters,\n CalcTPSA,\n GetMACCSKeysFingerprint,\n )\nexcept ImportError:\n import_message(\"chemistry\", \"rdkit\", \"conda install -c rdkit rdkit\")\n\ntry:\n from tqdm import tqdm\n from tqdm import tqdm_notebook as tqdmn\nexcept ImportError:\n import_message(\"chemistry\", \"tqdm\", \"conda install -c conda-forge tqdm\")\n\n\[email protected]_dataframe_method\n@deprecated_alias(smiles_col=\"smiles_column_name\", mols_col=\"mols_column_name\")\ndef smiles2mol(\n df: pd.DataFrame,\n smiles_column_name,\n mols_column_name,\n drop_nulls: bool = True,\n progressbar: Union[None, str] = None,\n) -> pd.DataFrame:\n \"\"\"\n Convert a column of SMILES strings into RDKit Mol objects.\n\n Automatically drops invalid SMILES, as determined by RDKIT.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = (\n pd.DataFrame(...)\n .smiles2mol(smiles_column_name='smiles', mols_column_name='mols')\n )\n\n A progressbar can be optionally used.\n\n - Pass in \"notebook\" to show a tqdm notebook progressbar. (ipywidgets must\n be enabled with your Jupyter installation.)\n - Pass in \"terminal\" to show a tqdm progressbar. 
Better suited for use\n with scripts.\n - \"none\" is the default value - progress bar will be not be shown.\n\n :param df: pandas DataFrame.\n :param smiles_column_name: Name of column that holds the SMILES strings.\n :param mols_column_name: Name to be given to the new mols column.\n :param drop_nulls: Whether to drop rows whose mols failed to be\n constructed.\n :param progressbar: Whether to show a progressbar or not.\n \"\"\"\n valid_progress = [\"notebook\", \"terminal\", None]\n if progressbar not in valid_progress:\n raise ValueError(f\"progressbar kwarg must be one of {valid_progress}\")\n\n if progressbar is None:\n df[mols_column_name] = df[smiles_column_name].apply(\n lambda x: Chem.MolFromSmiles(x)\n )\n else:\n if progressbar == \"notebook\":\n tqdmn().pandas(desc=\"mols\")\n elif progressbar == \"terminal\":\n tqdm.pandas(desc=\"mols\")\n df[mols_column_name] = df[smiles_column_name].progress_apply(\n lambda x: Chem.MolFromSmiles(x)\n )\n\n if drop_nulls:\n df.dropna(subset=[mols_column_name], inplace=True)\n df.reset_index(inplace=True, drop=True)\n return df\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef morgan_fingerprint(\n df: pd.DataFrame,\n mols_column_name,\n radius: int = 3,\n nbits: int = 2048,\n kind: str = \"counts\",\n) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT Mol objects into Morgan Fingerprints.\n\n Returns a new dataframe without any of the original data. This is\n intentional, as Morgan fingerprints are usually high-dimensional\n features.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n morgans = df.morgan_fingerprint(mols_column_name='mols', radius=3,\n nbits=2048)\n\n If you wish to join the Morgans back into the original dataframe, this\n can be accomplished by doing a `join`, becuase the indices are\n preserved:\n\n .. code-block:: python\n\n joined = df.join(morgans)\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT\n mol objects\n :param radius: Radius of Morgan fingerprints. Defaults to 3.\n :param nbits: The length of the fingerprints. Defaults to 2048.\n :param kind: Whether to return counts or bits. Defaults to counts.\n :returns: A pandas DataFrame\n \"\"\"\n acceptable_kinds = [\"counts\", \"bits\"]\n if kind not in acceptable_kinds:\n raise ValueError(f\"`kind` must be one of {acceptable_kinds}\")\n\n if kind == \"bits\":\n fps = [\n GetMorganFingerprintAsBitVect(m, radius, nbits)\n for m in df[mols_column_name]\n ]\n elif kind == \"counts\":\n fps = [\n GetHashedMorganFingerprint(m, radius, nbits)\n for m in df[mols_column_name]\n ]\n\n np_fps = []\n for fp in fps:\n arr = np.zeros((1,))\n DataStructs.ConvertToNumpyArray(fp, arr)\n np_fps.append(arr)\n np_fps = np.vstack(np_fps)\n fpdf = pd.DataFrame(np_fps)\n fpdf.index = df.index\n return fpdf\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef molecular_descriptors(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT mol objects into a Pandas DataFrame\n of molecular descriptors.\n\n Returns a new dataframe without any of the original data. 
This is\n intentional to leave the user only with the data requested.\n\n The molecular descriptors are from the rdkit.Chem.rdMolDescriptors:\n\n Chi0n, Chi0v, Chi1n, Chi1v, Chi2n, Chi2v, Chi3n, Chi3v,\n Chi4n, Chi4v, ExactMolWt, FractionCSP3, HallKierAlpha, Kappa1,\n Kappa2, Kappa3, LabuteASA, NumAliphaticCarbocycles,\n NumAliphaticHeterocycles, NumAliphaticRings, NumAmideBonds,\n NumAromaticCarbocycles, NumAromaticHeterocycles, NumAromaticRings,\n NumAtomStereoCenters, NumBridgeheadAtoms, NumHBA, NumHBD,\n NumHeteroatoms, NumHeterocycles, NumLipinskiHBA, NumLipinskiHBD,\n NumRings, NumSaturatedCarbocycles, NumSaturatedHeterocycles,\n NumSaturatedRings, NumSpiroAtoms, NumUnspecifiedAtomStereoCenters,\n TPSA.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n mol_desc = df.molecular_descriptors(mols_column_name='mols')\n\n If you wish to join the molecular descriptors back into the original\n dataframe, this can be accomplished by doing a `join`,\n because the indices are preserved:\n\n .. code-block:: python\n\n joined = df.join(mol_desc)\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT mol\n objects.\n :returns: A pandas DataFrame\n \"\"\"\n descriptors = [\n CalcChi0n,\n CalcChi0v,\n CalcChi1n,\n CalcChi1v,\n CalcChi2n,\n CalcChi2v,\n CalcChi3n,\n CalcChi3v,\n CalcChi4n,\n CalcChi4v,\n CalcExactMolWt,\n CalcFractionCSP3,\n CalcHallKierAlpha,\n CalcKappa1,\n CalcKappa2,\n CalcKappa3,\n CalcLabuteASA,\n CalcNumAliphaticCarbocycles,\n CalcNumAliphaticHeterocycles,\n CalcNumAliphaticRings,\n CalcNumAmideBonds,\n CalcNumAromaticCarbocycles,\n CalcNumAromaticHeterocycles,\n CalcNumAromaticRings,\n CalcNumAtomStereoCenters,\n CalcNumBridgeheadAtoms,\n CalcNumHBA,\n CalcNumHBD,\n CalcNumHeteroatoms,\n CalcNumHeterocycles,\n CalcNumLipinskiHBA,\n CalcNumLipinskiHBD,\n CalcNumRings,\n CalcNumSaturatedCarbocycles,\n CalcNumSaturatedHeterocycles,\n CalcNumSaturatedRings,\n CalcNumSpiroAtoms,\n CalcNumUnspecifiedAtomStereoCenters,\n CalcTPSA,\n ]\n descriptors_mapping = {f.__name__.strip(\"Calc\"): f for f in descriptors}\n\n feats = dict()\n for name, func in descriptors_mapping.items():\n feats[name] = [func(m) for m in df[mols_column_name]]\n return pd.DataFrame(feats)\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef maccs_keys_fingerprint(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT mol objects into MACCS Keys Fingeprints.\n\n Returns a new dataframe without any of the original data.\n This is intentional to leave the user with the data requested.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n maccs = df.maccs_keys_fingerprint(mols_column_name='mols')\n\n If you wish to join the molecular descriptors back into the\n original dataframe, this can be accomplished by doing a `join`,\n because the indices are preserved:\n\n .. 
code-block:: python\n\n joined = df.join(maccs_keys_fingerprint)\n\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT mol\n objects.\n :returns: A pandas DataFrame\n \"\"\"\n\n maccs = [GetMACCSKeysFingerprint(m) for m in df[mols_column_name]]\n\n np_maccs = []\n\n for macc in maccs:\n arr = np.zeros((1,))\n DataStructs.ConvertToNumpyArray(macc, arr)\n np_maccs.append(arr)\n np_maccs = np.vstack(np_maccs)\n fmaccs = pd.DataFrame(np_maccs)\n fmaccs.index = df.index\n return fmaccs\n", "path": "janitor/chemistry.py"}], "after_files": [{"content": "\"\"\"\nChemistry and cheminformatics-oriented data cleaning functions.\n\"\"\"\n\nfrom typing import Union\n\nimport numpy as np\nimport pandas as pd\nimport pandas_flavor as pf\n\nfrom .utils import deprecated_alias, import_message\n\ntry:\n from rdkit import Chem, DataStructs\n from rdkit.Chem.rdMolDescriptors import (\n GetHashedMorganFingerprint,\n GetMorganFingerprintAsBitVect,\n CalcChi0n,\n CalcChi0v,\n CalcChi1n,\n CalcChi1v,\n CalcChi2n,\n CalcChi2v,\n CalcChi3n,\n CalcChi3v,\n CalcChi4n,\n CalcChi4v,\n CalcExactMolWt,\n CalcFractionCSP3,\n CalcHallKierAlpha,\n CalcKappa1,\n CalcKappa2,\n CalcKappa3,\n CalcLabuteASA,\n CalcNumAliphaticCarbocycles,\n CalcNumAliphaticHeterocycles,\n CalcNumAliphaticRings,\n CalcNumAmideBonds,\n CalcNumAromaticCarbocycles,\n CalcNumAromaticHeterocycles,\n CalcNumAromaticRings,\n CalcNumAtomStereoCenters,\n CalcNumBridgeheadAtoms,\n CalcNumHBA,\n CalcNumHBD,\n CalcNumHeteroatoms,\n CalcNumHeterocycles,\n CalcNumLipinskiHBA,\n CalcNumLipinskiHBD,\n CalcNumRings,\n CalcNumSaturatedCarbocycles,\n CalcNumSaturatedHeterocycles,\n CalcNumSaturatedRings,\n CalcNumSpiroAtoms,\n CalcNumUnspecifiedAtomStereoCenters,\n CalcTPSA,\n GetMACCSKeysFingerprint,\n )\nexcept ImportError:\n import_message(\"chemistry\", \"rdkit\", \"conda install -c rdkit rdkit\")\n\ntry:\n from tqdm import tqdm\n from tqdm import tqdm_notebook as tqdmn\nexcept ImportError:\n import_message(\"chemistry\", \"tqdm\", \"conda install -c conda-forge tqdm\")\n\n\[email protected]_dataframe_method\n@deprecated_alias(smiles_col=\"smiles_column_name\", mols_col=\"mols_column_name\")\ndef smiles2mol(\n df: pd.DataFrame,\n smiles_column_name,\n mols_column_name,\n drop_nulls: bool = True,\n progressbar: Union[None, str] = None,\n) -> pd.DataFrame:\n \"\"\"\n Convert a column of SMILES strings into RDKit Mol objects.\n\n Automatically drops invalid SMILES, as determined by RDKIT.\n\n This method mutates the original DataFrame.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = (\n pd.DataFrame(...)\n .smiles2mol(smiles_column_name='smiles', mols_column_name='mols')\n )\n\n A progressbar can be optionally used.\n\n - Pass in \"notebook\" to show a tqdm notebook progressbar. (ipywidgets must\n be enabled with your Jupyter installation.)\n - Pass in \"terminal\" to show a tqdm progressbar. 
Better suited for use\n with scripts.\n - \"none\" is the default value - progress bar will be not be shown.\n\n :param df: pandas DataFrame.\n :param smiles_column_name: Name of column that holds the SMILES strings.\n :param mols_column_name: Name to be given to the new mols column.\n :param drop_nulls: Whether to drop rows whose mols failed to be\n constructed.\n :param progressbar: Whether to show a progressbar or not.\n \"\"\"\n valid_progress = [\"notebook\", \"terminal\", None]\n if progressbar not in valid_progress:\n raise ValueError(f\"progressbar kwarg must be one of {valid_progress}\")\n\n if progressbar is None:\n df[mols_column_name] = df[smiles_column_name].apply(\n lambda x: Chem.MolFromSmiles(x)\n )\n else:\n if progressbar == \"notebook\":\n tqdmn().pandas(desc=\"mols\")\n elif progressbar == \"terminal\":\n tqdm.pandas(desc=\"mols\")\n df[mols_column_name] = df[smiles_column_name].progress_apply(\n lambda x: Chem.MolFromSmiles(x)\n )\n\n if drop_nulls:\n df.dropna(subset=[mols_column_name], inplace=True)\n df.reset_index(inplace=True, drop=True)\n return df\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef morgan_fingerprint(\n df: pd.DataFrame,\n mols_column_name,\n radius: int = 3,\n nbits: int = 2048,\n kind: str = \"counts\",\n) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT Mol objects into Morgan Fingerprints.\n\n Returns a new dataframe without any of the original data. This is\n intentional, as Morgan fingerprints are usually high-dimensional\n features.\n\n This method does not mutate the original DataFrame.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n morgans = df.morgan_fingerprint(mols_column_name='mols', radius=3,\n nbits=2048)\n\n If you wish to join the Morgans back into the original dataframe, this\n can be accomplished by doing a `join`, becuase the indices are\n preserved:\n\n .. code-block:: python\n\n joined = df.join(morgans)\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT\n mol objects\n :param radius: Radius of Morgan fingerprints. Defaults to 3.\n :param nbits: The length of the fingerprints. Defaults to 2048.\n :param kind: Whether to return counts or bits. Defaults to counts.\n :returns: A pandas DataFrame\n \"\"\"\n acceptable_kinds = [\"counts\", \"bits\"]\n if kind not in acceptable_kinds:\n raise ValueError(f\"`kind` must be one of {acceptable_kinds}\")\n\n if kind == \"bits\":\n fps = [\n GetMorganFingerprintAsBitVect(m, radius, nbits)\n for m in df[mols_column_name]\n ]\n elif kind == \"counts\":\n fps = [\n GetHashedMorganFingerprint(m, radius, nbits)\n for m in df[mols_column_name]\n ]\n\n np_fps = []\n for fp in fps:\n arr = np.zeros((1,))\n DataStructs.ConvertToNumpyArray(fp, arr)\n np_fps.append(arr)\n np_fps = np.vstack(np_fps)\n fpdf = pd.DataFrame(np_fps)\n fpdf.index = df.index\n return fpdf\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef molecular_descriptors(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT mol objects into a Pandas DataFrame\n of molecular descriptors.\n\n Returns a new dataframe without any of the original data. 
This is\n intentional to leave the user only with the data requested.\n\n This method does not mutate the original DataFrame.\n\n The molecular descriptors are from the rdkit.Chem.rdMolDescriptors:\n\n Chi0n, Chi0v, Chi1n, Chi1v, Chi2n, Chi2v, Chi3n, Chi3v,\n Chi4n, Chi4v, ExactMolWt, FractionCSP3, HallKierAlpha, Kappa1,\n Kappa2, Kappa3, LabuteASA, NumAliphaticCarbocycles,\n NumAliphaticHeterocycles, NumAliphaticRings, NumAmideBonds,\n NumAromaticCarbocycles, NumAromaticHeterocycles, NumAromaticRings,\n NumAtomStereoCenters, NumBridgeheadAtoms, NumHBA, NumHBD,\n NumHeteroatoms, NumHeterocycles, NumLipinskiHBA, NumLipinskiHBD,\n NumRings, NumSaturatedCarbocycles, NumSaturatedHeterocycles,\n NumSaturatedRings, NumSpiroAtoms, NumUnspecifiedAtomStereoCenters,\n TPSA.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n mol_desc = df.molecular_descriptors(mols_column_name='mols')\n\n If you wish to join the molecular descriptors back into the original\n dataframe, this can be accomplished by doing a `join`,\n because the indices are preserved:\n\n .. code-block:: python\n\n joined = df.join(mol_desc)\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT mol\n objects.\n :returns: A pandas DataFrame\n \"\"\"\n descriptors = [\n CalcChi0n,\n CalcChi0v,\n CalcChi1n,\n CalcChi1v,\n CalcChi2n,\n CalcChi2v,\n CalcChi3n,\n CalcChi3v,\n CalcChi4n,\n CalcChi4v,\n CalcExactMolWt,\n CalcFractionCSP3,\n CalcHallKierAlpha,\n CalcKappa1,\n CalcKappa2,\n CalcKappa3,\n CalcLabuteASA,\n CalcNumAliphaticCarbocycles,\n CalcNumAliphaticHeterocycles,\n CalcNumAliphaticRings,\n CalcNumAmideBonds,\n CalcNumAromaticCarbocycles,\n CalcNumAromaticHeterocycles,\n CalcNumAromaticRings,\n CalcNumAtomStereoCenters,\n CalcNumBridgeheadAtoms,\n CalcNumHBA,\n CalcNumHBD,\n CalcNumHeteroatoms,\n CalcNumHeterocycles,\n CalcNumLipinskiHBA,\n CalcNumLipinskiHBD,\n CalcNumRings,\n CalcNumSaturatedCarbocycles,\n CalcNumSaturatedHeterocycles,\n CalcNumSaturatedRings,\n CalcNumSpiroAtoms,\n CalcNumUnspecifiedAtomStereoCenters,\n CalcTPSA,\n ]\n descriptors_mapping = {f.__name__.strip(\"Calc\"): f for f in descriptors}\n\n feats = dict()\n for name, func in descriptors_mapping.items():\n feats[name] = [func(m) for m in df[mols_column_name]]\n return pd.DataFrame(feats)\n\n\[email protected]_dataframe_method\n@deprecated_alias(mols_col=\"mols_column_name\")\ndef maccs_keys_fingerprint(df: pd.DataFrame, mols_column_name) -> pd.DataFrame:\n \"\"\"\n Convert a column of RDKIT mol objects into MACCS Keys Fingeprints.\n\n Returns a new dataframe without any of the original data.\n This is intentional to leave the user with the data requested.\n\n This method does not mutate the original DataFrame.\n\n Method chaining usage:\n\n .. code-block:: python\n\n df = pd.DataFrame(...)\n maccs = df.maccs_keys_fingerprint(mols_column_name='mols')\n\n If you wish to join the molecular descriptors back into the\n original dataframe, this can be accomplished by doing a `join`,\n because the indices are preserved:\n\n .. 
code-block:: python\n\n joined = df.join(maccs_keys_fingerprint)\n\n\n :param df: A pandas DataFrame.\n :param mols_column_name: The name of the column that has the RDKIT mol\n objects.\n :returns: A pandas DataFrame\n \"\"\"\n\n maccs = [GetMACCSKeysFingerprint(m) for m in df[mols_column_name]]\n\n np_maccs = []\n\n for macc in maccs:\n arr = np.zeros((1,))\n DataStructs.ConvertToNumpyArray(macc, arr)\n np_maccs.append(arr)\n np_maccs = np.vstack(np_maccs)\n fmaccs = pd.DataFrame(np_maccs)\n fmaccs.index = df.index\n return fmaccs\n", "path": "janitor/chemistry.py"}]} | 4,016 | 312 |