problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
---|---|---|---|---|---|---|---|---|
stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-25.4k | stringlengths 145-5.13k | stringlengths 582-39.1k | int64 271-4.1k | int64 47-1.02k |
gh_patches_debug_67407 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-1711 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pathoc does not accept `:pa,f` to pause forever at end of message
##### Steps to reproduce the problem:
`pathoc www.example.com 'get:/:pa,f'`
##### What is the expected behavior?
Send request, but pause forever after sending.
##### What went wrong?
I get a stack trace with "a float is required".
```
$ pathoc www.example.com 'get:/:pa,f'
08-09-16 16:59:41: >> 'GET':/:pa,f
Traceback (most recent call last):
File "/usr/local/bin/pathoc", line 11, in <module>
sys.exit(go_pathoc())
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc_cmdline.py", line 226, in go_pathoc
pathoc.main(args)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 522, in main
ret = p.request(spec)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 452, in request
return self.http(r)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 432, in http
return resp
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 411, in http
req = language.serve(r, self.wfile, self.settings)
File "/usr/local/lib/python2.7/dist-packages/pathod/language/__init__.py", line 105, in serve
disconnect = writer.write_values(fp, vals, actions[:])
File "/usr/local/lib/python2.7/dist-packages/pathod/language/writer.py", line 61, in write_values
time.sleep(a[2])
TypeError: a float is required
```
##### Any other comments? What have you tried so far?
All other combinations of pause flags work as expected:
```
$ pathoc www.example.com 'get:/:p2,5'
08-09-16 17:05:07: >> 'GET':/:p2,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:pr,5'
08-09-16 17:05:21: >> 'GET':/:pr,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:pa,5'
08-09-16 17:05:41: >> 'GET':/:pa,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:p2,f'
^C08-09-16 17:04:46: >> 'GET':/:p2,f
$ pathoc www.example.com 'get:/:pr,f'
^C08-09-16 17:04:55: >> 'GET':/:pr,f
```
---
pathoc version: 0.17
Operating System: Debian Linux 8.5 "Jessie" x64
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pathod/language/writer.py`
Content:
```
1 import time
2 from mitmproxy import exceptions
3
4 BLOCKSIZE = 1024
5 # It's not clear what the upper limit for time.sleep is. It's lower than the
6 # maximum int or float. 1 year should do.
7 FOREVER = 60 * 60 * 24 * 365
8
9
10 def send_chunk(fp, val, blocksize, start, end):
11 """
12 (start, end): Inclusive lower bound, exclusive upper bound.
13 """
14 for i in range(start, end, blocksize):
15 fp.write(
16 val[i:min(i + blocksize, end)]
17 )
18 return end - start
19
20
21 def write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):
22 """
23 vals: A list of values, which may be strings or Value objects.
24
25 actions: A list of (offset, action, arg) tuples. Action may be "inject",
26 "pause" or "disconnect".
27
28 Both vals and actions are in reverse order, with the first items last.
29
30 Return True if connection should disconnect.
31 """
32 sofar = 0
33 try:
34 while vals:
35 v = vals.pop()
36 offset = 0
37 while actions and actions[-1][0] < (sofar + len(v)):
38 a = actions.pop()
39 offset += send_chunk(
40 fp,
41 v,
42 blocksize,
43 offset,
44 a[0] - sofar - offset
45 )
46 if a[1] == "pause":
47 time.sleep(
48 FOREVER if a[2] == "f" else a[2]
49 )
50 elif a[1] == "disconnect":
51 return True
52 elif a[1] == "inject":
53 send_chunk(fp, a[2], blocksize, 0, len(a[2]))
54 send_chunk(fp, v, blocksize, offset, len(v))
55 sofar += len(v)
56 # Remainders
57 while actions:
58 a = actions.pop()
59 if a[1] == "pause":
60 time.sleep(a[2])
61 elif a[1] == "disconnect":
62 return True
63 elif a[1] == "inject":
64 send_chunk(fp, a[2], blocksize, 0, len(a[2]))
65 except exceptions.TcpDisconnect: # pragma: no cover
66 return True
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pathod/language/writer.py b/pathod/language/writer.py
--- a/pathod/language/writer.py
+++ b/pathod/language/writer.py
@@ -57,7 +57,9 @@
while actions:
a = actions.pop()
if a[1] == "pause":
- time.sleep(a[2])
+ time.sleep(
+ FOREVER if a[2] == "f" else a[2]
+ )
elif a[1] == "disconnect":
return True
elif a[1] == "inject":
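The fix above applies to the remainder loop the same `FOREVER if a[2] == "f" else a[2]` handling that the main write loop already had. A minimal standalone sketch of the failure and of the patched behaviour is below; `pause` is a hypothetical helper used only for illustration, and only the `FOREVER` cap is taken from `writer.py` (the exact TypeError message differs between Python 2 and 3):

```python
import time

# Same cap writer.py uses to approximate "pause forever".
FOREVER = 60 * 60 * 24 * 365


def pause(arg):
    """Sleep for `arg` seconds, where the literal string "f" means forever."""
    time.sleep(FOREVER if arg == "f" else arg)


try:
    # What the unpatched remainder loop effectively did for 'pa,f':
    time.sleep("f")
except TypeError as exc:
    print(f"unpatched behaviour: {exc}")

# Patched behaviour: numeric pauses still work; pause("f") would sleep ~1 year.
pause(0)
```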
| {"golden_diff": "diff --git a/pathod/language/writer.py b/pathod/language/writer.py\n--- a/pathod/language/writer.py\n+++ b/pathod/language/writer.py\n@@ -57,7 +57,9 @@\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n- time.sleep(a[2])\n+ time.sleep(\n+ FOREVER if a[2] == \"f\" else a[2]\n+ )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n", "issue": "pathoc does not accept `:pa,f` to pause forever at end of message\n##### Steps to reproduce the problem:\n\n`pathoc www.example.com 'get:/:pa,f'`\n##### What is the expected behavior?\n\nSend request, but pause forever after sending.\n##### What went wrong?\n\nI get a stack trace with \"a float is required\".\n\n```\n$ pathoc www.example.com 'get:/:pa,f'\n08-09-16 16:59:41: >> 'GET':/:pa,f\nTraceback (most recent call last):\n File \"/usr/local/bin/pathoc\", line 11, in <module>\n sys.exit(go_pathoc())\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc_cmdline.py\", line 226, in go_pathoc\n pathoc.main(args)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 522, in main\n ret = p.request(spec)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 452, in request\n return self.http(r)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 432, in http\n return resp\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 411, in http\n req = language.serve(r, self.wfile, self.settings)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/language/__init__.py\", line 105, in serve\n disconnect = writer.write_values(fp, vals, actions[:])\n File \"/usr/local/lib/python2.7/dist-packages/pathod/language/writer.py\", line 61, in write_values\n time.sleep(a[2])\nTypeError: a float is required\n```\n##### Any other comments? What have you tried so far?\n\nAll other combinations of pause flags work as expected:\n\n```\n$ pathoc www.example.com 'get:/:p2,5'\n08-09-16 17:05:07: >> 'GET':/:p2,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:pr,5'\n08-09-16 17:05:21: >> 'GET':/:pr,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:pa,5'\n08-09-16 17:05:41: >> 'GET':/:pa,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:p2,f'\n^C08-09-16 17:04:46: >> 'GET':/:p2,f\n$ pathoc www.example.com 'get:/:pr,f'\n^C08-09-16 17:04:55: >> 'GET':/:pr,f\n```\n\n---\n\npathoc version: 0.17\nOperating System: Debian Linux 8.5 \"Jessie\" x64\n\n", "before_files": [{"content": "import time\nfrom mitmproxy import exceptions\n\nBLOCKSIZE = 1024\n# It's not clear what the upper limit for time.sleep is. It's lower than the\n# maximum int or float. 1 year should do.\nFOREVER = 60 * 60 * 24 * 365\n\n\ndef send_chunk(fp, val, blocksize, start, end):\n \"\"\"\n (start, end): Inclusive lower bound, exclusive upper bound.\n \"\"\"\n for i in range(start, end, blocksize):\n fp.write(\n val[i:min(i + blocksize, end)]\n )\n return end - start\n\n\ndef write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):\n \"\"\"\n vals: A list of values, which may be strings or Value objects.\n\n actions: A list of (offset, action, arg) tuples. 
Action may be \"inject\",\n \"pause\" or \"disconnect\".\n\n Both vals and actions are in reverse order, with the first items last.\n\n Return True if connection should disconnect.\n \"\"\"\n sofar = 0\n try:\n while vals:\n v = vals.pop()\n offset = 0\n while actions and actions[-1][0] < (sofar + len(v)):\n a = actions.pop()\n offset += send_chunk(\n fp,\n v,\n blocksize,\n offset,\n a[0] - sofar - offset\n )\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n send_chunk(fp, v, blocksize, offset, len(v))\n sofar += len(v)\n # Remainders\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n time.sleep(a[2])\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n except exceptions.TcpDisconnect: # pragma: no cover\n return True\n", "path": "pathod/language/writer.py"}], "after_files": [{"content": "import time\nfrom mitmproxy import exceptions\n\nBLOCKSIZE = 1024\n# It's not clear what the upper limit for time.sleep is. It's lower than the\n# maximum int or float. 1 year should do.\nFOREVER = 60 * 60 * 24 * 365\n\n\ndef send_chunk(fp, val, blocksize, start, end):\n \"\"\"\n (start, end): Inclusive lower bound, exclusive upper bound.\n \"\"\"\n for i in range(start, end, blocksize):\n fp.write(\n val[i:min(i + blocksize, end)]\n )\n return end - start\n\n\ndef write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):\n \"\"\"\n vals: A list of values, which may be strings or Value objects.\n\n actions: A list of (offset, action, arg) tuples. Action may be \"inject\",\n \"pause\" or \"disconnect\".\n\n Both vals and actions are in reverse order, with the first items last.\n\n Return True if connection should disconnect.\n \"\"\"\n sofar = 0\n try:\n while vals:\n v = vals.pop()\n offset = 0\n while actions and actions[-1][0] < (sofar + len(v)):\n a = actions.pop()\n offset += send_chunk(\n fp,\n v,\n blocksize,\n offset,\n a[0] - sofar - offset\n )\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n send_chunk(fp, v, blocksize, offset, len(v))\n sofar += len(v)\n # Remainders\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n except exceptions.TcpDisconnect: # pragma: no cover\n return True\n", "path": "pathod/language/writer.py"}]} | 1,613 | 126 |
gh_patches_debug_10220 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-391 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing ABC for Metric
[Missing ABC](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/metrics/__init__.py#L90) for `Metric` seems strange to me.
I guess in the PR #311 and Issue #66 settle to add ABC for all interfaces.
Maybe it should look like:
```python
class Metric(abc.ABC):
"""...
"""
@abc.abstractmethod
def get_handle(self, label_set: LabelSet) -> "object":
"""...
"""
```
Missing ABC for Metric
[Missing ABC](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/metrics/__init__.py#L90) for `Metric` seems strange to me.
I guess in the PR #311 and Issue #66 settle to add ABC for all interfaces.
Maybe it should look like:
```python
class Metric(abc.ABC):
"""...
"""
@abc.abstractmethod
def get_handle(self, label_set: LabelSet) -> "object":
"""...
"""
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/metrics/__init__.py`
Content:
```
1 # Copyright 2019, OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 The OpenTelemetry metrics API describes the classes used to report raw
17 measurements, as well as metrics with known aggregation and labels.
18
19 The `Meter` class is used to construct `Metric` s to record raw statistics
20 as well as metrics with predefined aggregation.
21
22 See the `metrics api`_ spec for terminology and context clarification.
23
24 .. _metrics api:
25 https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-metrics.md
26
27
28 """
29 import abc
30 from typing import Callable, Dict, Optional, Sequence, Tuple, Type, TypeVar
31
32 from opentelemetry.util import loader
33
34 ValueT = TypeVar("ValueT", int, float)
35
36
37 class DefaultMetricHandle:
38 """The default MetricHandle.
39
40 Used when no MetricHandle implementation is available.
41 """
42
43
44 class CounterHandle:
45 def add(self, value: ValueT) -> None:
46 """Increases the value of the handle by ``value``.
47
48 Args:
49 value: The value to record to the handle.
50 """
51
52
53 class GaugeHandle:
54 def set(self, value: ValueT) -> None:
55 """Sets the current value of the handle to ``value``.
56
57 Args:
58 value: The value to record to the handle.
59 """
60
61
62 class MeasureHandle:
63 def record(self, value: ValueT) -> None:
64 """Records the given ``value`` to this handle.
65
66 Args:
67 value: The value to record to the handle.
68 """
69
70
71 class LabelSet(abc.ABC):
72 """A canonicalized set of labels useful for preaggregation
73
74 Re-usable LabelSet objects provide a potential optimization for scenarios
75 where handles might not be effective. For example, if the LabelSet will be
76 re-used but only used once per metrics, handles do not offer any
77 optimization. It may best to pre-compute a canonicalized LabelSet once and
78 re-use it with the direct calling convention. LabelSets are immutable and
79 should be opaque in implementation.
80 """
81
82
83 class DefaultLabelSet(LabelSet):
84 """The default LabelSet.
85
86 Used when no LabelSet implementation is available.
87 """
88
89
90 class Metric:
91 """Base class for various types of metrics.
92
93 Metric class that inherit from this class are specialized with the type of
94 handle that the metric holds.
95 """
96
97 def get_handle(self, label_set: LabelSet) -> "object":
98 """Gets a handle, used for repeated-use of metrics instruments.
99
100 Handles are useful to reduce the cost of repeatedly recording a metric
101 with a pre-defined set of label values. All metric kinds (counter,
102 gauge, measure) support declaring a set of required label keys. The
103 values corresponding to these keys should be specified in every handle.
104 "Unspecified" label values, in cases where a handle is requested but
105 a value was not provided are permitted.
106
107 Args:
108 label_set: `LabelSet` to associate with the returned handle.
109 """
110
111
112 class DefaultMetric(Metric):
113 """The default Metric used when no Metric implementation is available."""
114
115 def get_handle(self, label_set: LabelSet) -> "DefaultMetricHandle":
116 """Gets a `DefaultMetricHandle`.
117
118 Args:
119 label_set: `LabelSet` to associate with the returned handle.
120 """
121 return DefaultMetricHandle()
122
123
124 class Counter(Metric):
125 """A counter type metric that expresses the computation of a sum."""
126
127 def get_handle(self, label_set: LabelSet) -> "CounterHandle":
128 """Gets a `CounterHandle`."""
129 return CounterHandle()
130
131 def add(self, label_set: LabelSet, value: ValueT) -> None:
132 """Increases the value of the counter by ``value``.
133
134 Args:
135 label_set: `LabelSet` to associate with the returned handle.
136 value: The value to add to the counter metric.
137 """
138
139
140 class Gauge(Metric):
141 """A gauge type metric that expresses a pre-calculated value.
142
143 Gauge metrics have a value that is either ``Set`` by explicit
144 instrumentation or observed through a callback. This kind of metric
145 should be used when the metric cannot be expressed as a sum or because
146 the measurement interval is arbitrary.
147 """
148
149 def get_handle(self, label_set: LabelSet) -> "GaugeHandle":
150 """Gets a `GaugeHandle`."""
151 return GaugeHandle()
152
153 def set(self, label_set: LabelSet, value: ValueT) -> None:
154 """Sets the value of the gauge to ``value``.
155
156 Args:
157 label_set: `LabelSet` to associate with the returned handle.
158 value: The value to set the gauge metric to.
159 """
160
161
162 class Measure(Metric):
163 """A measure type metric that represent raw stats that are recorded.
164
165 Measure metrics represent raw statistics that are recorded. By
166 default, measure metrics can accept both positive and negatives.
167 Negative inputs will be discarded when monotonic is True.
168 """
169
170 def get_handle(self, label_set: LabelSet) -> "MeasureHandle":
171 """Gets a `MeasureHandle` with a float value."""
172 return MeasureHandle()
173
174 def record(self, label_set: LabelSet, value: ValueT) -> None:
175 """Records the ``value`` to the measure.
176
177 Args:
178 label_set: `LabelSet` to associate with the returned handle.
179 value: The value to record to this measure metric.
180 """
181
182
183 MetricT = TypeVar("MetricT", Counter, Gauge, Measure)
184
185
186 # pylint: disable=unused-argument
187 class Meter(abc.ABC):
188 """An interface to allow the recording of metrics.
189
190 `Metric` s are used for recording pre-defined aggregation (gauge and
191 counter), or raw values (measure) in which the aggregation and labels
192 for the exported metric are deferred.
193 """
194
195 @abc.abstractmethod
196 def record_batch(
197 self,
198 label_set: LabelSet,
199 record_tuples: Sequence[Tuple["Metric", ValueT]],
200 ) -> None:
201 """Atomically records a batch of `Metric` and value pairs.
202
203 Allows the functionality of acting upon multiple metrics with
204 a single API call. Implementations should find metric and handles that
205 match the key-value pairs in the label tuples.
206
207 Args:
208 label_set: The `LabelSet` associated with all measurements in
209 the batch. A measurement is a tuple, representing the `Metric`
210 being recorded and the corresponding value to record.
211 record_tuples: A sequence of pairs of `Metric` s and the
212 corresponding value to record for that metric.
213 """
214
215 @abc.abstractmethod
216 def create_metric(
217 self,
218 name: str,
219 description: str,
220 unit: str,
221 value_type: Type[ValueT],
222 metric_type: Type[MetricT],
223 label_keys: Sequence[str] = (),
224 enabled: bool = True,
225 monotonic: bool = False,
226 ) -> "Metric":
227 """Creates a ``metric_kind`` metric with type ``value_type``.
228
229 Args:
230 name: The name of the metric.
231 description: Human-readable description of the metric.
232 unit: Unit of the metric values.
233 value_type: The type of values being recorded by the metric.
234 metric_type: The type of metric being created.
235 label_keys: The keys for the labels with dynamic values.
236 enabled: Whether to report the metric by default.
237 monotonic: Whether to only allow non-negative values.
238
239 Returns: A new ``metric_type`` metric with values of ``value_type``.
240 """
241
242 @abc.abstractmethod
243 def get_label_set(self, labels: Dict[str, str]) -> "LabelSet":
244 """Gets a `LabelSet` with the given labels.
245
246 Args:
247 labels: A dictionary representing label key to label value pairs.
248
249 Returns: A `LabelSet` object canonicalized using the given input.
250 """
251
252
253 class DefaultMeter(Meter):
254 """The default Meter used when no Meter implementation is available."""
255
256 def record_batch(
257 self,
258 label_set: LabelSet,
259 record_tuples: Sequence[Tuple["Metric", ValueT]],
260 ) -> None:
261 pass
262
263 def create_metric(
264 self,
265 name: str,
266 description: str,
267 unit: str,
268 value_type: Type[ValueT],
269 metric_type: Type[MetricT],
270 label_keys: Sequence[str] = (),
271 enabled: bool = True,
272 monotonic: bool = False,
273 ) -> "Metric":
274 # pylint: disable=no-self-use
275 return DefaultMetric()
276
277 def get_label_set(self, labels: Dict[str, str]) -> "LabelSet":
278 # pylint: disable=no-self-use
279 return DefaultLabelSet()
280
281
282 # Once https://github.com/python/mypy/issues/7092 is resolved,
283 # the following type definition should be replaced with
284 # from opentelemetry.util.loader import ImplementationFactory
285 ImplementationFactory = Callable[[Type[Meter]], Optional[Meter]]
286
287 _METER = None
288 _METER_FACTORY = None
289
290
291 def meter() -> Meter:
292 """Gets the current global :class:`~.Meter` object.
293
294 If there isn't one set yet, a default will be loaded.
295 """
296 global _METER, _METER_FACTORY # pylint:disable=global-statement
297
298 if _METER is None:
299 # pylint:disable=protected-access
300 _METER = loader._load_impl(DefaultMeter, _METER_FACTORY)
301 del _METER_FACTORY
302
303 return _METER
304
305
306 def set_preferred_meter_implementation(factory: ImplementationFactory) -> None:
307 """Set the factory to be used to create the meter.
308
309 See :mod:`opentelemetry.util.loader` for details.
310
311 This function may not be called after a meter is already loaded.
312
313 Args:
314 factory: Callback that should create a new :class:`Meter` instance.
315 """
316 global _METER, _METER_FACTORY # pylint:disable=global-statement
317
318 if _METER:
319 raise RuntimeError("Meter already loaded.")
320
321 _METER_FACTORY = factory
322
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-api/src/opentelemetry/metrics/__init__.py b/opentelemetry-api/src/opentelemetry/metrics/__init__.py
--- a/opentelemetry-api/src/opentelemetry/metrics/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/metrics/__init__.py
@@ -87,13 +87,14 @@
"""
-class Metric:
+class Metric(abc.ABC):
"""Base class for various types of metrics.
Metric class that inherit from this class are specialized with the type of
handle that the metric holds.
"""
+ @abc.abstractmethod
def get_handle(self, label_set: LabelSet) -> "object":
"""Gets a handle, used for repeated-use of metrics instruments.
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/metrics/__init__.py b/opentelemetry-api/src/opentelemetry/metrics/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/metrics/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/metrics/__init__.py\n@@ -87,13 +87,14 @@\n \"\"\"\n \n \n-class Metric:\n+class Metric(abc.ABC):\n \"\"\"Base class for various types of metrics.\n \n Metric class that inherit from this class are specialized with the type of\n handle that the metric holds.\n \"\"\"\n \n+ @abc.abstractmethod\n def get_handle(self, label_set: LabelSet) -> \"object\":\n \"\"\"Gets a handle, used for repeated-use of metrics instruments.\n", "issue": "Missing ABC for Metric\n[Missing ABC](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/metrics/__init__.py#L90) for `Metric` seems strange to me.\r\n I guess in the PR #311 and Issue #66 settle to add ABC for all interfaces.\r\n\r\nMaybe it should look like:\r\n```python\r\n\r\nclass Metric(abc.ABC):\r\n \"\"\"...\r\n \"\"\"\r\n\r\n @abc.abstractmethod\r\n def get_handle(self, label_set: LabelSet) -> \"object\":\r\n \"\"\"...\r\n \"\"\"\r\n\r\n\r\n```\nMissing ABC for Metric\n[Missing ABC](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/metrics/__init__.py#L90) for `Metric` seems strange to me.\r\n I guess in the PR #311 and Issue #66 settle to add ABC for all interfaces.\r\n\r\nMaybe it should look like:\r\n```python\r\n\r\nclass Metric(abc.ABC):\r\n \"\"\"...\r\n \"\"\"\r\n\r\n @abc.abstractmethod\r\n def get_handle(self, label_set: LabelSet) -> \"object\":\r\n \"\"\"...\r\n \"\"\"\r\n\r\n\r\n```\n", "before_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe OpenTelemetry metrics API describes the classes used to report raw\nmeasurements, as well as metrics with known aggregation and labels.\n\nThe `Meter` class is used to construct `Metric` s to record raw statistics\nas well as metrics with predefined aggregation.\n\nSee the `metrics api`_ spec for terminology and context clarification.\n\n.. 
_metrics api:\n https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-metrics.md\n\n\n\"\"\"\nimport abc\nfrom typing import Callable, Dict, Optional, Sequence, Tuple, Type, TypeVar\n\nfrom opentelemetry.util import loader\n\nValueT = TypeVar(\"ValueT\", int, float)\n\n\nclass DefaultMetricHandle:\n \"\"\"The default MetricHandle.\n\n Used when no MetricHandle implementation is available.\n \"\"\"\n\n\nclass CounterHandle:\n def add(self, value: ValueT) -> None:\n \"\"\"Increases the value of the handle by ``value``.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass GaugeHandle:\n def set(self, value: ValueT) -> None:\n \"\"\"Sets the current value of the handle to ``value``.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass MeasureHandle:\n def record(self, value: ValueT) -> None:\n \"\"\"Records the given ``value`` to this handle.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass LabelSet(abc.ABC):\n \"\"\"A canonicalized set of labels useful for preaggregation\n\n Re-usable LabelSet objects provide a potential optimization for scenarios\n where handles might not be effective. For example, if the LabelSet will be\n re-used but only used once per metrics, handles do not offer any\n optimization. It may best to pre-compute a canonicalized LabelSet once and\n re-use it with the direct calling convention. LabelSets are immutable and\n should be opaque in implementation.\n \"\"\"\n\n\nclass DefaultLabelSet(LabelSet):\n \"\"\"The default LabelSet.\n\n Used when no LabelSet implementation is available.\n \"\"\"\n\n\nclass Metric:\n \"\"\"Base class for various types of metrics.\n\n Metric class that inherit from this class are specialized with the type of\n handle that the metric holds.\n \"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"object\":\n \"\"\"Gets a handle, used for repeated-use of metrics instruments.\n\n Handles are useful to reduce the cost of repeatedly recording a metric\n with a pre-defined set of label values. All metric kinds (counter,\n gauge, measure) support declaring a set of required label keys. The\n values corresponding to these keys should be specified in every handle.\n \"Unspecified\" label values, in cases where a handle is requested but\n a value was not provided are permitted.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n \"\"\"\n\n\nclass DefaultMetric(Metric):\n \"\"\"The default Metric used when no Metric implementation is available.\"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"DefaultMetricHandle\":\n \"\"\"Gets a `DefaultMetricHandle`.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n \"\"\"\n return DefaultMetricHandle()\n\n\nclass Counter(Metric):\n \"\"\"A counter type metric that expresses the computation of a sum.\"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"CounterHandle\":\n \"\"\"Gets a `CounterHandle`.\"\"\"\n return CounterHandle()\n\n def add(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Increases the value of the counter by ``value``.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to add to the counter metric.\n \"\"\"\n\n\nclass Gauge(Metric):\n \"\"\"A gauge type metric that expresses a pre-calculated value.\n\n Gauge metrics have a value that is either ``Set`` by explicit\n instrumentation or observed through a callback. 
This kind of metric\n should be used when the metric cannot be expressed as a sum or because\n the measurement interval is arbitrary.\n \"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"GaugeHandle\":\n \"\"\"Gets a `GaugeHandle`.\"\"\"\n return GaugeHandle()\n\n def set(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Sets the value of the gauge to ``value``.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to set the gauge metric to.\n \"\"\"\n\n\nclass Measure(Metric):\n \"\"\"A measure type metric that represent raw stats that are recorded.\n\n Measure metrics represent raw statistics that are recorded. By\n default, measure metrics can accept both positive and negatives.\n Negative inputs will be discarded when monotonic is True.\n \"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"MeasureHandle\":\n \"\"\"Gets a `MeasureHandle` with a float value.\"\"\"\n return MeasureHandle()\n\n def record(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Records the ``value`` to the measure.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to record to this measure metric.\n \"\"\"\n\n\nMetricT = TypeVar(\"MetricT\", Counter, Gauge, Measure)\n\n\n# pylint: disable=unused-argument\nclass Meter(abc.ABC):\n \"\"\"An interface to allow the recording of metrics.\n\n `Metric` s are used for recording pre-defined aggregation (gauge and\n counter), or raw values (measure) in which the aggregation and labels\n for the exported metric are deferred.\n \"\"\"\n\n @abc.abstractmethod\n def record_batch(\n self,\n label_set: LabelSet,\n record_tuples: Sequence[Tuple[\"Metric\", ValueT]],\n ) -> None:\n \"\"\"Atomically records a batch of `Metric` and value pairs.\n\n Allows the functionality of acting upon multiple metrics with\n a single API call. Implementations should find metric and handles that\n match the key-value pairs in the label tuples.\n\n Args:\n label_set: The `LabelSet` associated with all measurements in\n the batch. 
A measurement is a tuple, representing the `Metric`\n being recorded and the corresponding value to record.\n record_tuples: A sequence of pairs of `Metric` s and the\n corresponding value to record for that metric.\n \"\"\"\n\n @abc.abstractmethod\n def create_metric(\n self,\n name: str,\n description: str,\n unit: str,\n value_type: Type[ValueT],\n metric_type: Type[MetricT],\n label_keys: Sequence[str] = (),\n enabled: bool = True,\n monotonic: bool = False,\n ) -> \"Metric\":\n \"\"\"Creates a ``metric_kind`` metric with type ``value_type``.\n\n Args:\n name: The name of the metric.\n description: Human-readable description of the metric.\n unit: Unit of the metric values.\n value_type: The type of values being recorded by the metric.\n metric_type: The type of metric being created.\n label_keys: The keys for the labels with dynamic values.\n enabled: Whether to report the metric by default.\n monotonic: Whether to only allow non-negative values.\n\n Returns: A new ``metric_type`` metric with values of ``value_type``.\n \"\"\"\n\n @abc.abstractmethod\n def get_label_set(self, labels: Dict[str, str]) -> \"LabelSet\":\n \"\"\"Gets a `LabelSet` with the given labels.\n\n Args:\n labels: A dictionary representing label key to label value pairs.\n\n Returns: A `LabelSet` object canonicalized using the given input.\n \"\"\"\n\n\nclass DefaultMeter(Meter):\n \"\"\"The default Meter used when no Meter implementation is available.\"\"\"\n\n def record_batch(\n self,\n label_set: LabelSet,\n record_tuples: Sequence[Tuple[\"Metric\", ValueT]],\n ) -> None:\n pass\n\n def create_metric(\n self,\n name: str,\n description: str,\n unit: str,\n value_type: Type[ValueT],\n metric_type: Type[MetricT],\n label_keys: Sequence[str] = (),\n enabled: bool = True,\n monotonic: bool = False,\n ) -> \"Metric\":\n # pylint: disable=no-self-use\n return DefaultMetric()\n\n def get_label_set(self, labels: Dict[str, str]) -> \"LabelSet\":\n # pylint: disable=no-self-use\n return DefaultLabelSet()\n\n\n# Once https://github.com/python/mypy/issues/7092 is resolved,\n# the following type definition should be replaced with\n# from opentelemetry.util.loader import ImplementationFactory\nImplementationFactory = Callable[[Type[Meter]], Optional[Meter]]\n\n_METER = None\n_METER_FACTORY = None\n\n\ndef meter() -> Meter:\n \"\"\"Gets the current global :class:`~.Meter` object.\n\n If there isn't one set yet, a default will be loaded.\n \"\"\"\n global _METER, _METER_FACTORY # pylint:disable=global-statement\n\n if _METER is None:\n # pylint:disable=protected-access\n _METER = loader._load_impl(DefaultMeter, _METER_FACTORY)\n del _METER_FACTORY\n\n return _METER\n\n\ndef set_preferred_meter_implementation(factory: ImplementationFactory) -> None:\n \"\"\"Set the factory to be used to create the meter.\n\n See :mod:`opentelemetry.util.loader` for details.\n\n This function may not be called after a meter is already loaded.\n\n Args:\n factory: Callback that should create a new :class:`Meter` instance.\n \"\"\"\n global _METER, _METER_FACTORY # pylint:disable=global-statement\n\n if _METER:\n raise RuntimeError(\"Meter already loaded.\")\n\n _METER_FACTORY = factory\n", "path": "opentelemetry-api/src/opentelemetry/metrics/__init__.py"}], "after_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe OpenTelemetry metrics API describes the classes used to report raw\nmeasurements, as well as metrics with known aggregation and labels.\n\nThe `Meter` class is used to construct `Metric` s to record raw statistics\nas well as metrics with predefined aggregation.\n\nSee the `metrics api`_ spec for terminology and context clarification.\n\n.. _metrics api:\n https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-metrics.md\n\n\n\"\"\"\nimport abc\nfrom typing import Callable, Dict, Optional, Sequence, Tuple, Type, TypeVar\n\nfrom opentelemetry.util import loader\n\nValueT = TypeVar(\"ValueT\", int, float)\n\n\nclass DefaultMetricHandle:\n \"\"\"The default MetricHandle.\n\n Used when no MetricHandle implementation is available.\n \"\"\"\n\n\nclass CounterHandle:\n def add(self, value: ValueT) -> None:\n \"\"\"Increases the value of the handle by ``value``.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass GaugeHandle:\n def set(self, value: ValueT) -> None:\n \"\"\"Sets the current value of the handle to ``value``.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass MeasureHandle:\n def record(self, value: ValueT) -> None:\n \"\"\"Records the given ``value`` to this handle.\n\n Args:\n value: The value to record to the handle.\n \"\"\"\n\n\nclass LabelSet(abc.ABC):\n \"\"\"A canonicalized set of labels useful for preaggregation\n\n Re-usable LabelSet objects provide a potential optimization for scenarios\n where handles might not be effective. For example, if the LabelSet will be\n re-used but only used once per metrics, handles do not offer any\n optimization. It may best to pre-compute a canonicalized LabelSet once and\n re-use it with the direct calling convention. LabelSets are immutable and\n should be opaque in implementation.\n \"\"\"\n\n\nclass DefaultLabelSet(LabelSet):\n \"\"\"The default LabelSet.\n\n Used when no LabelSet implementation is available.\n \"\"\"\n\n\nclass Metric(abc.ABC):\n \"\"\"Base class for various types of metrics.\n\n Metric class that inherit from this class are specialized with the type of\n handle that the metric holds.\n \"\"\"\n\n @abc.abstractmethod\n def get_handle(self, label_set: LabelSet) -> \"object\":\n \"\"\"Gets a handle, used for repeated-use of metrics instruments.\n\n Handles are useful to reduce the cost of repeatedly recording a metric\n with a pre-defined set of label values. All metric kinds (counter,\n gauge, measure) support declaring a set of required label keys. 
The\n values corresponding to these keys should be specified in every handle.\n \"Unspecified\" label values, in cases where a handle is requested but\n a value was not provided are permitted.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n \"\"\"\n\n\nclass DefaultMetric(Metric):\n \"\"\"The default Metric used when no Metric implementation is available.\"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"DefaultMetricHandle\":\n \"\"\"Gets a `DefaultMetricHandle`.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n \"\"\"\n return DefaultMetricHandle()\n\n\nclass Counter(Metric):\n \"\"\"A counter type metric that expresses the computation of a sum.\"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"CounterHandle\":\n \"\"\"Gets a `CounterHandle`.\"\"\"\n return CounterHandle()\n\n def add(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Increases the value of the counter by ``value``.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to add to the counter metric.\n \"\"\"\n\n\nclass Gauge(Metric):\n \"\"\"A gauge type metric that expresses a pre-calculated value.\n\n Gauge metrics have a value that is either ``Set`` by explicit\n instrumentation or observed through a callback. This kind of metric\n should be used when the metric cannot be expressed as a sum or because\n the measurement interval is arbitrary.\n \"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"GaugeHandle\":\n \"\"\"Gets a `GaugeHandle`.\"\"\"\n return GaugeHandle()\n\n def set(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Sets the value of the gauge to ``value``.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to set the gauge metric to.\n \"\"\"\n\n\nclass Measure(Metric):\n \"\"\"A measure type metric that represent raw stats that are recorded.\n\n Measure metrics represent raw statistics that are recorded. By\n default, measure metrics can accept both positive and negatives.\n Negative inputs will be discarded when monotonic is True.\n \"\"\"\n\n def get_handle(self, label_set: LabelSet) -> \"MeasureHandle\":\n \"\"\"Gets a `MeasureHandle` with a float value.\"\"\"\n return MeasureHandle()\n\n def record(self, label_set: LabelSet, value: ValueT) -> None:\n \"\"\"Records the ``value`` to the measure.\n\n Args:\n label_set: `LabelSet` to associate with the returned handle.\n value: The value to record to this measure metric.\n \"\"\"\n\n\nMetricT = TypeVar(\"MetricT\", Counter, Gauge, Measure)\n\n\n# pylint: disable=unused-argument\nclass Meter(abc.ABC):\n \"\"\"An interface to allow the recording of metrics.\n\n `Metric` s are used for recording pre-defined aggregation (gauge and\n counter), or raw values (measure) in which the aggregation and labels\n for the exported metric are deferred.\n \"\"\"\n\n @abc.abstractmethod\n def record_batch(\n self,\n label_set: LabelSet,\n record_tuples: Sequence[Tuple[\"Metric\", ValueT]],\n ) -> None:\n \"\"\"Atomically records a batch of `Metric` and value pairs.\n\n Allows the functionality of acting upon multiple metrics with\n a single API call. Implementations should find metric and handles that\n match the key-value pairs in the label tuples.\n\n Args:\n label_set: The `LabelSet` associated with all measurements in\n the batch. 
A measurement is a tuple, representing the `Metric`\n being recorded and the corresponding value to record.\n record_tuples: A sequence of pairs of `Metric` s and the\n corresponding value to record for that metric.\n \"\"\"\n\n @abc.abstractmethod\n def create_metric(\n self,\n name: str,\n description: str,\n unit: str,\n value_type: Type[ValueT],\n metric_type: Type[MetricT],\n label_keys: Sequence[str] = (),\n enabled: bool = True,\n monotonic: bool = False,\n ) -> \"Metric\":\n \"\"\"Creates a ``metric_kind`` metric with type ``value_type``.\n\n Args:\n name: The name of the metric.\n description: Human-readable description of the metric.\n unit: Unit of the metric values.\n value_type: The type of values being recorded by the metric.\n metric_type: The type of metric being created.\n label_keys: The keys for the labels with dynamic values.\n enabled: Whether to report the metric by default.\n monotonic: Whether to only allow non-negative values.\n\n Returns: A new ``metric_type`` metric with values of ``value_type``.\n \"\"\"\n\n @abc.abstractmethod\n def get_label_set(self, labels: Dict[str, str]) -> \"LabelSet\":\n \"\"\"Gets a `LabelSet` with the given labels.\n\n Args:\n labels: A dictionary representing label key to label value pairs.\n\n Returns: A `LabelSet` object canonicalized using the given input.\n \"\"\"\n\n\nclass DefaultMeter(Meter):\n \"\"\"The default Meter used when no Meter implementation is available.\"\"\"\n\n def record_batch(\n self,\n label_set: LabelSet,\n record_tuples: Sequence[Tuple[\"Metric\", ValueT]],\n ) -> None:\n pass\n\n def create_metric(\n self,\n name: str,\n description: str,\n unit: str,\n value_type: Type[ValueT],\n metric_type: Type[MetricT],\n label_keys: Sequence[str] = (),\n enabled: bool = True,\n monotonic: bool = False,\n ) -> \"Metric\":\n # pylint: disable=no-self-use\n return DefaultMetric()\n\n def get_label_set(self, labels: Dict[str, str]) -> \"LabelSet\":\n # pylint: disable=no-self-use\n return DefaultLabelSet()\n\n\n# Once https://github.com/python/mypy/issues/7092 is resolved,\n# the following type definition should be replaced with\n# from opentelemetry.util.loader import ImplementationFactory\nImplementationFactory = Callable[[Type[Meter]], Optional[Meter]]\n\n_METER = None\n_METER_FACTORY = None\n\n\ndef meter() -> Meter:\n \"\"\"Gets the current global :class:`~.Meter` object.\n\n If there isn't one set yet, a default will be loaded.\n \"\"\"\n global _METER, _METER_FACTORY # pylint:disable=global-statement\n\n if _METER is None:\n # pylint:disable=protected-access\n _METER = loader._load_impl(DefaultMeter, _METER_FACTORY)\n del _METER_FACTORY\n\n return _METER\n\n\ndef set_preferred_meter_implementation(factory: ImplementationFactory) -> None:\n \"\"\"Set the factory to be used to create the meter.\n\n See :mod:`opentelemetry.util.loader` for details.\n\n This function may not be called after a meter is already loaded.\n\n Args:\n factory: Callback that should create a new :class:`Meter` instance.\n \"\"\"\n global _METER, _METER_FACTORY # pylint:disable=global-statement\n\n if _METER:\n raise RuntimeError(\"Meter already loaded.\")\n\n _METER_FACTORY = factory\n", "path": "opentelemetry-api/src/opentelemetry/metrics/__init__.py"}]} | 3,693 | 169 |
gh_patches_debug_16454 | rasdani/github-patches | git_diff | huggingface__text-generation-inference-1414 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Local Install: No rule to make target 'install-flash-attention-v2'
### System Info
**Release:** v1.3.4, main branch
**Target**: x86_64-unknown-linux-gnu
**Cargo version**: 1.70.0
**Commit sha**: 630800eed37b15c4b0c9eb8e6ab47212026720f7
**Docker label**: N/A
**nvidia-smi**: Wed Jan 3 18:13:50 2024
```
Mon Jan 8 15:13:03 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4090 On | 00000000:01:00.0 Off | Off |
| 0% 40C P8 26W / 450W | 3MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 On | 00000000:02:00.0 Off | Off |
| 0% 46C P8 26W / 450W | 3MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Followed steps found within the local install instructions (https://github.com/huggingface/text-generation-inference#local-install), in addition to a couple undocumented make steps
1. Fork & pull from main branch, v1.3.4
2. Install and test Rust: 1.70.0 (90c541806 2023-05-31)
3. Install & test Conda: 23.11.0
5. Install and test libprotoc: 3.21.12
6. Execute `export BUILD_EXTENSIONS=True`
7. Execute `make install`
8. Execute `cd server && make install-vllm-cuda && cd ..`
9. Update source code for paged attention [as described here](https://github.com/huggingface/text-generation-inference/pull/1386/commits/ad7f8396732c12368c7961e15fe2287f1aa42db3)
10. Execute `cd server && make install-flash-attention && cd ..`
11. Execute `make run-falcon-7b-instruct`, which yields the following message:
_Flash Attention V2 is not installed.
Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) or install flash attention v2 with `cd server && make install install-flash-attention-v2`_
12. Execute `cd server && make install install-flash-attention-v2`, which yields the following error message:
_make: *** No rule to make target 'install-flash-attention-v2'. Stop._
### Expected behavior
Make target should be found and executed. Solution below.
**SOLUTION**:
The file that issues the warning, `server/text_generation_server/utils/flash_attn.py`, is missing the proper make target suffix. It should be:
`make install install-flash-attention-v2-cuda` or `install-flash-attention-v2-rocm`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `server/text_generation_server/utils/flash_attn.py`
Content:
```
1 import os
2 import torch
3
4 from loguru import logger
5
6 from text_generation_server.utils.import_utils import IS_CUDA_SYSTEM, IS_ROCM_SYSTEM
7
8 if os.getenv("USE_FLASH_ATTENTION", "").lower() == "false":
9 raise ImportError("`USE_FLASH_ATTENTION` is false.")
10
11 if not torch.cuda.is_available():
12 raise ImportError("CUDA is not available")
13
14 major, minor = torch.cuda.get_device_capability()
15 is_sm75 = major == 7 and minor == 5
16 is_sm8x = major == 8 and minor >= 0
17 is_sm90 = major == 9 and minor == 0
18
19 HAS_FLASH_ATTN = False
20 HAS_FLASH_ATTN_V2_CUDA = False
21 HAS_FLASH_ATTN_V2_ROCM = False
22 try:
23 try:
24 import flash_attn_2_cuda
25 except ImportError:
26 raise ImportError(
27 "Flash Attention V2 is not installed.\n"
28 "Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) "
29 "or install flash attention v2 with `cd server && make install install-flash-attention-v2`"
30 )
31 if not (is_sm8x or is_sm90):
32 raise ImportError(
33 f"GPU with CUDA capability {major} {minor} is not supported for "
34 "Flash Attention V2"
35 )
36 HAS_FLASH_ATTN_V2_CUDA = IS_CUDA_SYSTEM
37 HAS_FLASH_ATTN_V2_ROCM = IS_ROCM_SYSTEM
38 except ImportError as e:
39 try:
40 import flash_attn_cuda
41 except ImportError:
42 raise ImportError(
43 "Flash Attention is not installed.\n"
44 "Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) "
45 "or install flash attention with `cd server && make install install-flash-attention`"
46 ) from e
47
48 if IS_CUDA_SYSTEM and not (is_sm75 or is_sm8x or is_sm90):
49 raise ImportError(
50 f"GPU with CUDA capability {major} {minor} is not supported"
51 ) from e
52 elif IS_ROCM_SYSTEM:
53 for idx in range(torch.cuda.device_count()):
54 if "MI210" not in torch.cuda.get_device_name(
55 idx
56 ) and "MI250" not in torch.cuda.get_device_name(idx):
57 raise ImportError(
58 f"AMD GPU {torch.cuda.get_device_name(idx)} does not support flash-attention"
59 )
60
61 logger.warning(f"Unable to use Flash Attention V2: {e}")
62 HAS_FLASH_ATTN = True
63
64
65 def attention(
66 q,
67 k,
68 v,
69 out,
70 cu_seqlens,
71 max_s,
72 softmax_scale,
73 window_size_left=-1,
74 ):
75 if window_size_left <= 0 and window_size_left != -1:
76 raise ValueError("`window_size_left` must be > 0 or -1")
77
78 if HAS_FLASH_ATTN_V2_CUDA:
79 return flash_attn_2_cuda.varlen_fwd(
80 q,
81 k,
82 v,
83 out,
84 cu_seqlens,
85 cu_seqlens,
86 max_s,
87 max_s,
88 0.0,
89 softmax_scale,
90 False,
91 True,
92 window_size_left,
93 0,
94 False,
95 None,
96 )
97 elif HAS_FLASH_ATTN_V2_ROCM:
98 if window_size_left != -1:
99 raise ValueError(
100 f"RoCm version of Flash Attention v2 does not support window attention (window_size_left != -1, got window_size_left={window_size_left})."
101 )
102
103 # RoCm flash API does not take the window_size_left and window_size_right arguments.
104 return flash_attn_2_cuda.varlen_fwd(
105 q,
106 k,
107 v,
108 out,
109 cu_seqlens,
110 cu_seqlens,
111 max_s,
112 max_s,
113 0.0,
114 softmax_scale,
115 False,
116 True,
117 False,
118 None,
119 )
120 elif HAS_FLASH_ATTN:
121 if window_size_left != -1:
122 raise NotImplementedError(
123 "window_size_left is only available with flash attn v2"
124 )
125
126 # Flash attention v1 requires q, k and v to have the same number of heads
127 if k.shape[1] != q.shape[1]:
128 # MQA expand
129 if k.shape[1] == 1:
130 k = k.expand(-1, q.shape[1], -1)
131 # Grouped attention reshape
132 else:
133 original_shape = k.shape
134 k = (
135 k.unsqueeze(2)
136 .expand(-1, -1, q.shape[1] // k.shape[1], -1)
137 .reshape(original_shape[0], -1, original_shape[2])
138 )
139 if v.shape[1] != q.shape[1]:
140 # MQA expand
141 if v.shape[1] == 1:
142 v = v.expand(-1, q.shape[1], -1)
143 # Grouped attention reshape
144 else:
145 original_shape = v.shape
146 v = (
147 v.unsqueeze(2)
148 .expand(-1, -1, q.shape[1] // v.shape[1], -1)
149 .reshape(original_shape[0], -1, original_shape[2])
150 )
151
152 return flash_attn_cuda.fwd(
153 q,
154 k,
155 v,
156 out,
157 cu_seqlens,
158 cu_seqlens,
159 max_s,
160 max_s,
161 0.0,
162 softmax_scale,
163 False,
164 True,
165 False,
166 0,
167 None,
168 )
169
170 raise NotImplementedError("flash attention is not installed")
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/server/text_generation_server/utils/flash_attn.py b/server/text_generation_server/utils/flash_attn.py
--- a/server/text_generation_server/utils/flash_attn.py
+++ b/server/text_generation_server/utils/flash_attn.py
@@ -23,10 +23,15 @@
try:
import flash_attn_2_cuda
except ImportError:
+ architecture_suffix = ""
+ if IS_CUDA_SYSTEM:
+ architecture_suffix = "-cuda"
+ elif IS_ROCM_SYSTEM:
+ architecture_suffix = "-rocm"
raise ImportError(
"Flash Attention V2 is not installed.\n"
"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) "
- "or install flash attention v2 with `cd server && make install install-flash-attention-v2`"
+ f"or install flash attention v2 with `cd server && make install install-flash-attention-v2{architecture_suffix}`"
)
if not (is_sm8x or is_sm90):
raise ImportError(
| {"golden_diff": "diff --git a/server/text_generation_server/utils/flash_attn.py b/server/text_generation_server/utils/flash_attn.py\n--- a/server/text_generation_server/utils/flash_attn.py\n+++ b/server/text_generation_server/utils/flash_attn.py\n@@ -23,10 +23,15 @@\n try:\n import flash_attn_2_cuda\n except ImportError:\n+ architecture_suffix = \"\"\n+ if IS_CUDA_SYSTEM:\n+ architecture_suffix = \"-cuda\"\n+ elif IS_ROCM_SYSTEM:\n+ architecture_suffix = \"-rocm\"\n raise ImportError(\n \"Flash Attention V2 is not installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n- \"or install flash attention v2 with `cd server && make install install-flash-attention-v2`\"\n+ f\"or install flash attention v2 with `cd server && make install install-flash-attention-v2{architecture_suffix}`\"\n )\n if not (is_sm8x or is_sm90):\n raise ImportError(\n", "issue": "Local Install: No rule to make target 'install-flash-attention-v2'\n### System Info\r\n\r\n**Release:** v1.3.4, main branch\r\n**Target**: x86_64-unknown-linux-gnu\r\n**Cargo version**: 1.70.0\r\n**Commit sha**: 630800eed37b15c4b0c9eb8e6ab47212026720f7\r\n**Docker label**: N/A\r\n**nvidia-smi**: Wed Jan 3 18:13:50 2024\r\n \r\n```\r\nMon Jan 8 15:13:03 2024\r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 NVIDIA GeForce RTX 4090 On | 00000000:01:00.0 Off | Off |\r\n| 0% 40C P8 26W / 450W | 3MiB / 24564MiB | 0% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n| 1 NVIDIA GeForce RTX 4090 On | 00000000:02:00.0 Off | Off |\r\n| 0% 46C P8 26W / 450W | 3MiB / 24564MiB | 0% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n\r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| No running processes found |\r\n+---------------------------------------------------------------------------------------+\r\n```\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [X] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\nFollowed steps found within the local install instructions (https://github.com/huggingface/text-generation-inference#local-install), in addition to a couple undocumented make steps \r\n\r\n1. Fork & pull from main branch, v1.3.4\r\n2. Install and test Rust: 1.70.0 (90c541806 2023-05-31)\r\n3. Install & test Conda: 23.11.0\r\n5. Install and test libprotoc: 3.21.12\r\n6. Execute `export BUILD_EXTENSIONS=True`\r\n7. Execute `make install`\r\n8. Execute `cd server && make install-vllm-cuda && cd ..`\r\n9. Update source code for paged attention [as described here](https://github.com/huggingface/text-generation-inference/pull/1386/commits/ad7f8396732c12368c7961e15fe2287f1aa42db3)\r\n10. 
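The patch above derives the make-target suffix from the detected platform, so the install hint names a target that actually exists (`install-flash-attention-v2-cuda` or `install-flash-attention-v2-rocm`). A minimal sketch of that selection logic follows; the two flags are illustrative constants here, while the server imports them from `text_generation_server.utils.import_utils`:

```python
# Illustrative flags; in the server these come from
# text_generation_server.utils.import_utils.
IS_CUDA_SYSTEM = True
IS_ROCM_SYSTEM = False


def flash_attention_v2_hint() -> str:
    """Build the install hint with the platform-specific make target."""
    architecture_suffix = ""
    if IS_CUDA_SYSTEM:
        architecture_suffix = "-cuda"
    elif IS_ROCM_SYSTEM:
        architecture_suffix = "-rocm"
    return (
        "cd server && make install "
        f"install-flash-attention-v2{architecture_suffix}"
    )


print(flash_attention_v2_hint())
# cd server && make install install-flash-attention-v2-cuda
```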
Execute `cd server && make install-flash-attention && cd ..`\r\n11. Execute `make run-falcon-7b-instruct`, which yields the following message:\r\n\r\n_Flash Attention V2 is not installed.\r\nUse the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) or install flash attention v2 with `cd server && make install install-flash-attention-v2`_\r\n\r\n12. Execute `cd server && make install install-flash-attention-v2`, which yields the following error message:\r\n\r\n_make: *** No rule to make target 'install-flash-attention-v2'. Stop._\r\n\r\n### Expected behavior\r\n\r\nMake target should be found and executed. Solution below.\r\n\r\n**SOLUTION**: \r\n\r\nThe file that issues the warning, `server/text_generation_server/utils/flash_attn.py`, is missing the proper make target suffix. It should be: \r\n\r\n`make install install-flash-attention-v2-cuda` or `install-flash-attention-v2-rocm`\n", "before_files": [{"content": "import os\nimport torch\n\nfrom loguru import logger\n\nfrom text_generation_server.utils.import_utils import IS_CUDA_SYSTEM, IS_ROCM_SYSTEM\n\nif os.getenv(\"USE_FLASH_ATTENTION\", \"\").lower() == \"false\":\n raise ImportError(\"`USE_FLASH_ATTENTION` is false.\")\n\nif not torch.cuda.is_available():\n raise ImportError(\"CUDA is not available\")\n\nmajor, minor = torch.cuda.get_device_capability()\nis_sm75 = major == 7 and minor == 5\nis_sm8x = major == 8 and minor >= 0\nis_sm90 = major == 9 and minor == 0\n\nHAS_FLASH_ATTN = False\nHAS_FLASH_ATTN_V2_CUDA = False\nHAS_FLASH_ATTN_V2_ROCM = False\ntry:\n try:\n import flash_attn_2_cuda\n except ImportError:\n raise ImportError(\n \"Flash Attention V2 is not installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n \"or install flash attention v2 with `cd server && make install install-flash-attention-v2`\"\n )\n if not (is_sm8x or is_sm90):\n raise ImportError(\n f\"GPU with CUDA capability {major} {minor} is not supported for \"\n \"Flash Attention V2\"\n )\n HAS_FLASH_ATTN_V2_CUDA = IS_CUDA_SYSTEM\n HAS_FLASH_ATTN_V2_ROCM = IS_ROCM_SYSTEM\nexcept ImportError as e:\n try:\n import flash_attn_cuda\n except ImportError:\n raise ImportError(\n \"Flash Attention is not installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n \"or install flash attention with `cd server && make install install-flash-attention`\"\n ) from e\n\n if IS_CUDA_SYSTEM and not (is_sm75 or is_sm8x or is_sm90):\n raise ImportError(\n f\"GPU with CUDA capability {major} {minor} is not supported\"\n ) from e\n elif IS_ROCM_SYSTEM:\n for idx in range(torch.cuda.device_count()):\n if \"MI210\" not in torch.cuda.get_device_name(\n idx\n ) and \"MI250\" not in torch.cuda.get_device_name(idx):\n raise ImportError(\n f\"AMD GPU {torch.cuda.get_device_name(idx)} does not support flash-attention\"\n )\n\n logger.warning(f\"Unable to use Flash Attention V2: {e}\")\n HAS_FLASH_ATTN = True\n\n\ndef attention(\n q,\n k,\n v,\n out,\n cu_seqlens,\n max_s,\n softmax_scale,\n window_size_left=-1,\n):\n if window_size_left <= 0 and window_size_left != -1:\n raise ValueError(\"`window_size_left` must be > 0 or -1\")\n\n if HAS_FLASH_ATTN_V2_CUDA:\n return flash_attn_2_cuda.varlen_fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n window_size_left,\n 0,\n False,\n None,\n )\n elif HAS_FLASH_ATTN_V2_ROCM:\n if window_size_left != -1:\n raise ValueError(\n f\"RoCm version of Flash 
Attention v2 does not support window attention (window_size_left != -1, got window_size_left={window_size_left}).\"\n )\n\n # RoCm flash API does not take the window_size_left and window_size_right arguments.\n return flash_attn_2_cuda.varlen_fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n False,\n None,\n )\n elif HAS_FLASH_ATTN:\n if window_size_left != -1:\n raise NotImplementedError(\n \"window_size_left is only available with flash attn v2\"\n )\n\n # Flash attention v1 requires q, k and v to have the same number of heads\n if k.shape[1] != q.shape[1]:\n # MQA expand\n if k.shape[1] == 1:\n k = k.expand(-1, q.shape[1], -1)\n # Grouped attention reshape\n else:\n original_shape = k.shape\n k = (\n k.unsqueeze(2)\n .expand(-1, -1, q.shape[1] // k.shape[1], -1)\n .reshape(original_shape[0], -1, original_shape[2])\n )\n if v.shape[1] != q.shape[1]:\n # MQA expand\n if v.shape[1] == 1:\n v = v.expand(-1, q.shape[1], -1)\n # Grouped attention reshape\n else:\n original_shape = v.shape\n v = (\n v.unsqueeze(2)\n .expand(-1, -1, q.shape[1] // v.shape[1], -1)\n .reshape(original_shape[0], -1, original_shape[2])\n )\n\n return flash_attn_cuda.fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n False,\n 0,\n None,\n )\n\n raise NotImplementedError(\"flash attention is not installed\")\n", "path": "server/text_generation_server/utils/flash_attn.py"}], "after_files": [{"content": "import os\nimport torch\n\nfrom loguru import logger\n\nfrom text_generation_server.utils.import_utils import IS_CUDA_SYSTEM, IS_ROCM_SYSTEM\n\nif os.getenv(\"USE_FLASH_ATTENTION\", \"\").lower() == \"false\":\n raise ImportError(\"`USE_FLASH_ATTENTION` is false.\")\n\nif not torch.cuda.is_available():\n raise ImportError(\"CUDA is not available\")\n\nmajor, minor = torch.cuda.get_device_capability()\nis_sm75 = major == 7 and minor == 5\nis_sm8x = major == 8 and minor >= 0\nis_sm90 = major == 9 and minor == 0\n\nHAS_FLASH_ATTN = False\nHAS_FLASH_ATTN_V2_CUDA = False\nHAS_FLASH_ATTN_V2_ROCM = False\ntry:\n try:\n import flash_attn_2_cuda\n except ImportError:\n architecture_suffix = \"\"\n if IS_CUDA_SYSTEM:\n architecture_suffix = \"-cuda\"\n elif IS_ROCM_SYSTEM:\n architecture_suffix = \"-rocm\"\n raise ImportError(\n \"Flash Attention V2 is not installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n f\"or install flash attention v2 with `cd server && make install install-flash-attention-v2{architecture_suffix}`\"\n )\n if not (is_sm8x or is_sm90):\n raise ImportError(\n f\"GPU with CUDA capability {major} {minor} is not supported for \"\n \"Flash Attention V2\"\n )\n HAS_FLASH_ATTN_V2_CUDA = IS_CUDA_SYSTEM\n HAS_FLASH_ATTN_V2_ROCM = IS_ROCM_SYSTEM\nexcept ImportError as e:\n try:\n import flash_attn_cuda\n except ImportError:\n raise ImportError(\n \"Flash Attention is not installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n \"or install flash attention with `cd server && make install install-flash-attention`\"\n ) from e\n\n if IS_CUDA_SYSTEM and not (is_sm75 or is_sm8x or is_sm90):\n raise ImportError(\n f\"GPU with CUDA capability {major} {minor} is not supported\"\n ) from e\n elif IS_ROCM_SYSTEM:\n for idx in range(torch.cuda.device_count()):\n if \"MI210\" not in torch.cuda.get_device_name(\n idx\n ) and \"MI250\" not in torch.cuda.get_device_name(idx):\n raise 
ImportError(\n f\"AMD GPU {torch.cuda.get_device_name(idx)} does not support flash-attention\"\n )\n\n logger.warning(f\"Unable to use Flash Attention V2: {e}\")\n HAS_FLASH_ATTN = True\n\n\ndef attention(\n q,\n k,\n v,\n out,\n cu_seqlens,\n max_s,\n softmax_scale,\n window_size_left=-1,\n):\n if window_size_left <= 0 and window_size_left != -1:\n raise ValueError(\"`window_size_left` must be > 0 or -1\")\n\n if HAS_FLASH_ATTN_V2_CUDA:\n return flash_attn_2_cuda.varlen_fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n window_size_left,\n 0,\n False,\n None,\n )\n elif HAS_FLASH_ATTN_V2_ROCM:\n if window_size_left != -1:\n raise ValueError(\n f\"RoCm version of Flash Attention v2 does not support window attention (window_size_left != -1, got window_size_left={window_size_left}).\"\n )\n\n # RoCm flash API does not take the window_size_left and window_size_right arguments.\n return flash_attn_2_cuda.varlen_fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n False,\n None,\n )\n elif HAS_FLASH_ATTN:\n if window_size_left != -1:\n raise NotImplementedError(\n \"window_size_left is only available with flash attn v2\"\n )\n\n # Flash attention v1 requires q, k and v to have the same number of heads\n if k.shape[1] != q.shape[1]:\n # MQA expand\n if k.shape[1] == 1:\n k = k.expand(-1, q.shape[1], -1)\n # Grouped attention reshape\n else:\n original_shape = k.shape\n k = (\n k.unsqueeze(2)\n .expand(-1, -1, q.shape[1] // k.shape[1], -1)\n .reshape(original_shape[0], -1, original_shape[2])\n )\n if v.shape[1] != q.shape[1]:\n # MQA expand\n if v.shape[1] == 1:\n v = v.expand(-1, q.shape[1], -1)\n # Grouped attention reshape\n else:\n original_shape = v.shape\n v = (\n v.unsqueeze(2)\n .expand(-1, -1, q.shape[1] // v.shape[1], -1)\n .reshape(original_shape[0], -1, original_shape[2])\n )\n\n return flash_attn_cuda.fwd(\n q,\n k,\n v,\n out,\n cu_seqlens,\n cu_seqlens,\n max_s,\n max_s,\n 0.0,\n softmax_scale,\n False,\n True,\n False,\n 0,\n None,\n )\n\n raise NotImplementedError(\"flash attention is not installed\")\n", "path": "server/text_generation_server/utils/flash_attn.py"}]} | 2,914 | 230 |
gh_patches_debug_22811 | rasdani/github-patches | git_diff | goauthentik__authentik-9441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redis (Unix) Socket?
**Describe your question/**
Is it possible to run an authentik docker container (server and worker) with a redis unix socket rather than tcp?
Directly overwriting redis variables via user_settings.py doesn't appear to modify anything despite @BeryJu suggesting in [pull request #4036](https://github.com/goauthentik/authentik/pull/4036) it should be viable.
**Relevant infos**
Docker compose:
version: '3.4'
services:
server:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:latest
restart: unless-stopped
command: server
environment:
AUTHENTIK_POSTGRESQL__HOST: ${PG_HOST}
AUTHENTIK_POSTGRESQL__USER: ${PG_USER}
AUTHENTIK_POSTGRESQL__NAME: ${PG_NAME}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
AUTHENTIK_REDIS__HOST: ${REDIS_HOST}
AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
AUTHENTIK_LISTEN__HTTP: ${AUTHENTIK_LISTEN__HTTP}
AUTHENTIK_LISTEN__HTTPS: ${AUTHENTIK_LISTEN__HTTPS}
volumes:
- ./media:/media
- ./custom-templates:/templates
- /run/redis/redis-server.sock:/run/redis/redis-server.sock
- /var/run/postgresql/.s.PGSQL.5432:/var/run/postgresql/.s.PGSQL.5432
- ./user_settings.py:/data/user_settings.py
ports:
- "${AUTHENTIK_PORT_HTTP}:${AUTHENTIK_PORT_HTTP}"
- "${AUTHENTIK_PORT_HTTPS}:${AUTHENTIK_PORT_HTTPS}"
worker:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:latest
restart: unless-stopped
command: worker
environment:
AUTHENTIK_POSTGRESQL__HOST: ${PG_HOST}
AUTHENTIK_POSTGRESQL__USER: ${PG_USER}
AUTHENTIK_POSTGRESQL__NAME: ${PG_NAME}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
AUTHENTIK_REDIS__HOST: ${REDIS_HOST}
AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
user: authentik
volumes:
- ./media:/media
- ./certs:/certs
- ./custom-templates:/templates
- /run/docker.sock:/var/run/docker.sock
- /run/redis/redis-server.sock:/run/redis/redis-server.sock
- /var/run/postgresql/.s.PGSQL.5432:/var/run/postgresql/.s.PGSQL.5432
- ./user_settings.py:/data/user_settings.py
I didn't include the definition for ${REDIS_HOST} as sockets cannot be passed via environmental variables per [pull 4036](https://github.com/goauthentik/authentik/pull/4036).
**Logs**
DBG undefined | event=Loaded config logger=authentik.lib.config timestamp=1679734632.5913134 file=/authentik/lib/default.yml
DBG undefined | event=Loaded environment variables logger=authentik.lib.config timestamp=1679734632.5915806 count=7
INF undefined | event=Starting authentik bootstrap logger=authentik.lib.config timestamp=1679734632.59165
INF undefined | event=PostgreSQL connection successful logger=authentik.lib.config timestamp=1679734632.5988
INF undefined | event=Redis Connection failed, retrying... (Error 99 connecting to localhost:6379. Cannot assign requested address.) logger=authentik.lib.config timestamp=1679734633.600902 redis_url=redis://:@localhost:6379/0
**Version and Deployment (please complete the following information):**
- authentik version: 2023.3
- Deployment: docker-compose
**Additional context**
docker exec -it authentik-stack-server-1 bash
authentik@283a42f97c09:/$ cd /data
authentik@283a42f97c09:/data$ ls -la
total 27
drwxr-xr-x 2 root root 3 Mar 25 08:24 .
drwxr-xr-x 33 root root 39 Mar 25 08:24 ..
-rw-rw-r-- 1 1022 _runit-log 16463 Mar 25 08:45 user_settings.py
authentik@283a42f97c09:/data$ cat user_settings.py | grep redis
REDIS_PROTOCOL_PREFIX = "redis://"
if CONFIG.y_bool("redis.tls", False):
REDIS_PROTOCOL_PREFIX = "rediss://"
REDIS_CELERY_TLS_REQUIREMENTS = f"?ssl_cert_reqs={CONFIG.y('redis.tls_reqs')}"
_redis_url = (
f"{quote_plus(CONFIG.y('redis.password'))}@{quote_plus(CONFIG.y('redis.host'))}:"
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": 'unix://run/redis/redis-server.sock',
"TIMEOUT": int(CONFIG.y("redis.cache_timeout", 300)),
"OPTIONS": {"CLIENT_CLASS": "django_redis.client.DefaultClient"},
"BACKEND": "channels_redis.core.RedisChannelLayer",
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": 'unix://run/redis/redis-server.sock',
CELERY_BROKER_URL = 'redis+socket:///run/redis/redis-server.sock'
CELERY_RESULT_BACKEND = 'redis+socket:///run/redis/redis-server.sock'
"redis": "WARNING",
authentik@283a42f97c09:/data$
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lifecycle/wait_for_db.py`
Content:
```
1 #!/usr/bin/env python
2 """This file needs to be run from the root of the project to correctly
3 import authentik. This is done by the dockerfile."""
4 from sys import exit as sysexit
5 from time import sleep
6 from urllib.parse import quote_plus
7
8 from psycopg import OperationalError, connect
9 from redis import Redis
10 from redis.exceptions import RedisError
11
12 from authentik.lib.config import CONFIG, redis_url
13
14
15 def check_postgres():
16 while True:
17 try:
18 conn = connect(
19 dbname=CONFIG.get("postgresql.name"),
20 user=CONFIG.get("postgresql.user"),
21 password=CONFIG.get("postgresql.password"),
22 host=CONFIG.get("postgresql.host"),
23 port=CONFIG.get_int("postgresql.port"),
24 sslmode=CONFIG.get("postgresql.sslmode"),
25 sslrootcert=CONFIG.get("postgresql.sslrootcert"),
26 sslcert=CONFIG.get("postgresql.sslcert"),
27 sslkey=CONFIG.get("postgresql.sslkey"),
28 )
29 conn.cursor()
30 break
31 except OperationalError as exc:
32 sleep(1)
33 CONFIG.log("info", f"PostgreSQL connection failed, retrying... ({exc})")
34 CONFIG.log("info", "PostgreSQL connection successful")
35
36
37 def check_redis():
38 url = redis_url(CONFIG.get("redis.db"))
39 while True:
40 try:
41 redis = Redis.from_url(url)
42 redis.ping()
43 break
44 except RedisError as exc:
45 sleep(1)
46 sanitized_url = url.replace(quote_plus(CONFIG.get("redis.password")), "******")
47 CONFIG.log(
48 "info", f"Redis Connection failed, retrying... ({exc})", redis_url=sanitized_url
49 )
50 CONFIG.log("info", "Redis Connection successful")
51
52
53 def wait_for_db():
54 CONFIG.log("info", "Starting authentik bootstrap")
55 # Sanity check, ensure SECRET_KEY is set before we even check for database connectivity
56 if CONFIG.get("secret_key") is None or len(CONFIG.get("secret_key")) == 0:
57 CONFIG.log("info", "----------------------------------------------------------------------")
58 CONFIG.log("info", "Secret key missing, check https://goauthentik.io/docs/installation/.")
59 CONFIG.log("info", "----------------------------------------------------------------------")
60 sysexit(1)
61 check_postgres()
62 check_redis()
63 CONFIG.log("info", "Finished authentik bootstrap")
64
65
66 if __name__ == "__main__":
67 wait_for_db()
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lifecycle/wait_for_db.py b/lifecycle/wait_for_db.py
--- a/lifecycle/wait_for_db.py
+++ b/lifecycle/wait_for_db.py
@@ -3,7 +3,6 @@
 import authentik. This is done by the dockerfile."""
 from sys import exit as sysexit
 from time import sleep
-from urllib.parse import quote_plus
 
 from psycopg import OperationalError, connect
 from redis import Redis
@@ -35,7 +34,7 @@
 
 
 def check_redis():
-    url = redis_url(CONFIG.get("redis.db"))
+    url = CONFIG.get("cache.url") or redis_url(CONFIG.get("redis.db"))
     while True:
         try:
             redis = Redis.from_url(url)
@@ -43,10 +42,7 @@
             break
         except RedisError as exc:
             sleep(1)
-            sanitized_url = url.replace(quote_plus(CONFIG.get("redis.password")), "******")
-            CONFIG.log(
-                "info", f"Redis Connection failed, retrying... ({exc})", redis_url=sanitized_url
-            )
+            CONFIG.log("info", f"Redis Connection failed, retrying... ({exc})")
     CONFIG.log("info", "Redis Connection successful")
| {"golden_diff": "diff --git a/lifecycle/wait_for_db.py b/lifecycle/wait_for_db.py\n--- a/lifecycle/wait_for_db.py\n+++ b/lifecycle/wait_for_db.py\n@@ -3,7 +3,6 @@\n import authentik. This is done by the dockerfile.\"\"\"\n from sys import exit as sysexit\n from time import sleep\n-from urllib.parse import quote_plus\n \n from psycopg import OperationalError, connect\n from redis import Redis\n@@ -35,7 +34,7 @@\n \n \n def check_redis():\n- url = redis_url(CONFIG.get(\"redis.db\"))\n+ url = CONFIG.get(\"cache.url\") or redis_url(CONFIG.get(\"redis.db\"))\n while True:\n try:\n redis = Redis.from_url(url)\n@@ -43,10 +42,7 @@\n break\n except RedisError as exc:\n sleep(1)\n- sanitized_url = url.replace(quote_plus(CONFIG.get(\"redis.password\")), \"******\")\n- CONFIG.log(\n- \"info\", f\"Redis Connection failed, retrying... ({exc})\", redis_url=sanitized_url\n- )\n+ CONFIG.log(\"info\", f\"Redis Connection failed, retrying... ({exc})\")\n CONFIG.log(\"info\", \"Redis Connection successful\")\n", "issue": "Redis (Unix) Socket?\n**Describe your question/**\r\nIs it possible to run an authentik docker container (server and worker) with a redis unix socket rather than tcp?\r\nDirectly overwriting redis variables via user_settings.py doesn't appear to modify anything despite @BeryJu suggesting in [pull request #4036](https://github.com/goauthentik/authentik/pull/4036) it should be viable.\r\n\r\n**Relevant infos**\r\nDocker compose:\r\nversion: '3.4'\r\n\r\nservices:\r\n server:\r\n image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:latest\r\n restart: unless-stopped\r\n command: server\r\n environment:\r\n AUTHENTIK_POSTGRESQL__HOST: ${PG_HOST}\r\n AUTHENTIK_POSTGRESQL__USER: ${PG_USER}\r\n AUTHENTIK_POSTGRESQL__NAME: ${PG_NAME}\r\n AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}\r\n AUTHENTIK_REDIS__HOST: ${REDIS_HOST}\r\n AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}\r\n AUTHENTIK_LISTEN__HTTP: ${AUTHENTIK_LISTEN__HTTP}\r\n AUTHENTIK_LISTEN__HTTPS: ${AUTHENTIK_LISTEN__HTTPS}\r\n volumes:\r\n - ./media:/media\r\n - ./custom-templates:/templates\r\n - /run/redis/redis-server.sock:/run/redis/redis-server.sock\r\n - /var/run/postgresql/.s.PGSQL.5432:/var/run/postgresql/.s.PGSQL.5432\r\n - ./user_settings.py:/data/user_settings.py\r\n ports:\r\n - \"${AUTHENTIK_PORT_HTTP}:${AUTHENTIK_PORT_HTTP}\"\r\n - \"${AUTHENTIK_PORT_HTTPS}:${AUTHENTIK_PORT_HTTPS}\"\r\n worker:\r\n image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:latest\r\n restart: unless-stopped\r\n command: worker\r\n environment:\r\n AUTHENTIK_POSTGRESQL__HOST: ${PG_HOST}\r\n AUTHENTIK_POSTGRESQL__USER: ${PG_USER}\r\n AUTHENTIK_POSTGRESQL__NAME: ${PG_NAME}\r\n AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}\r\n AUTHENTIK_REDIS__HOST: ${REDIS_HOST}\r\n AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}\r\n user: authentik\r\n volumes:\r\n - ./media:/media\r\n - ./certs:/certs\r\n - ./custom-templates:/templates\r\n - /run/docker.sock:/var/run/docker.sock\r\n - /run/redis/redis-server.sock:/run/redis/redis-server.sock\r\n - /var/run/postgresql/.s.PGSQL.5432:/var/run/postgresql/.s.PGSQL.5432\r\n - ./user_settings.py:/data/user_settings.py\r\n\r\nI didn't include the definition for ${REDIS_HOST} as sockets cannot be passed via environmental variables per [pull 4036](https://github.com/goauthentik/authentik/pull/4036).\r\n\r\n**Logs**\r\nDBG undefined | event=Loaded config logger=authentik.lib.config timestamp=1679734632.5913134 file=/authentik/lib/default.yml \r\nDBG undefined | event=Loaded environment variables logger=authentik.lib.config 
timestamp=1679734632.5915806 count=7 \r\nINF undefined | event=Starting authentik bootstrap logger=authentik.lib.config timestamp=1679734632.59165 \r\nINF undefined | event=PostgreSQL connection successful logger=authentik.lib.config timestamp=1679734632.5988 \r\nINF undefined | event=Redis Connection failed, retrying... (Error 99 connecting to localhost:6379. Cannot assign requested address.) logger=authentik.lib.config timestamp=1679734633.600902 redis_url=redis://:@localhost:6379/0 \r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2023.3\r\n - Deployment: docker-compose\r\n\r\n**Additional context**\r\ndocker exec -it authentik-stack-server-1 bash\r\nauthentik@283a42f97c09:/$ cd /data\r\nauthentik@283a42f97c09:/data$ ls -la\r\ntotal 27\r\ndrwxr-xr-x 2 root root 3 Mar 25 08:24 .\r\ndrwxr-xr-x 33 root root 39 Mar 25 08:24 ..\r\n-rw-rw-r-- 1 1022 _runit-log 16463 Mar 25 08:45 user_settings.py\r\nauthentik@283a42f97c09:/data$ cat user_settings.py | grep redis\r\nREDIS_PROTOCOL_PREFIX = \"redis://\"\r\nif CONFIG.y_bool(\"redis.tls\", False):\r\n REDIS_PROTOCOL_PREFIX = \"rediss://\"\r\n REDIS_CELERY_TLS_REQUIREMENTS = f\"?ssl_cert_reqs={CONFIG.y('redis.tls_reqs')}\"\r\n_redis_url = (\r\n f\"{quote_plus(CONFIG.y('redis.password'))}@{quote_plus(CONFIG.y('redis.host'))}:\"\r\n \"BACKEND\": \"django_redis.cache.RedisCache\",\r\n \"LOCATION\": 'unix://run/redis/redis-server.sock',\r\n \"TIMEOUT\": int(CONFIG.y(\"redis.cache_timeout\", 300)),\r\n \"OPTIONS\": {\"CLIENT_CLASS\": \"django_redis.client.DefaultClient\"},\r\n \"BACKEND\": \"channels_redis.core.RedisChannelLayer\",\r\n \"BACKEND\": \"django_redis.cache.RedisCache\",\r\n \"LOCATION\": 'unix://run/redis/redis-server.sock',\r\nCELERY_BROKER_URL = 'redis+socket:///run/redis/redis-server.sock'\r\nCELERY_RESULT_BACKEND = 'redis+socket:///run/redis/redis-server.sock'\r\n \"redis\": \"WARNING\",\r\nauthentik@283a42f97c09:/data$ \n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"This file needs to be run from the root of the project to correctly\nimport authentik. This is done by the dockerfile.\"\"\"\nfrom sys import exit as sysexit\nfrom time import sleep\nfrom urllib.parse import quote_plus\n\nfrom psycopg import OperationalError, connect\nfrom redis import Redis\nfrom redis.exceptions import RedisError\n\nfrom authentik.lib.config import CONFIG, redis_url\n\n\ndef check_postgres():\n while True:\n try:\n conn = connect(\n dbname=CONFIG.get(\"postgresql.name\"),\n user=CONFIG.get(\"postgresql.user\"),\n password=CONFIG.get(\"postgresql.password\"),\n host=CONFIG.get(\"postgresql.host\"),\n port=CONFIG.get_int(\"postgresql.port\"),\n sslmode=CONFIG.get(\"postgresql.sslmode\"),\n sslrootcert=CONFIG.get(\"postgresql.sslrootcert\"),\n sslcert=CONFIG.get(\"postgresql.sslcert\"),\n sslkey=CONFIG.get(\"postgresql.sslkey\"),\n )\n conn.cursor()\n break\n except OperationalError as exc:\n sleep(1)\n CONFIG.log(\"info\", f\"PostgreSQL connection failed, retrying... ({exc})\")\n CONFIG.log(\"info\", \"PostgreSQL connection successful\")\n\n\ndef check_redis():\n url = redis_url(CONFIG.get(\"redis.db\"))\n while True:\n try:\n redis = Redis.from_url(url)\n redis.ping()\n break\n except RedisError as exc:\n sleep(1)\n sanitized_url = url.replace(quote_plus(CONFIG.get(\"redis.password\")), \"******\")\n CONFIG.log(\n \"info\", f\"Redis Connection failed, retrying... 
({exc})\", redis_url=sanitized_url\n )\n CONFIG.log(\"info\", \"Redis Connection successful\")\n\n\ndef wait_for_db():\n CONFIG.log(\"info\", \"Starting authentik bootstrap\")\n # Sanity check, ensure SECRET_KEY is set before we even check for database connectivity\n if CONFIG.get(\"secret_key\") is None or len(CONFIG.get(\"secret_key\")) == 0:\n CONFIG.log(\"info\", \"----------------------------------------------------------------------\")\n CONFIG.log(\"info\", \"Secret key missing, check https://goauthentik.io/docs/installation/.\")\n CONFIG.log(\"info\", \"----------------------------------------------------------------------\")\n sysexit(1)\n check_postgres()\n check_redis()\n CONFIG.log(\"info\", \"Finished authentik bootstrap\")\n\n\nif __name__ == \"__main__\":\n wait_for_db()\n", "path": "lifecycle/wait_for_db.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"This file needs to be run from the root of the project to correctly\nimport authentik. This is done by the dockerfile.\"\"\"\nfrom sys import exit as sysexit\nfrom time import sleep\n\nfrom psycopg import OperationalError, connect\nfrom redis import Redis\nfrom redis.exceptions import RedisError\n\nfrom authentik.lib.config import CONFIG, redis_url\n\n\ndef check_postgres():\n while True:\n try:\n conn = connect(\n dbname=CONFIG.get(\"postgresql.name\"),\n user=CONFIG.get(\"postgresql.user\"),\n password=CONFIG.get(\"postgresql.password\"),\n host=CONFIG.get(\"postgresql.host\"),\n port=CONFIG.get_int(\"postgresql.port\"),\n sslmode=CONFIG.get(\"postgresql.sslmode\"),\n sslrootcert=CONFIG.get(\"postgresql.sslrootcert\"),\n sslcert=CONFIG.get(\"postgresql.sslcert\"),\n sslkey=CONFIG.get(\"postgresql.sslkey\"),\n )\n conn.cursor()\n break\n except OperationalError as exc:\n sleep(1)\n CONFIG.log(\"info\", f\"PostgreSQL connection failed, retrying... ({exc})\")\n CONFIG.log(\"info\", \"PostgreSQL connection successful\")\n\n\ndef check_redis():\n url = CONFIG.get(\"cache.url\") or redis_url(CONFIG.get(\"redis.db\"))\n while True:\n try:\n redis = Redis.from_url(url)\n redis.ping()\n break\n except RedisError as exc:\n sleep(1)\n CONFIG.log(\"info\", f\"Redis Connection failed, retrying... ({exc})\")\n CONFIG.log(\"info\", \"Redis Connection successful\")\n\n\ndef wait_for_db():\n CONFIG.log(\"info\", \"Starting authentik bootstrap\")\n # Sanity check, ensure SECRET_KEY is set before we even check for database connectivity\n if CONFIG.get(\"secret_key\") is None or len(CONFIG.get(\"secret_key\")) == 0:\n CONFIG.log(\"info\", \"----------------------------------------------------------------------\")\n CONFIG.log(\"info\", \"Secret key missing, check https://goauthentik.io/docs/installation/.\")\n CONFIG.log(\"info\", \"----------------------------------------------------------------------\")\n sysexit(1)\n check_postgres()\n check_redis()\n CONFIG.log(\"info\", \"Finished authentik bootstrap\")\n\n\nif __name__ == \"__main__\":\n wait_for_db()\n", "path": "lifecycle/wait_for_db.py"}]} | 2,257 | 266 |
gh_patches_debug_6997 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1878 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Theme files override docs_dir files on Windows
As investigated and discussed in https://github.com/squidfunk/mkdocs-material/issues/1282, custom files in docs_dir are overridden by theme files. 
For example, if you use the default MkDocs theme and create a custom `favicon.ico` in `docs/img`, the custom icon does not appear in the output. It is overwritten by the theme's own favicon.
The issue was already brought up in https://github.com/mkdocs/mkdocs/issues/1671 and fixed by https://github.com/mkdocs/mkdocs/pull/1672.
However, the fix (https://github.com/mkdocs/mkdocs/pull/1672) **only works on Unix systems (including Mac). It does not work on Windows.**
Windows handles path names differently, so I suppose the issue must be caused by that. See especially the use of `os.path` in `files.py` (https://github.com/mkdocs/mkdocs/commit/f8ac3c7ee).
I had a similar issue in mkdocs-exclude, see here: https://github.com/apenwarr/mkdocs-exclude/issues/2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/structure/files.py`
Content:
```
1 # coding: utf-8
2
3 from __future__ import unicode_literals
4 import fnmatch
5 import os
6 import logging
7 from functools import cmp_to_key
8
9 from mkdocs import utils
10
11
12 log = logging.getLogger(__name__)
13 log.addFilter(utils.warning_filter)
14
15
16 class Files(object):
17 """ A collection of File objects. """
18 def __init__(self, files):
19 self._files = files
20 self.src_paths = {file.src_path: file for file in files}
21
22 def __iter__(self):
23 return iter(self._files)
24
25 def __len__(self):
26 return len(self._files)
27
28 def __contains__(self, path):
29 return path in self.src_paths
30
31 def get_file_from_path(self, path):
32 """ Return a File instance with File.src_path equal to path. """
33 return self.src_paths.get(os.path.normpath(path))
34
35 def append(self, file):
36 """ Append file to Files collection. """
37 self._files.append(file)
38 self.src_paths[file.src_path] = file
39
40 def copy_static_files(self, dirty=False):
41 """ Copy static files from source to destination. """
42 for file in self:
43 if not file.is_documentation_page():
44 file.copy_file(dirty)
45
46 def documentation_pages(self):
47 """ Return iterable of all Markdown page file objects. """
48 return [file for file in self if file.is_documentation_page()]
49
50 def static_pages(self):
51 """ Return iterable of all static page file objects. """
52 return [file for file in self if file.is_static_page()]
53
54 def media_files(self):
55 """ Return iterable of all file objects which are not documentation or static pages. """
56 return [file for file in self if file.is_media_file()]
57
58 def javascript_files(self):
59 """ Return iterable of all javascript file objects. """
60 return [file for file in self if file.is_javascript()]
61
62 def css_files(self):
63 """ Return iterable of all CSS file objects. """
64 return [file for file in self if file.is_css()]
65
66 def add_files_from_theme(self, env, config):
67 """ Retrieve static files from Jinja environment and add to collection. """
68 def filter(name):
69 patterns = ['.*', '*.py', '*.pyc', '*.html', '*readme*', 'mkdocs_theme.yml']
70 patterns.extend('*{0}'.format(x) for x in utils.markdown_extensions)
71 patterns.extend(config['theme'].static_templates)
72 for pattern in patterns:
73 if fnmatch.fnmatch(name.lower(), pattern):
74 return False
75 return True
76 for path in env.list_templates(filter_func=filter):
77 # Theme files do not override docs_dir files
78 if path not in self:
79 for dir in config['theme'].dirs:
80 # Find the first theme dir which contains path
81 if os.path.isfile(os.path.join(dir, path)):
82 self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))
83 break
84
85
86 class File(object):
87 """
88 A MkDocs File object.
89
90 Points to the source and destination locations of a file.
91
92 The `path` argument must be a path that exists relative to `src_dir`.
93
94 The `src_dir` and `dest_dir` must be absolute paths on the local file system.
95
96 The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is
97 mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is
98 mapped to an HTML index file (`index.html`) nested in a directory using the "name" of the file in `path`. The
99 `use_directory_urls` argument has no effect on non-Markdown files.
100
101 File objects have the following properties, which are Unicode strings:
102
103 File.src_path
104 The pure path of the source file relative to the source directory.
105
106 File.abs_src_path
107 The absolute concrete path of the source file.
108
109 File.dest_path
110 The pure path of the destination file relative to the destination directory.
111
112 File.abs_dest_path
113 The absolute concrete path of the destination file.
114
115 File.url
116 The url of the destination file relative to the destination directory as a string.
117 """
118 def __init__(self, path, src_dir, dest_dir, use_directory_urls):
119 self.page = None
120 self.src_path = os.path.normpath(path)
121 self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))
122 self.name = self._get_stem()
123 self.dest_path = self._get_dest_path(use_directory_urls)
124 self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))
125 self.url = self._get_url(use_directory_urls)
126
127 def __eq__(self, other):
128
129 def sub_dict(d):
130 return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])
131
132 return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))
133
134 def __ne__(self, other):
135 return not self.__eq__(other)
136
137 def _get_stem(self):
138 """ Return the name of the file without it's extension. """
139 filename = os.path.basename(self.src_path)
140 stem, ext = os.path.splitext(filename)
141 return 'index' if stem in ('index', 'README') else stem
142
143 def _get_dest_path(self, use_directory_urls):
144 """ Return destination path based on source path. """
145 if self.is_documentation_page():
146 if use_directory_urls:
147 parent, filename = os.path.split(self.src_path)
148 if self.name == 'index':
149 # index.md or README.md => index.html
150 return os.path.join(parent, 'index.html')
151 else:
152 # foo.md => foo/index.html
153 return os.path.join(parent, self.name, 'index.html')
154 else:
155 # foo.md => foo.html
156 root, ext = os.path.splitext(self.src_path)
157 return root + '.html'
158 return self.src_path
159
160 def _get_url(self, use_directory_urls):
161 """ Return url based in destination path. """
162 url = self.dest_path.replace(os.path.sep, '/')
163 dirname, filename = os.path.split(url)
164 if use_directory_urls and filename == 'index.html':
165 if dirname == '':
166 url = '.'
167 else:
168 url = dirname + '/'
169 return utils.urlquote(url)
170
171 def url_relative_to(self, other):
172 """ Return url for file relative to other file. """
173 return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)
174
175 def copy_file(self, dirty=False):
176 """ Copy source file to destination, ensuring parent directories exist. """
177 if dirty and not self.is_modified():
178 log.debug("Skip copying unmodified file: '{}'".format(self.src_path))
179 else:
180 log.debug("Copying media file: '{}'".format(self.src_path))
181 utils.copy_file(self.abs_src_path, self.abs_dest_path)
182
183 def is_modified(self):
184 if os.path.isfile(self.abs_dest_path):
185 return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)
186 return True
187
188 def is_documentation_page(self):
189 """ Return True if file is a Markdown page. """
190 return os.path.splitext(self.src_path)[1] in utils.markdown_extensions
191
192 def is_static_page(self):
193 """ Return True if file is a static page (html, xml, json). """
194 return os.path.splitext(self.src_path)[1] in (
195 '.html',
196 '.htm',
197 '.xml',
198 '.json',
199 )
200
201 def is_media_file(self):
202 """ Return True if file is not a documentation or static page. """
203 return not (self.is_documentation_page() or self.is_static_page())
204
205 def is_javascript(self):
206 """ Return True if file is a JavaScript file. """
207 return os.path.splitext(self.src_path)[1] in (
208 '.js',
209 '.javascript',
210 )
211
212 def is_css(self):
213 """ Return True if file is a CSS file. """
214 return os.path.splitext(self.src_path)[1] in (
215 '.css',
216 )
217
218
219 def get_files(config):
220 """ Walk the `docs_dir` and return a Files collection. """
221 files = []
222 exclude = ['.*', '/templates']
223
224 for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):
225 relative_dir = os.path.relpath(source_dir, config['docs_dir'])
226
227 for dirname in list(dirnames):
228 path = os.path.normpath(os.path.join(relative_dir, dirname))
229 # Skip any excluded directories
230 if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):
231 dirnames.remove(dirname)
232 dirnames.sort()
233
234 for filename in _sort_files(filenames):
235 path = os.path.normpath(os.path.join(relative_dir, filename))
236 # Skip any excluded files
237 if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):
238 continue
239 # Skip README.md if an index file also exists in dir
240 if filename.lower() == 'readme.md' and 'index.md' in filenames:
241 log.warning("Both index.md and readme.md found. Skipping readme.md from {}".format(source_dir))
242 continue
243 files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))
244
245 return Files(files)
246
247
248 def _sort_files(filenames):
249 """ Always sort `index` or `README` as first filename in list. """
250
251 def compare(x, y):
252 if x == y:
253 return 0
254 if os.path.splitext(y)[0] in ['index', 'README']:
255 return 1
256 if os.path.splitext(x)[0] in ['index', 'README'] or x < y:
257 return -1
258 return 1
259
260 return sorted(filenames, key=cmp_to_key(compare))
261
262
263 def _filter_paths(basename, path, is_dir, exclude):
264 """ .gitignore style file filtering. """
265 for item in exclude:
266 # Items ending in '/' apply only to directories.
267 if item.endswith('/') and not is_dir:
268 continue
269 # Items starting with '/' apply to the whole path.
270 # In any other cases just the basename is used.
271 match = path if item.startswith('/') else basename
272 if fnmatch.fnmatch(match, item.strip('/')):
273 return True
274 return False
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py
--- a/mkdocs/structure/files.py
+++ b/mkdocs/structure/files.py
@@ -75,6 +75,7 @@
             return True
         for path in env.list_templates(filter_func=filter):
             # Theme files do not override docs_dir files
+            path = os.path.normpath(path)
             if path not in self:
                 for dir in config['theme'].dirs:
                     # Find the first theme dir which contains path
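The added `os.path.normpath(path)` call is what makes the `path not in self` check line up on Windows: Jinja reports template names with forward slashes, while `Files.src_paths` keys come from `File.src_path`, which is built with `os.path.normpath` and therefore uses backslashes on Windows. A small sketch of the mismatch, using `ntpath` to emulate Windows path handling on any platform:

```python
import ntpath  # what os.path resolves to on Windows

theme_name = "img/favicon.ico"                # as returned by env.list_templates()
src_key = ntpath.normpath("img/favicon.ico")  # how File.src_path is stored on Windows

print(src_key)                                 # img\favicon.ico
print(theme_name == src_key)                   # False -> theme favicon wrongly added
print(ntpath.normpath(theme_name) == src_key)  # True  -> docs_dir favicon kept
```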
| {"golden_diff": "diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py\n--- a/mkdocs/structure/files.py\n+++ b/mkdocs/structure/files.py\n@@ -75,6 +75,7 @@\n return True\n for path in env.list_templates(filter_func=filter):\n # Theme files do not override docs_dir files\n+ path = os.path.normpath(path)\n if path not in self:\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n", "issue": "Theme files override docs_dir files on Windows\nAs investigated and discussed in https://github.com/squidfunk/mkdocs-material/issues/1282, custom files in docs_dir are overriden by theme files. \r\n\r\nFor example, if you use the default MkDocs theme and create a custom `favicon.ico` in `docs/img`, the custom icon does not appear in the output. It is overwritten by the theme's own favicon.\r\n\r\nThe issue was already brought up in https://github.com/mkdocs/mkdocs/issues/1671 and fixed by https://github.com/mkdocs/mkdocs/pull/1672.\r\n\r\nHowever, the fix (https://github.com/mkdocs/mkdocs/pull/1672) **only works on Unix systems (including Mac). It does not work on Windows.**\r\n\r\nWindows handles path names differently, so I suppose the issue must be caused by that. See especially the use of `os.path` in `files.py` (https://github.com/mkdocs/mkdocs/commit/f8ac3c7ee).\r\n\r\nI had a similar issue in mkdocs-exclude, see here: https://github.com/apenwarr/mkdocs-exclude/issues/2\r\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport fnmatch\nimport os\nimport logging\nfrom functools import cmp_to_key\n\nfrom mkdocs import utils\n\n\nlog = logging.getLogger(__name__)\nlog.addFilter(utils.warning_filter)\n\n\nclass Files(object):\n \"\"\" A collection of File objects. \"\"\"\n def __init__(self, files):\n self._files = files\n self.src_paths = {file.src_path: file for file in files}\n\n def __iter__(self):\n return iter(self._files)\n\n def __len__(self):\n return len(self._files)\n\n def __contains__(self, path):\n return path in self.src_paths\n\n def get_file_from_path(self, path):\n \"\"\" Return a File instance with File.src_path equal to path. \"\"\"\n return self.src_paths.get(os.path.normpath(path))\n\n def append(self, file):\n \"\"\" Append file to Files collection. \"\"\"\n self._files.append(file)\n self.src_paths[file.src_path] = file\n\n def copy_static_files(self, dirty=False):\n \"\"\" Copy static files from source to destination. \"\"\"\n for file in self:\n if not file.is_documentation_page():\n file.copy_file(dirty)\n\n def documentation_pages(self):\n \"\"\" Return iterable of all Markdown page file objects. \"\"\"\n return [file for file in self if file.is_documentation_page()]\n\n def static_pages(self):\n \"\"\" Return iterable of all static page file objects. \"\"\"\n return [file for file in self if file.is_static_page()]\n\n def media_files(self):\n \"\"\" Return iterable of all file objects which are not documentation or static pages. \"\"\"\n return [file for file in self if file.is_media_file()]\n\n def javascript_files(self):\n \"\"\" Return iterable of all javascript file objects. \"\"\"\n return [file for file in self if file.is_javascript()]\n\n def css_files(self):\n \"\"\" Return iterable of all CSS file objects. \"\"\"\n return [file for file in self if file.is_css()]\n\n def add_files_from_theme(self, env, config):\n \"\"\" Retrieve static files from Jinja environment and add to collection. 
\"\"\"\n def filter(name):\n patterns = ['.*', '*.py', '*.pyc', '*.html', '*readme*', 'mkdocs_theme.yml']\n patterns.extend('*{0}'.format(x) for x in utils.markdown_extensions)\n patterns.extend(config['theme'].static_templates)\n for pattern in patterns:\n if fnmatch.fnmatch(name.lower(), pattern):\n return False\n return True\n for path in env.list_templates(filter_func=filter):\n # Theme files do not override docs_dir files\n if path not in self:\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n if os.path.isfile(os.path.join(dir, path)):\n self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))\n break\n\n\nclass File(object):\n \"\"\"\n A MkDocs File object.\n\n Points to the source and destination locations of a file.\n\n The `path` argument must be a path that exists relative to `src_dir`.\n\n The `src_dir` and `dest_dir` must be absolute paths on the local file system.\n\n The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is\n mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is\n mapped to an HTML index file (`index.html`) nested in a directory using the \"name\" of the file in `path`. The\n `use_directory_urls` argument has no effect on non-Markdown files.\n\n File objects have the following properties, which are Unicode strings:\n\n File.src_path\n The pure path of the source file relative to the source directory.\n\n File.abs_src_path\n The absolute concrete path of the source file.\n\n File.dest_path\n The pure path of the destination file relative to the destination directory.\n\n File.abs_dest_path\n The absolute concrete path of the destination file.\n\n File.url\n The url of the destination file relative to the destination directory as a string.\n \"\"\"\n def __init__(self, path, src_dir, dest_dir, use_directory_urls):\n self.page = None\n self.src_path = os.path.normpath(path)\n self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))\n self.name = self._get_stem()\n self.dest_path = self._get_dest_path(use_directory_urls)\n self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))\n self.url = self._get_url(use_directory_urls)\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])\n\n return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _get_stem(self):\n \"\"\" Return the name of the file without it's extension. \"\"\"\n filename = os.path.basename(self.src_path)\n stem, ext = os.path.splitext(filename)\n return 'index' if stem in ('index', 'README') else stem\n\n def _get_dest_path(self, use_directory_urls):\n \"\"\" Return destination path based on source path. \"\"\"\n if self.is_documentation_page():\n if use_directory_urls:\n parent, filename = os.path.split(self.src_path)\n if self.name == 'index':\n # index.md or README.md => index.html\n return os.path.join(parent, 'index.html')\n else:\n # foo.md => foo/index.html\n return os.path.join(parent, self.name, 'index.html')\n else:\n # foo.md => foo.html\n root, ext = os.path.splitext(self.src_path)\n return root + '.html'\n return self.src_path\n\n def _get_url(self, use_directory_urls):\n \"\"\" Return url based in destination path. 
\"\"\"\n url = self.dest_path.replace(os.path.sep, '/')\n dirname, filename = os.path.split(url)\n if use_directory_urls and filename == 'index.html':\n if dirname == '':\n url = '.'\n else:\n url = dirname + '/'\n return utils.urlquote(url)\n\n def url_relative_to(self, other):\n \"\"\" Return url for file relative to other file. \"\"\"\n return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)\n\n def copy_file(self, dirty=False):\n \"\"\" Copy source file to destination, ensuring parent directories exist. \"\"\"\n if dirty and not self.is_modified():\n log.debug(\"Skip copying unmodified file: '{}'\".format(self.src_path))\n else:\n log.debug(\"Copying media file: '{}'\".format(self.src_path))\n utils.copy_file(self.abs_src_path, self.abs_dest_path)\n\n def is_modified(self):\n if os.path.isfile(self.abs_dest_path):\n return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)\n return True\n\n def is_documentation_page(self):\n \"\"\" Return True if file is a Markdown page. \"\"\"\n return os.path.splitext(self.src_path)[1] in utils.markdown_extensions\n\n def is_static_page(self):\n \"\"\" Return True if file is a static page (html, xml, json). \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.html',\n '.htm',\n '.xml',\n '.json',\n )\n\n def is_media_file(self):\n \"\"\" Return True if file is not a documentation or static page. \"\"\"\n return not (self.is_documentation_page() or self.is_static_page())\n\n def is_javascript(self):\n \"\"\" Return True if file is a JavaScript file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.js',\n '.javascript',\n )\n\n def is_css(self):\n \"\"\" Return True if file is a CSS file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.css',\n )\n\n\ndef get_files(config):\n \"\"\" Walk the `docs_dir` and return a Files collection. \"\"\"\n files = []\n exclude = ['.*', '/templates']\n\n for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):\n relative_dir = os.path.relpath(source_dir, config['docs_dir'])\n\n for dirname in list(dirnames):\n path = os.path.normpath(os.path.join(relative_dir, dirname))\n # Skip any excluded directories\n if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):\n dirnames.remove(dirname)\n dirnames.sort()\n\n for filename in _sort_files(filenames):\n path = os.path.normpath(os.path.join(relative_dir, filename))\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n # Skip README.md if an index file also exists in dir\n if filename.lower() == 'readme.md' and 'index.md' in filenames:\n log.warning(\"Both index.md and readme.md found. Skipping readme.md from {}\".format(source_dir))\n continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n\n return Files(files)\n\n\ndef _sort_files(filenames):\n \"\"\" Always sort `index` or `README` as first filename in list. \"\"\"\n\n def compare(x, y):\n if x == y:\n return 0\n if os.path.splitext(y)[0] in ['index', 'README']:\n return 1\n if os.path.splitext(x)[0] in ['index', 'README'] or x < y:\n return -1\n return 1\n\n return sorted(filenames, key=cmp_to_key(compare))\n\n\ndef _filter_paths(basename, path, is_dir, exclude):\n \"\"\" .gitignore style file filtering. 
\"\"\"\n for item in exclude:\n # Items ending in '/' apply only to directories.\n if item.endswith('/') and not is_dir:\n continue\n # Items starting with '/' apply to the whole path.\n # In any other cases just the basename is used.\n match = path if item.startswith('/') else basename\n if fnmatch.fnmatch(match, item.strip('/')):\n return True\n return False\n", "path": "mkdocs/structure/files.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport fnmatch\nimport os\nimport logging\nfrom functools import cmp_to_key\n\nfrom mkdocs import utils\n\n\nlog = logging.getLogger(__name__)\nlog.addFilter(utils.warning_filter)\n\n\nclass Files(object):\n \"\"\" A collection of File objects. \"\"\"\n def __init__(self, files):\n self._files = files\n self.src_paths = {file.src_path: file for file in files}\n\n def __iter__(self):\n return iter(self._files)\n\n def __len__(self):\n return len(self._files)\n\n def __contains__(self, path):\n return path in self.src_paths\n\n def get_file_from_path(self, path):\n \"\"\" Return a File instance with File.src_path equal to path. \"\"\"\n return self.src_paths.get(os.path.normpath(path))\n\n def append(self, file):\n \"\"\" Append file to Files collection. \"\"\"\n self._files.append(file)\n self.src_paths[file.src_path] = file\n\n def copy_static_files(self, dirty=False):\n \"\"\" Copy static files from source to destination. \"\"\"\n for file in self:\n if not file.is_documentation_page():\n file.copy_file(dirty)\n\n def documentation_pages(self):\n \"\"\" Return iterable of all Markdown page file objects. \"\"\"\n return [file for file in self if file.is_documentation_page()]\n\n def static_pages(self):\n \"\"\" Return iterable of all static page file objects. \"\"\"\n return [file for file in self if file.is_static_page()]\n\n def media_files(self):\n \"\"\" Return iterable of all file objects which are not documentation or static pages. \"\"\"\n return [file for file in self if file.is_media_file()]\n\n def javascript_files(self):\n \"\"\" Return iterable of all javascript file objects. \"\"\"\n return [file for file in self if file.is_javascript()]\n\n def css_files(self):\n \"\"\" Return iterable of all CSS file objects. \"\"\"\n return [file for file in self if file.is_css()]\n\n def add_files_from_theme(self, env, config):\n \"\"\" Retrieve static files from Jinja environment and add to collection. \"\"\"\n def filter(name):\n patterns = ['.*', '*.py', '*.pyc', '*.html', '*readme*', 'mkdocs_theme.yml']\n patterns.extend('*{0}'.format(x) for x in utils.markdown_extensions)\n patterns.extend(config['theme'].static_templates)\n for pattern in patterns:\n if fnmatch.fnmatch(name.lower(), pattern):\n return False\n return True\n for path in env.list_templates(filter_func=filter):\n # Theme files do not override docs_dir files\n path = os.path.normpath(path)\n if path not in self:\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n if os.path.isfile(os.path.join(dir, path)):\n self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))\n break\n\n\nclass File(object):\n \"\"\"\n A MkDocs File object.\n\n Points to the source and destination locations of a file.\n\n The `path` argument must be a path that exists relative to `src_dir`.\n\n The `src_dir` and `dest_dir` must be absolute paths on the local file system.\n\n The `use_directory_urls` argument controls how destination paths are generated. 
If `False`, a Markdown file is\n mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is\n mapped to an HTML index file (`index.html`) nested in a directory using the \"name\" of the file in `path`. The\n `use_directory_urls` argument has no effect on non-Markdown files.\n\n File objects have the following properties, which are Unicode strings:\n\n File.src_path\n The pure path of the source file relative to the source directory.\n\n File.abs_src_path\n The absolute concrete path of the source file.\n\n File.dest_path\n The pure path of the destination file relative to the destination directory.\n\n File.abs_dest_path\n The absolute concrete path of the destination file.\n\n File.url\n The url of the destination file relative to the destination directory as a string.\n \"\"\"\n def __init__(self, path, src_dir, dest_dir, use_directory_urls):\n self.page = None\n self.src_path = os.path.normpath(path)\n self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))\n self.name = self._get_stem()\n self.dest_path = self._get_dest_path(use_directory_urls)\n self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))\n self.url = self._get_url(use_directory_urls)\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])\n\n return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _get_stem(self):\n \"\"\" Return the name of the file without it's extension. \"\"\"\n filename = os.path.basename(self.src_path)\n stem, ext = os.path.splitext(filename)\n return 'index' if stem in ('index', 'README') else stem\n\n def _get_dest_path(self, use_directory_urls):\n \"\"\" Return destination path based on source path. \"\"\"\n if self.is_documentation_page():\n if use_directory_urls:\n parent, filename = os.path.split(self.src_path)\n if self.name == 'index':\n # index.md or README.md => index.html\n return os.path.join(parent, 'index.html')\n else:\n # foo.md => foo/index.html\n return os.path.join(parent, self.name, 'index.html')\n else:\n # foo.md => foo.html\n root, ext = os.path.splitext(self.src_path)\n return root + '.html'\n return self.src_path\n\n def _get_url(self, use_directory_urls):\n \"\"\" Return url based in destination path. \"\"\"\n url = self.dest_path.replace(os.path.sep, '/')\n dirname, filename = os.path.split(url)\n if use_directory_urls and filename == 'index.html':\n if dirname == '':\n url = '.'\n else:\n url = dirname + '/'\n return utils.urlquote(url)\n\n def url_relative_to(self, other):\n \"\"\" Return url for file relative to other file. \"\"\"\n return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)\n\n def copy_file(self, dirty=False):\n \"\"\" Copy source file to destination, ensuring parent directories exist. \"\"\"\n if dirty and not self.is_modified():\n log.debug(\"Skip copying unmodified file: '{}'\".format(self.src_path))\n else:\n log.debug(\"Copying media file: '{}'\".format(self.src_path))\n utils.copy_file(self.abs_src_path, self.abs_dest_path)\n\n def is_modified(self):\n if os.path.isfile(self.abs_dest_path):\n return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)\n return True\n\n def is_documentation_page(self):\n \"\"\" Return True if file is a Markdown page. 
\"\"\"\n return os.path.splitext(self.src_path)[1] in utils.markdown_extensions\n\n def is_static_page(self):\n \"\"\" Return True if file is a static page (html, xml, json). \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.html',\n '.htm',\n '.xml',\n '.json',\n )\n\n def is_media_file(self):\n \"\"\" Return True if file is not a documentation or static page. \"\"\"\n return not (self.is_documentation_page() or self.is_static_page())\n\n def is_javascript(self):\n \"\"\" Return True if file is a JavaScript file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.js',\n '.javascript',\n )\n\n def is_css(self):\n \"\"\" Return True if file is a CSS file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.css',\n )\n\n\ndef get_files(config):\n \"\"\" Walk the `docs_dir` and return a Files collection. \"\"\"\n files = []\n exclude = ['.*', '/templates']\n\n for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):\n relative_dir = os.path.relpath(source_dir, config['docs_dir'])\n\n for dirname in list(dirnames):\n path = os.path.normpath(os.path.join(relative_dir, dirname))\n # Skip any excluded directories\n if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):\n dirnames.remove(dirname)\n dirnames.sort()\n\n for filename in _sort_files(filenames):\n path = os.path.normpath(os.path.join(relative_dir, filename))\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n # Skip README.md if an index file also exists in dir\n if filename.lower() == 'readme.md' and 'index.md' in filenames:\n log.warning(\"Both index.md and readme.md found. Skipping readme.md from {}\".format(source_dir))\n continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n\n return Files(files)\n\n\ndef _sort_files(filenames):\n \"\"\" Always sort `index` or `README` as first filename in list. \"\"\"\n\n def compare(x, y):\n if x == y:\n return 0\n if os.path.splitext(y)[0] in ['index', 'README']:\n return 1\n if os.path.splitext(x)[0] in ['index', 'README'] or x < y:\n return -1\n return 1\n\n return sorted(filenames, key=cmp_to_key(compare))\n\n\ndef _filter_paths(basename, path, is_dir, exclude):\n \"\"\" .gitignore style file filtering. \"\"\"\n for item in exclude:\n # Items ending in '/' apply only to directories.\n if item.endswith('/') and not is_dir:\n continue\n # Items starting with '/' apply to the whole path.\n # In any other cases just the basename is used.\n match = path if item.startswith('/') else basename\n if fnmatch.fnmatch(match, item.strip('/')):\n return True\n return False\n", "path": "mkdocs/structure/files.py"}]} | 3,522 | 117 |
gh_patches_debug_8792 | rasdani/github-patches | git_diff | rasterio__rasterio-401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rio-merge ignores --driver
Due to a [typo](https://github.com/mapbox/rasterio/blob/master/rasterio/rio/merge.py#L90):
``` python
kwargs['driver'] == driver
```
Instead of:
``` python
kwargs['driver'] = driver
```
--- END ISSUE ---
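To make the one-character difference concrete, here is a minimal sketch with hypothetical values (not rasterio's actual state): the `==` form evaluates a comparison and discards the result, so the requested driver never reaches `kwargs`.

```python
# Minimal sketch of the bug's effect (hypothetical values, not rasterio's real state)
kwargs = {"driver": "GTiff"}
driver = "PNG"  # e.g. what the user passed via --driver

kwargs["driver"] == driver   # comparison only; evaluates to False and is thrown away
print(kwargs["driver"])      # still "GTiff" -- the --driver option is silently ignored

kwargs["driver"] = driver    # assignment; what the command intends
print(kwargs["driver"])      # now "PNG"
```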
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/merge.py`
Content:
```
1 # Merge command.
2
3
4 import logging
5 import math
6 import os.path
7 import warnings
8
9 import click
10 from cligj import files_inout_arg, format_opt
11
12 from .helpers import resolve_inout
13 from . import options
14 import rasterio
15 from rasterio.transform import Affine
16
17
18 @click.command(short_help="Merge a stack of raster datasets.")
19 @files_inout_arg
20 @options.output_opt
21 @format_opt
22 @options.bounds_opt
23 @click.option('-r', '--res', nargs=2, type=float, default=None,
24 help="Output dataset resolution: pixel width, pixel height")
25 @click.option('--nodata', type=float, default=None,
26 help="Override nodata values defined in input datasets")
27 @click.pass_context
28 def merge(ctx, files, output, driver, bounds, res, nodata):
29 """Copy valid pixels from input files to an output file.
30
31 All files must have the same number of bands, data type, and
32 coordinate reference system.
33
34 Input files are merged in their listed order using the reverse
35 painter's algorithm. If the output file exists, its values will be
36 overwritten by input values.
37
38 Geospatial bounds and resolution of a new output file in the
39 units of the input file coordinate reference system may be provided
40 and are otherwise taken from the first input file.
41 """
42 import numpy as np
43
44 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
45 logger = logging.getLogger('rio')
46
47 try:
48 with rasterio.drivers(CPL_DEBUG=verbosity>2):
49 output, files = resolve_inout(files=files, output=output)
50
51 with rasterio.open(files[0]) as first:
52 first_res = first.res
53 kwargs = first.meta
54 kwargs.pop('affine')
55 nodataval = first.nodatavals[0]
56 dtype = first.dtypes[0]
57
58 if os.path.exists(output):
59 # TODO: prompt user to update existing file (-i option) like:
60 # overwrite b.tif? (y/n [n]) n
61 # not overwritten
62 dst = rasterio.open(output, 'r+')
63 nodataval = dst.nodatavals[0]
64 dtype = dst.dtypes[0]
65 dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)
66 else:
67 # Create new output file.
68 # Extent from option or extent of all inputs.
69 if not bounds:
70 # scan input files.
71 xs = []
72 ys = []
73 for f in files:
74 with rasterio.open(f) as src:
75 left, bottom, right, top = src.bounds
76 xs.extend([left, right])
77 ys.extend([bottom, top])
78 bounds = min(xs), min(ys), max(xs), max(ys)
79 output_transform = Affine.translation(bounds[0], bounds[3])
80
81 # Resolution/pixel size.
82 if not res:
83 res = first_res
84 output_transform *= Affine.scale(res[0], -res[1])
85
86 # Dataset shape.
87 output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))
88 output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))
89
90 kwargs['driver'] == driver
91 kwargs['transform'] = output_transform
92 kwargs['width'] = output_width
93 kwargs['height'] = output_height
94
95 logger.debug("Kwargs: %r", kwargs)
96 logger.debug("bounds: %r", bounds)
97 logger.debug("Res: %r", res)
98
99 dst = rasterio.open(output, 'w', **kwargs)
100 dest = np.zeros((first.count, output_height, output_width),
101 dtype=dtype)
102
103 logger.debug("In merge, dest shape: %r", dest.shape)
104
105 if nodata is not None:
106 nodataval = nodata
107
108 if nodataval is not None:
109 # Only fill if the nodataval is within dtype's range.
110 inrange = False
111 if np.dtype(dtype).kind in ('i', 'u'):
112 info = np.iinfo(dtype)
113 inrange = (info.min <= nodataval <= info.max)
114 elif np.dtype(dtype).kind == 'f':
115 info = np.finfo(dtype)
116 inrange = (info.min <= nodataval <= info.max)
117 if inrange:
118 dest.fill(nodataval)
119 else:
120 warnings.warn(
121 "Input file's nodata value, %s, is beyond the valid "
122 "range of its data type, %s. Consider overriding it "
123 "using the --nodata option for better results." % (
124 nodataval, dtype))
125 else:
126 nodataval = 0
127
128 dst_w, dst_s, dst_e, dst_n = dst.bounds
129
130 for fname in reversed(files):
131 with rasterio.open(fname) as src:
132 # Real World (tm) use of boundless reads.
133 # This approach uses the maximum amount of memory to solve
134 # the problem. Making it more efficient is a TODO.
135
136 # 1. Compute spatial intersection of destination
137 # and source.
138 src_w, src_s, src_e, src_n = src.bounds
139
140 int_w = src_w if src_w > dst_w else dst_w
141 int_s = src_s if src_s > dst_s else dst_s
142 int_e = src_e if src_e < dst_e else dst_e
143 int_n = src_n if src_n < dst_n else dst_n
144
145 # 2. Compute the source window.
146 src_window = src.window(int_w, int_s, int_e, int_n)
147
148 # 3. Compute the destination window.
149 dst_window = dst.window(int_w, int_s, int_e, int_n)
150
151 # 4. Initialize temp array.
152 temp = np.zeros(
153 (first.count,) + tuple(b - a for a, b in dst_window),
154 dtype=dtype)
155
156 temp = src.read(
157 out=temp,
158 window=src_window,
159 boundless=False,
160 masked=True)
161
162 # 5. Copy elements of temp into dest.
163 roff, coff = dst.index(int_w, int_n)
164 h, w = temp.shape[-2:]
165
166 region = dest[:,roff:roff+h,coff:coff+w]
167 np.copyto(region, temp,
168 where=np.logical_and(
169 region==nodataval, temp.mask==False))
170
171 if dst.mode == 'r+':
172 temp = dst.read(masked=True)
173 np.copyto(dest, temp,
174 where=np.logical_and(
175 dest==nodataval, temp.mask==False))
176
177 dst.write(dest)
178 dst.close()
179
180 except Exception:
181 logger.exception("Exception caught during processing")
182 raise click.Abort()
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py
--- a/rasterio/rio/merge.py
+++ b/rasterio/rio/merge.py
@@ -87,7 +87,7 @@
output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))
output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))
- kwargs['driver'] == driver
+ kwargs['driver'] = driver
kwargs['transform'] = output_transform
kwargs['width'] = output_width
kwargs['height'] = output_height
| {"golden_diff": "diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py\n--- a/rasterio/rio/merge.py\n+++ b/rasterio/rio/merge.py\n@@ -87,7 +87,7 @@\n output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))\n output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))\n \n- kwargs['driver'] == driver\n+ kwargs['driver'] = driver\n kwargs['transform'] = output_transform\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n", "issue": "rio-merge ignores --driver\nDue to a [typo](https://github.com/mapbox/rasterio/blob/master/rasterio/rio/merge.py#L90):\n\n``` python\nkwargs['driver'] == driver\n```\n\nInstead of:\n\n``` python\nkwargs['driver'] = driver\n```\n\n", "before_files": [{"content": "# Merge command.\n\n\nimport logging\nimport math\nimport os.path\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_opt\[email protected]('-r', '--res', nargs=2, type=float, default=None,\n help=\"Output dataset resolution: pixel width, pixel height\")\[email protected]('--nodata', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]_context\ndef merge(ctx, files, output, driver, bounds, res, nodata):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output)\n\n with rasterio.open(files[0]) as first:\n first_res = first.res\n kwargs = first.meta\n kwargs.pop('affine')\n nodataval = first.nodatavals[0]\n dtype = first.dtypes[0]\n\n if os.path.exists(output):\n # TODO: prompt user to update existing file (-i option) like:\n # overwrite b.tif? 
(y/n [n]) n\n # not overwritten\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0]\n dtype = dst.dtypes[0]\n dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)\n else:\n # Create new output file.\n # Extent from option or extent of all inputs.\n if not bounds:\n # scan input files.\n xs = []\n ys = []\n for f in files:\n with rasterio.open(f) as src:\n left, bottom, right, top = src.bounds\n xs.extend([left, right])\n ys.extend([bottom, top])\n bounds = min(xs), min(ys), max(xs), max(ys)\n output_transform = Affine.translation(bounds[0], bounds[3])\n\n # Resolution/pixel size.\n if not res:\n res = first_res\n output_transform *= Affine.scale(res[0], -res[1])\n\n # Dataset shape.\n output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))\n output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))\n\n kwargs['driver'] == driver\n kwargs['transform'] = output_transform\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n\n logger.debug(\"Kwargs: %r\", kwargs)\n logger.debug(\"bounds: %r\", bounds)\n logger.debug(\"Res: %r\", res)\n\n dst = rasterio.open(output, 'w', **kwargs)\n dest = np.zeros((first.count, output_height, output_width),\n dtype=dtype)\n\n logger.debug(\"In merge, dest shape: %r\", dest.shape)\n\n if nodata is not None:\n nodataval = nodata\n\n if nodataval is not None:\n # Only fill if the nodataval is within dtype's range.\n inrange = False\n if np.dtype(dtype).kind in ('i', 'u'):\n info = np.iinfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n elif np.dtype(dtype).kind == 'f':\n info = np.finfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n if inrange:\n dest.fill(nodataval)\n else:\n warnings.warn(\n \"Input file's nodata value, %s, is beyond the valid \"\n \"range of its data type, %s. Consider overriding it \"\n \"using the --nodata option for better results.\" % (\n nodataval, dtype))\n else:\n nodataval = 0\n\n dst_w, dst_s, dst_e, dst_n = dst.bounds\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n # Real World (tm) use of boundless reads.\n # This approach uses the maximum amount of memory to solve\n # the problem. Making it more efficient is a TODO.\n\n # 1. Compute spatial intersection of destination\n # and source.\n src_w, src_s, src_e, src_n = src.bounds\n\n int_w = src_w if src_w > dst_w else dst_w\n int_s = src_s if src_s > dst_s else dst_s\n int_e = src_e if src_e < dst_e else dst_e\n int_n = src_n if src_n < dst_n else dst_n\n\n # 2. Compute the source window.\n src_window = src.window(int_w, int_s, int_e, int_n)\n\n # 3. Compute the destination window.\n dst_window = dst.window(int_w, int_s, int_e, int_n)\n\n # 4. Initialize temp array.\n temp = np.zeros(\n (first.count,) + tuple(b - a for a, b in dst_window),\n dtype=dtype)\n\n temp = src.read(\n out=temp,\n window=src_window,\n boundless=False,\n masked=True)\n\n # 5. 
Copy elements of temp into dest.\n roff, coff = dst.index(int_w, int_n)\n h, w = temp.shape[-2:]\n\n region = dest[:,roff:roff+h,coff:coff+w]\n np.copyto(region, temp,\n where=np.logical_and(\n region==nodataval, temp.mask==False))\n\n if dst.mode == 'r+':\n temp = dst.read(masked=True)\n np.copyto(dest, temp,\n where=np.logical_and(\n dest==nodataval, temp.mask==False))\n\n dst.write(dest)\n dst.close()\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/merge.py"}], "after_files": [{"content": "# Merge command.\n\n\nimport logging\nimport math\nimport os.path\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_opt\[email protected]('-r', '--res', nargs=2, type=float, default=None,\n help=\"Output dataset resolution: pixel width, pixel height\")\[email protected]('--nodata', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]_context\ndef merge(ctx, files, output, driver, bounds, res, nodata):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output, files = resolve_inout(files=files, output=output)\n\n with rasterio.open(files[0]) as first:\n first_res = first.res\n kwargs = first.meta\n kwargs.pop('affine')\n nodataval = first.nodatavals[0]\n dtype = first.dtypes[0]\n\n if os.path.exists(output):\n # TODO: prompt user to update existing file (-i option) like:\n # overwrite b.tif? 
(y/n [n]) n\n # not overwritten\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0]\n dtype = dst.dtypes[0]\n dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)\n else:\n # Create new output file.\n # Extent from option or extent of all inputs.\n if not bounds:\n # scan input files.\n xs = []\n ys = []\n for f in files:\n with rasterio.open(f) as src:\n left, bottom, right, top = src.bounds\n xs.extend([left, right])\n ys.extend([bottom, top])\n bounds = min(xs), min(ys), max(xs), max(ys)\n output_transform = Affine.translation(bounds[0], bounds[3])\n\n # Resolution/pixel size.\n if not res:\n res = first_res\n output_transform *= Affine.scale(res[0], -res[1])\n\n # Dataset shape.\n output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))\n output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))\n\n kwargs['driver'] = driver\n kwargs['transform'] = output_transform\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n\n logger.debug(\"Kwargs: %r\", kwargs)\n logger.debug(\"bounds: %r\", bounds)\n logger.debug(\"Res: %r\", res)\n\n dst = rasterio.open(output, 'w', **kwargs)\n dest = np.zeros((first.count, output_height, output_width),\n dtype=dtype)\n\n logger.debug(\"In merge, dest shape: %r\", dest.shape)\n\n if nodata is not None:\n nodataval = nodata\n\n if nodataval is not None:\n # Only fill if the nodataval is within dtype's range.\n inrange = False\n if np.dtype(dtype).kind in ('i', 'u'):\n info = np.iinfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n elif np.dtype(dtype).kind == 'f':\n info = np.finfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n if inrange:\n dest.fill(nodataval)\n else:\n warnings.warn(\n \"Input file's nodata value, %s, is beyond the valid \"\n \"range of its data type, %s. Consider overriding it \"\n \"using the --nodata option for better results.\" % (\n nodataval, dtype))\n else:\n nodataval = 0\n\n dst_w, dst_s, dst_e, dst_n = dst.bounds\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n # Real World (tm) use of boundless reads.\n # This approach uses the maximum amount of memory to solve\n # the problem. Making it more efficient is a TODO.\n\n # 1. Compute spatial intersection of destination\n # and source.\n src_w, src_s, src_e, src_n = src.bounds\n\n int_w = src_w if src_w > dst_w else dst_w\n int_s = src_s if src_s > dst_s else dst_s\n int_e = src_e if src_e < dst_e else dst_e\n int_n = src_n if src_n < dst_n else dst_n\n\n # 2. Compute the source window.\n src_window = src.window(int_w, int_s, int_e, int_n)\n\n # 3. Compute the destination window.\n dst_window = dst.window(int_w, int_s, int_e, int_n)\n\n # 4. Initialize temp array.\n temp = np.zeros(\n (first.count,) + tuple(b - a for a, b in dst_window),\n dtype=dtype)\n\n temp = src.read(\n out=temp,\n window=src_window,\n boundless=False,\n masked=True)\n\n # 5. Copy elements of temp into dest.\n roff, coff = dst.index(int_w, int_n)\n h, w = temp.shape[-2:]\n\n region = dest[:,roff:roff+h,coff:coff+w]\n np.copyto(region, temp,\n where=np.logical_and(\n region==nodataval, temp.mask==False))\n\n if dst.mode == 'r+':\n temp = dst.read(masked=True)\n np.copyto(dest, temp,\n where=np.logical_and(\n dest==nodataval, temp.mask==False))\n\n dst.write(dest)\n dst.close()\n\n except Exception:\n logger.exception(\"Exception caught during processing\")\n raise click.Abort()\n", "path": "rasterio/rio/merge.py"}]} | 2,252 | 142 |
gh_patches_debug_18789 | rasdani/github-patches | git_diff | numpy__numpy-14207 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong capitalization on bitgenerator MT19937
Wrong capitalization of MT19937 in default argument values of functions in numpy.random._pickle.py
### Reproducing code example:
```python
import numpy as np
np.random._pickle.__randomstate_ctor() # => error
np.random._pickle.__randomstate_ctor('mt19937') # => error
np.random._pickle.__randomstate_ctor('MT19937') # => works
```
### Error message:
*** ValueError: mt19937 is not a known BitGenerator module.
### Numpy/Python version information:
1.17.0 3.6.8 (default, Jan 14 2019, 11:02:34)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
--- END ISSUE ---
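A short illustration of the failing lookup (an assumed minimal reproduction, not the actual numpy source). The accepted fix simply changes the default strings to `'MT19937'`; the case-normalizing variant at the end is purely hypothetical and shown only to underline that the dict lookup is case-sensitive.

```python
# Sketch of why the lowercase default misses the registry key
BitGenerators = {"MT19937": object, "PCG64": object, "Philox": object, "SFC64": object}

name = "mt19937"              # the lowercase default used in _pickle.py
print(name in BitGenerators)  # False -- dict lookups are case-sensitive, hence the ValueError

# Hypothetical defensive alternative (NOT the fix applied upstream):
normalized = {key.upper(): value for key, value in BitGenerators.items()}
print(name.upper() in normalized)  # True
```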
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/random/_pickle.py`
Content:
```
1 from .mtrand import RandomState
2 from .philox import Philox
3 from .pcg64 import PCG64
4 from .sfc64 import SFC64
5
6 from .generator import Generator
7 from .mt19937 import MT19937
8
9 BitGenerators = {'MT19937': MT19937,
10 'PCG64': PCG64,
11 'Philox': Philox,
12 'SFC64': SFC64,
13 }
14
15
16 def __generator_ctor(bit_generator_name='mt19937'):
17 """
18 Pickling helper function that returns a Generator object
19
20 Parameters
21 ----------
22 bit_generator_name: str
23 String containing the core BitGenerator
24
25 Returns
26 -------
27 rg: Generator
28 Generator using the named core BitGenerator
29 """
30 if bit_generator_name in BitGenerators:
31 bit_generator = BitGenerators[bit_generator_name]
32 else:
33 raise ValueError(str(bit_generator_name) + ' is not a known '
34 'BitGenerator module.')
35
36 return Generator(bit_generator())
37
38
39 def __bit_generator_ctor(bit_generator_name='mt19937'):
40 """
41 Pickling helper function that returns a bit generator object
42
43 Parameters
44 ----------
45 bit_generator_name: str
46 String containing the name of the BitGenerator
47
48 Returns
49 -------
50 bit_generator: BitGenerator
51 BitGenerator instance
52 """
53 if bit_generator_name in BitGenerators:
54 bit_generator = BitGenerators[bit_generator_name]
55 else:
56 raise ValueError(str(bit_generator_name) + ' is not a known '
57 'BitGenerator module.')
58
59 return bit_generator()
60
61
62 def __randomstate_ctor(bit_generator_name='mt19937'):
63 """
64 Pickling helper function that returns a legacy RandomState-like object
65
66 Parameters
67 ----------
68 bit_generator_name: str
69 String containing the core BitGenerator
70
71 Returns
72 -------
73 rs: RandomState
74 Legacy RandomState using the named core BitGenerator
75 """
76 if bit_generator_name in BitGenerators:
77 bit_generator = BitGenerators[bit_generator_name]
78 else:
79 raise ValueError(str(bit_generator_name) + ' is not a known '
80 'BitGenerator module.')
81
82 return RandomState(bit_generator())
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpy/random/_pickle.py b/numpy/random/_pickle.py
--- a/numpy/random/_pickle.py
+++ b/numpy/random/_pickle.py
@@ -13,7 +13,7 @@
}
-def __generator_ctor(bit_generator_name='mt19937'):
+def __generator_ctor(bit_generator_name='MT19937'):
"""
Pickling helper function that returns a Generator object
@@ -36,7 +36,7 @@
return Generator(bit_generator())
-def __bit_generator_ctor(bit_generator_name='mt19937'):
+def __bit_generator_ctor(bit_generator_name='MT19937'):
"""
Pickling helper function that returns a bit generator object
@@ -59,7 +59,7 @@
return bit_generator()
-def __randomstate_ctor(bit_generator_name='mt19937'):
+def __randomstate_ctor(bit_generator_name='MT19937'):
"""
Pickling helper function that returns a legacy RandomState-like object
| {"golden_diff": "diff --git a/numpy/random/_pickle.py b/numpy/random/_pickle.py\n--- a/numpy/random/_pickle.py\n+++ b/numpy/random/_pickle.py\n@@ -13,7 +13,7 @@\n }\n \n \n-def __generator_ctor(bit_generator_name='mt19937'):\n+def __generator_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a Generator object\n \n@@ -36,7 +36,7 @@\n return Generator(bit_generator())\n \n \n-def __bit_generator_ctor(bit_generator_name='mt19937'):\n+def __bit_generator_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a bit generator object\n \n@@ -59,7 +59,7 @@\n return bit_generator()\n \n \n-def __randomstate_ctor(bit_generator_name='mt19937'):\n+def __randomstate_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a legacy RandomState-like object\n", "issue": "Wrong capitalization on bitgenerator MT19937\nWrong capitalization of MT19937 in default argument values of functions in numpy.random._pickle.py\r\n\r\n### Reproducing code example:\r\n\r\n```python\r\nimport numpy as np\r\nnp.random._pickle.__randomstate_ctor() # => error\r\nnp.random._pickle.__randomstate_ctor('mt19937') # => error\r\nnp.random._pickle.__randomstate_ctor('MT19937') # => works\r\n```\r\n\r\n### Error message:\r\n\r\n*** ValueError: mt19937 is not a known BitGenerator module.\r\n\r\n### Numpy/Python version information:\r\n\r\n1.17.0 3.6.8 (default, Jan 14 2019, 11:02:34) \r\n[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]\r\n\n", "before_files": [{"content": "from .mtrand import RandomState\nfrom .philox import Philox\nfrom .pcg64 import PCG64\nfrom .sfc64 import SFC64\n\nfrom .generator import Generator\nfrom .mt19937 import MT19937\n\nBitGenerators = {'MT19937': MT19937,\n 'PCG64': PCG64,\n 'Philox': Philox,\n 'SFC64': SFC64,\n }\n\n\ndef __generator_ctor(bit_generator_name='mt19937'):\n \"\"\"\n Pickling helper function that returns a Generator object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the core BitGenerator\n\n Returns\n -------\n rg: Generator\n Generator using the named core BitGenerator\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return Generator(bit_generator())\n\n\ndef __bit_generator_ctor(bit_generator_name='mt19937'):\n \"\"\"\n Pickling helper function that returns a bit generator object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the name of the BitGenerator\n\n Returns\n -------\n bit_generator: BitGenerator\n BitGenerator instance\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return bit_generator()\n\n\ndef __randomstate_ctor(bit_generator_name='mt19937'):\n \"\"\"\n Pickling helper function that returns a legacy RandomState-like object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the core BitGenerator\n\n Returns\n -------\n rs: RandomState\n Legacy RandomState using the named core BitGenerator\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return RandomState(bit_generator())\n", "path": "numpy/random/_pickle.py"}], "after_files": 
[{"content": "from .mtrand import RandomState\nfrom .philox import Philox\nfrom .pcg64 import PCG64\nfrom .sfc64 import SFC64\n\nfrom .generator import Generator\nfrom .mt19937 import MT19937\n\nBitGenerators = {'MT19937': MT19937,\n 'PCG64': PCG64,\n 'Philox': Philox,\n 'SFC64': SFC64,\n }\n\n\ndef __generator_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a Generator object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the core BitGenerator\n\n Returns\n -------\n rg: Generator\n Generator using the named core BitGenerator\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return Generator(bit_generator())\n\n\ndef __bit_generator_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a bit generator object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the name of the BitGenerator\n\n Returns\n -------\n bit_generator: BitGenerator\n BitGenerator instance\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return bit_generator()\n\n\ndef __randomstate_ctor(bit_generator_name='MT19937'):\n \"\"\"\n Pickling helper function that returns a legacy RandomState-like object\n\n Parameters\n ----------\n bit_generator_name: str\n String containing the core BitGenerator\n\n Returns\n -------\n rs: RandomState\n Legacy RandomState using the named core BitGenerator\n \"\"\"\n if bit_generator_name in BitGenerators:\n bit_generator = BitGenerators[bit_generator_name]\n else:\n raise ValueError(str(bit_generator_name) + ' is not a known '\n 'BitGenerator module.')\n\n return RandomState(bit_generator())\n", "path": "numpy/random/_pickle.py"}]} | 1,124 | 228 |
gh_patches_debug_10624 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add an error check for compute dimensions not supported by GaNDLF
### > I think the error message is caused by the yaml file. My input image size is 128 x 128.
>
**> this is the error message:**
`(venv_gandlf) C:\Users\pc\GaNDLF>python gandlf_run -c ./images_and_labels/model.yaml -i ./images_and_labels/train.csv -m ./images_and_labels/model -t True -d cpu
WARNING: Initializing 'norm_type' as 'batch'
WARNING: Initializing 'verbose' as False
WARNING: Initializing 'medcam_enabled' as False
WARNING: Initializing 'save_training' as False
WARNING: Initializing 'save_output' as False
WARNING: Initializing 'in_memory' as False
WARNING: Initializing 'pin_memory_dataloader' as False
WARNING: Initializing 'enable_padding' as False
WARNING: Initializing 'scaling_factor' as 1
WARNING: Initializing 'clip_grad' as None
WARNING: Initializing 'track_memory_usage' as False
WARNING: Initializing 'print_rgb_label_warning' as True
WARNING: Initializing 'data_postprocessing' as {}
WARNING: Initializing 'grid_aggregator_overlap' as crop
WARNING: Initializing 'clip_mode' as None
WARNING: Setting default step_size to: 0.02
Trying to resume training without changing any parameters from previous run.
Using previously saved parameter file ./images_and_labels/model\parameters.pkl
Using previously saved testing data ./images_and_labels/model\testing_0\data_testing.pkl
Using previously saved training+validation data ./images_and_labels/model\testing_0\data_trainAndVal.pkl
Number of channels : 1
Constructing queue for train data: 100%|████████████████████████| 643/643 [00:02<00:00, 216.35it/s]
C:\Users\pc\anaconda3\envs\venv_gandlf\lib\site-packages\torch\utils\data\dataloader.py:477: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
Calculating weights
Constructing queue for penalty data: 100%|██████████████████████| 643/643 [00:02<00:00, 241.46it/s]
Looping over training data for penalty calculation: 100%|███████| 643/643 [00:05<00:00, 121.90it/s]
Constructing queue for validation data: 100%|███████████████████| 161/161 [00:01<00:00, 124.04it/s]
All Keys : ['subject_id', '1', 'spacing', 'label', 'path_to_metadata']
Traceback (most recent call last):
File "gandlf_run", line 116, in <module>
args.reset,
File "C:\Users\pc\GaNDLF\GANDLF\cli\main_run.py", line 91, in main_run
reset=reset,
File "C:\Users\pc\GaNDLF\GANDLF\training_manager.py", line 254, in TrainingManager
testing_data=testingData,
File "C:\Users\pc\GaNDLF\GANDLF\compute\training_loop.py", line 239, in training_loop
) = create_pytorch_objects(params, training_data, validation_data, device)
File "C:\Users\pc\GaNDLF\GANDLF\compute\generic.py", line 69, in create_pytorch_objects
model = get_model(parameters)
File "C:\Users\pc\GaNDLF\GANDLF\models\__init__.py", line 106, in get_model
return global_models_dict[params["model"]["architecture"]](parameters=params)
File "C:\Users\pc\GaNDLF\GANDLF\models\unet.py", line 41, in __init__
conv=self.Conv,
File "C:\Users\pc\anaconda3\envs\venv_gandlf\lib\site-packages\torch\nn\modules\module.py", line 948, in __getattr__
type(self).__name__, name))
AttributeError: 'unet' object has no attribute 'Conv'`
**> And this is my yaml file**
[model.pdf](https://github.com/CBICA/GaNDLF/files/8631955/model.pdf)
_Originally posted by @HILALOZTEMEL in https://github.com/CBICA/GaNDLF/discussions/427_
--- END ISSUE ---
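The traceback bottoms out in `ModelBase.__init__`: `self.Conv` (and its siblings) are only assigned when `dimension` is 2 or 3, so any other value in the YAML leaves the attribute unset and the subclass only fails later with an opaque `AttributeError`. A simplified sketch of that failure mode (not the real GaNDLF classes; requires torch):

```python
# Simplified sketch of how the AttributeError arises for an unsupported dimension
import torch.nn as nn

class ModelBase(nn.Module):
    def __init__(self, n_dimensions):
        super().__init__()
        if n_dimensions == 2:
            self.Conv = nn.Conv2d
        elif n_dimensions == 3:
            self.Conv = nn.Conv3d
        # any other value falls through silently -- self.Conv is never set

class UNetLike(ModelBase):
    def __init__(self, n_dimensions):
        super().__init__(n_dimensions)
        conv = self.Conv  # nn.Module.__getattr__ raises AttributeError here

UNetLike(4)  # AttributeError: 'UNetLike' object has no attribute 'Conv'
```

The accepted fix (see the patch further down) raises an explicit `ValueError` for any dimension other than 2 or 3, so the misconfiguration is reported at model construction instead of surfacing as a missing attribute.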
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `GANDLF/models/modelBase.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """All Models in GANDLF are to be derived from this base class code."""
3
4 import torch
5 import torch.nn as nn
6 import torch.nn.functional as F
7
8 from GANDLF.utils import get_linear_interpolation_mode
9 from GANDLF.utils.modelbase import get_modelbase_final_layer
10 from GANDLF.models.seg_modules.average_pool import (
11 GlobalAveragePooling3D,
12 GlobalAveragePooling2D,
13 )
14
15
16 class ModelBase(nn.Module):
17 """
18 This is the base model class that all other architectures will need to derive from
19 """
20
21 def __init__(self, parameters):
22 """
23 This defines all defaults that the model base uses
24
25 Args:
26 parameters (dict): This is a dictionary of all parameters that are needed for the model.
27 """
28 super(ModelBase, self).__init__()
29 self.model_name = parameters["model"]["architecture"]
30 self.n_dimensions = parameters["model"]["dimension"]
31 self.n_channels = parameters["model"]["num_channels"]
32 if "num_classes" in parameters["model"]:
33 self.n_classes = parameters["model"]["num_classes"]
34 else:
35 self.n_classes = len(parameters["model"]["class_list"])
36 self.base_filters = parameters["model"]["base_filters"]
37 self.norm_type = parameters["model"]["norm_type"]
38 self.patch_size = parameters["patch_size"]
39 self.batch_size = parameters["batch_size"]
40 self.amp = parameters["model"]["amp"]
41 self.final_convolution_layer = self.get_final_layer(
42 parameters["model"]["final_layer"]
43 )
44
45 self.linear_interpolation_mode = get_linear_interpolation_mode(
46 self.n_dimensions
47 )
48
49 self.sigmoid_input_multiplier = parameters["model"].get(
50 "sigmoid_input_multiplier", 1.0
51 )
52
53 # based on dimensionality, the following need to defined:
54 # convolution, batch_norm, instancenorm, dropout
55 if self.n_dimensions == 2:
56 self.Conv = nn.Conv2d
57 self.ConvTranspose = nn.ConvTranspose2d
58 self.InstanceNorm = nn.InstanceNorm2d
59 self.Dropout = nn.Dropout2d
60 self.BatchNorm = nn.BatchNorm2d
61 self.MaxPool = nn.MaxPool2d
62 self.AvgPool = nn.AvgPool2d
63 self.AdaptiveAvgPool = nn.AdaptiveAvgPool2d
64 self.AdaptiveMaxPool = nn.AdaptiveMaxPool2d
65 self.GlobalAvgPool = GlobalAveragePooling2D
66 self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)
67
68 elif self.n_dimensions == 3:
69 self.Conv = nn.Conv3d
70 self.ConvTranspose = nn.ConvTranspose3d
71 self.InstanceNorm = nn.InstanceNorm3d
72 self.Dropout = nn.Dropout3d
73 self.BatchNorm = nn.BatchNorm3d
74 self.MaxPool = nn.MaxPool3d
75 self.AvgPool = nn.AvgPool3d
76 self.AdaptiveAvgPool = nn.AdaptiveAvgPool3d
77 self.AdaptiveMaxPool = nn.AdaptiveMaxPool3d
78 self.GlobalAvgPool = GlobalAveragePooling3D
79 self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)
80
81 def get_final_layer(self, final_convolution_layer):
82 return get_modelbase_final_layer(final_convolution_layer)
83
84 def get_norm_type(self, norm_type, dimensions):
85 """
86 This function gets the normalization type for the model.
87
88 Args:
89 norm_type (str): Normalization type as a string.
90 dimensions (str): The dimensionality of the model.
91
92 Returns:
93 _InstanceNorm or _BatchNorm: The normalization type for the model.
94 """
95 if dimensions == 3:
96 if norm_type == "batch":
97 norm_type = nn.BatchNorm3d
98 elif norm_type == "instance":
99 norm_type = nn.InstanceNorm3d
100 else:
101 norm_type = None
102 elif dimensions == 2:
103 if norm_type == "batch":
104 norm_type = nn.BatchNorm2d
105 elif norm_type == "instance":
106 norm_type = nn.InstanceNorm2d
107 else:
108 norm_type = None
109
110 return norm_type
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/GANDLF/models/modelBase.py b/GANDLF/models/modelBase.py
--- a/GANDLF/models/modelBase.py
+++ b/GANDLF/models/modelBase.py
@@ -78,6 +78,13 @@
self.GlobalAvgPool = GlobalAveragePooling3D
self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)
+ else:
+ raise ValueError(
+ "GaNDLF only supports 2D and 3D computations. {}D computations are not currently supported".format(
+ self.n_dimensions
+ )
+ )
+
def get_final_layer(self, final_convolution_layer):
return get_modelbase_final_layer(final_convolution_layer)
| {"golden_diff": "diff --git a/GANDLF/models/modelBase.py b/GANDLF/models/modelBase.py\n--- a/GANDLF/models/modelBase.py\n+++ b/GANDLF/models/modelBase.py\n@@ -78,6 +78,13 @@\n self.GlobalAvgPool = GlobalAveragePooling3D\n self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)\n \n+ else:\n+ raise ValueError(\n+ \"GaNDLF only supports 2D and 3D computations. {}D computations are not currently supported\".format(\n+ self.n_dimensions\n+ )\n+ )\n+\n def get_final_layer(self, final_convolution_layer):\n return get_modelbase_final_layer(final_convolution_layer)\n", "issue": "Add an error check for compute dimensions not supported by GaNDLF\n### > I think the error message is caused by the yaml file. My input image size is 128 x 128 . \r\n> \r\n **> this is error mesage :**\r\n\r\n`(venv_gandlf) C:\\Users\\pc\\GaNDLF>python gandlf_run -c ./images_and_labels/model.yaml -i ./images_and_labels/train.csv -m ./images_and_labels/model -t True -d cpu\r\nWARNING: Initializing 'norm_type' as 'batch'\r\nWARNING: Initializing 'verbose' as False\r\nWARNING: Initializing 'medcam_enabled' as False\r\nWARNING: Initializing 'save_training' as False\r\nWARNING: Initializing 'save_output' as False\r\nWARNING: Initializing 'in_memory' as False\r\nWARNING: Initializing 'pin_memory_dataloader' as False\r\nWARNING: Initializing 'enable_padding' as False\r\nWARNING: Initializing 'scaling_factor' as 1\r\nWARNING: Initializing 'clip_grad' as None\r\nWARNING: Initializing 'track_memory_usage' as False\r\nWARNING: Initializing 'print_rgb_label_warning' as True\r\nWARNING: Initializing 'data_postprocessing' as {}\r\nWARNING: Initializing 'grid_aggregator_overlap' as crop\r\nWARNING: Initializing 'clip_mode' as None\r\nWARNING: Setting default step_size to: 0.02\r\nTrying to resume training without changing any parameters from previous run.\r\nUsing previously saved parameter file ./images_and_labels/model\\parameters.pkl\r\nUsing previously saved testing data ./images_and_labels/model\\testing_0\\data_testing.pkl\r\nUsing previously saved training+validation data ./images_and_labels/model\\testing_0\\data_trainAndVal.pkl\r\nNumber of channels : 1\r\nConstructing queue for train data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 643/643 [00:02<00:00, 216.35it/s]\r\nC:\\Users\\pc\\anaconda3\\envs\\venv_gandlf\\lib\\site-packages\\torch\\utils\\data\\dataloader.py:477: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. 
Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\r\n cpuset_checked))\r\nCalculating weights\r\nConstructing queue for penalty data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 643/643 [00:02<00:00, 241.46it/s]\r\nLooping over training data for penalty calculation: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 643/643 [00:05<00:00, 121.90it/s]\r\nConstructing queue for validation data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 161/161 [00:01<00:00, 124.04it/s]\r\nAll Keys : ['subject_id', '1', 'spacing', 'label', 'path_to_metadata']\r\nTraceback (most recent call last):\r\n File \"gandlf_run\", line 116, in <module>\r\n args.reset,\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\cli\\main_run.py\", line 91, in main_run\r\n reset=reset,\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\training_manager.py\", line 254, in TrainingManager\r\n testing_data=testingData,\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\compute\\training_loop.py\", line 239, in training_loop\r\n ) = create_pytorch_objects(params, training_data, validation_data, device)\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\compute\\generic.py\", line 69, in create_pytorch_objects\r\n model = get_model(parameters)\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\models\\__init__.py\", line 106, in get_model\r\n return global_models_dict[params[\"model\"][\"architecture\"]](parameters=params)\r\n File \"C:\\Users\\pc\\GaNDLF\\GANDLF\\models\\unet.py\", line 41, in __init__\r\n conv=self.Conv,\r\n File \"C:\\Users\\pc\\anaconda3\\envs\\venv_gandlf\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 948, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'unet' object has no attribute 'Conv'`\r\n\r\n\r\n**> And this is my yaml file**\r\n\r\n\r\n[model.pdf](https://github.com/CBICA/GaNDLF/files/8631955/model.pdf)\r\n\r\n_Originally posted by @HILALOZTEMEL in https://github.com/CBICA/GaNDLF/discussions/427_\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"All Models in GANDLF are to be derived from this base class code.\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom GANDLF.utils import get_linear_interpolation_mode\nfrom GANDLF.utils.modelbase import get_modelbase_final_layer\nfrom GANDLF.models.seg_modules.average_pool import (\n GlobalAveragePooling3D,\n GlobalAveragePooling2D,\n)\n\n\nclass ModelBase(nn.Module):\n \"\"\"\n This is the base model class that all other architectures will need to derive from\n \"\"\"\n\n def __init__(self, parameters):\n \"\"\"\n This defines all defaults that the model base uses\n\n Args:\n parameters (dict): This is a dictionary of all parameters that are needed for the model.\n \"\"\"\n super(ModelBase, self).__init__()\n self.model_name = parameters[\"model\"][\"architecture\"]\n self.n_dimensions = parameters[\"model\"][\"dimension\"]\n self.n_channels = parameters[\"model\"][\"num_channels\"]\n if \"num_classes\" in parameters[\"model\"]:\n self.n_classes = parameters[\"model\"][\"num_classes\"]\n else:\n self.n_classes = len(parameters[\"model\"][\"class_list\"])\n self.base_filters = parameters[\"model\"][\"base_filters\"]\n self.norm_type = parameters[\"model\"][\"norm_type\"]\n self.patch_size = parameters[\"patch_size\"]\n self.batch_size = 
parameters[\"batch_size\"]\n self.amp = parameters[\"model\"][\"amp\"]\n self.final_convolution_layer = self.get_final_layer(\n parameters[\"model\"][\"final_layer\"]\n )\n\n self.linear_interpolation_mode = get_linear_interpolation_mode(\n self.n_dimensions\n )\n\n self.sigmoid_input_multiplier = parameters[\"model\"].get(\n \"sigmoid_input_multiplier\", 1.0\n )\n\n # based on dimensionality, the following need to defined:\n # convolution, batch_norm, instancenorm, dropout\n if self.n_dimensions == 2:\n self.Conv = nn.Conv2d\n self.ConvTranspose = nn.ConvTranspose2d\n self.InstanceNorm = nn.InstanceNorm2d\n self.Dropout = nn.Dropout2d\n self.BatchNorm = nn.BatchNorm2d\n self.MaxPool = nn.MaxPool2d\n self.AvgPool = nn.AvgPool2d\n self.AdaptiveAvgPool = nn.AdaptiveAvgPool2d\n self.AdaptiveMaxPool = nn.AdaptiveMaxPool2d\n self.GlobalAvgPool = GlobalAveragePooling2D\n self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)\n\n elif self.n_dimensions == 3:\n self.Conv = nn.Conv3d\n self.ConvTranspose = nn.ConvTranspose3d\n self.InstanceNorm = nn.InstanceNorm3d\n self.Dropout = nn.Dropout3d\n self.BatchNorm = nn.BatchNorm3d\n self.MaxPool = nn.MaxPool3d\n self.AvgPool = nn.AvgPool3d\n self.AdaptiveAvgPool = nn.AdaptiveAvgPool3d\n self.AdaptiveMaxPool = nn.AdaptiveMaxPool3d\n self.GlobalAvgPool = GlobalAveragePooling3D\n self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)\n\n def get_final_layer(self, final_convolution_layer):\n return get_modelbase_final_layer(final_convolution_layer)\n\n def get_norm_type(self, norm_type, dimensions):\n \"\"\"\n This function gets the normalization type for the model.\n\n Args:\n norm_type (str): Normalization type as a string.\n dimensions (str): The dimensionality of the model.\n\n Returns:\n _InstanceNorm or _BatchNorm: The normalization type for the model.\n \"\"\"\n if dimensions == 3:\n if norm_type == \"batch\":\n norm_type = nn.BatchNorm3d\n elif norm_type == \"instance\":\n norm_type = nn.InstanceNorm3d\n else:\n norm_type = None\n elif dimensions == 2:\n if norm_type == \"batch\":\n norm_type = nn.BatchNorm2d\n elif norm_type == \"instance\":\n norm_type = nn.InstanceNorm2d\n else:\n norm_type = None\n\n return norm_type\n", "path": "GANDLF/models/modelBase.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"All Models in GANDLF are to be derived from this base class code.\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom GANDLF.utils import get_linear_interpolation_mode\nfrom GANDLF.utils.modelbase import get_modelbase_final_layer\nfrom GANDLF.models.seg_modules.average_pool import (\n GlobalAveragePooling3D,\n GlobalAveragePooling2D,\n)\n\n\nclass ModelBase(nn.Module):\n \"\"\"\n This is the base model class that all other architectures will need to derive from\n \"\"\"\n\n def __init__(self, parameters):\n \"\"\"\n This defines all defaults that the model base uses\n\n Args:\n parameters (dict): This is a dictionary of all parameters that are needed for the model.\n \"\"\"\n super(ModelBase, self).__init__()\n self.model_name = parameters[\"model\"][\"architecture\"]\n self.n_dimensions = parameters[\"model\"][\"dimension\"]\n self.n_channels = parameters[\"model\"][\"num_channels\"]\n if \"num_classes\" in parameters[\"model\"]:\n self.n_classes = parameters[\"model\"][\"num_classes\"]\n else:\n self.n_classes = len(parameters[\"model\"][\"class_list\"])\n self.base_filters = parameters[\"model\"][\"base_filters\"]\n self.norm_type = 
parameters[\"model\"][\"norm_type\"]\n self.patch_size = parameters[\"patch_size\"]\n self.batch_size = parameters[\"batch_size\"]\n self.amp = parameters[\"model\"][\"amp\"]\n self.final_convolution_layer = self.get_final_layer(\n parameters[\"model\"][\"final_layer\"]\n )\n\n self.linear_interpolation_mode = get_linear_interpolation_mode(\n self.n_dimensions\n )\n\n self.sigmoid_input_multiplier = parameters[\"model\"].get(\n \"sigmoid_input_multiplier\", 1.0\n )\n\n # based on dimensionality, the following need to defined:\n # convolution, batch_norm, instancenorm, dropout\n if self.n_dimensions == 2:\n self.Conv = nn.Conv2d\n self.ConvTranspose = nn.ConvTranspose2d\n self.InstanceNorm = nn.InstanceNorm2d\n self.Dropout = nn.Dropout2d\n self.BatchNorm = nn.BatchNorm2d\n self.MaxPool = nn.MaxPool2d\n self.AvgPool = nn.AvgPool2d\n self.AdaptiveAvgPool = nn.AdaptiveAvgPool2d\n self.AdaptiveMaxPool = nn.AdaptiveMaxPool2d\n self.GlobalAvgPool = GlobalAveragePooling2D\n self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)\n\n elif self.n_dimensions == 3:\n self.Conv = nn.Conv3d\n self.ConvTranspose = nn.ConvTranspose3d\n self.InstanceNorm = nn.InstanceNorm3d\n self.Dropout = nn.Dropout3d\n self.BatchNorm = nn.BatchNorm3d\n self.MaxPool = nn.MaxPool3d\n self.AvgPool = nn.AvgPool3d\n self.AdaptiveAvgPool = nn.AdaptiveAvgPool3d\n self.AdaptiveMaxPool = nn.AdaptiveMaxPool3d\n self.GlobalAvgPool = GlobalAveragePooling3D\n self.Norm = self.get_norm_type(self.norm_type.lower(), self.n_dimensions)\n\n else:\n raise ValueError(\n \"GaNDLF only supports 2D and 3D computations. {}D computations are not currently supported\".format(\n self.n_dimensions\n )\n )\n\n def get_final_layer(self, final_convolution_layer):\n return get_modelbase_final_layer(final_convolution_layer)\n\n def get_norm_type(self, norm_type, dimensions):\n \"\"\"\n This function gets the normalization type for the model.\n\n Args:\n norm_type (str): Normalization type as a string.\n dimensions (str): The dimensionality of the model.\n\n Returns:\n _InstanceNorm or _BatchNorm: The normalization type for the model.\n \"\"\"\n if dimensions == 3:\n if norm_type == \"batch\":\n norm_type = nn.BatchNorm3d\n elif norm_type == \"instance\":\n norm_type = nn.InstanceNorm3d\n else:\n norm_type = None\n elif dimensions == 2:\n if norm_type == \"batch\":\n norm_type = nn.BatchNorm2d\n elif norm_type == \"instance\":\n norm_type = nn.InstanceNorm2d\n else:\n norm_type = None\n\n return norm_type\n", "path": "GANDLF/models/modelBase.py"}]} | 2,511 | 157 |
gh_patches_debug_23308 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-912 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ZeroDivisionError when running SVI with num_steps < 20
Reproducible code
```python
import jax
import numpyro
def model():
pass
def guide():
pass
svi = numpyro.infer.SVI(model, guide, numpyro.optim.Adam(1), numpyro.infer.Trace_ELBO())
svi.run(jax.random.PRNGKey(0), 10)
```
Proposed fix: setting `batch = max(num_steps // 20, 1)` in [SVI.run](https://github.com/pyro-ppl/numpyro/blob/aee945895dff9ffb6a12637a9bfbddb1d8c20363/numpyro/infer/svi.py#L179) to guarantee that `batch` is positive
--- END ISSUE ---
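A minimal sketch of the failure and of the proposed guard (illustrative only; the real loop lives in `SVI.run` in the file listing below):

```python
# Sketch of the failure mode and the proposed fix
num_steps = 10

batch = num_steps // 20          # 0 whenever num_steps < 20
try:
    _ = 1 % batch                # the progress-bar loop throttles updates with `i % batch`
except ZeroDivisionError as e:
    print("reproduces the report:", e)

batch = max(num_steps // 20, 1)  # proposed fix: batch is always >= 1
print(1 % batch)                 # 0 -- no exception
```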
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpyro/infer/svi.py`
Content:
```
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from functools import namedtuple, partial
5
6 import tqdm
7
8 from jax import jit, lax, random
9 import jax.numpy as jnp
10
11 from numpyro.distributions import constraints
12 from numpyro.distributions.transforms import biject_to
13 from numpyro.handlers import replay, seed, trace
14 from numpyro.infer.util import transform_fn
15
16 SVIState = namedtuple('SVIState', ['optim_state', 'rng_key'])
17 """
18 A :func:`~collections.namedtuple` consisting of the following fields:
19 - **optim_state** - current optimizer's state.
20 - **rng_key** - random number generator seed used for the iteration.
21 """
22
23
24 SVIRunResult = namedtuple('SVIRunResult', ['params', 'losses'])
25 """
26 A :func:`~collections.namedtuple` consisting of the following fields:
27 - **params** - the optimized parameters.
28 - **losses** - the losses collected at every step.
29 """
30
31
32 def _apply_loss_fn(loss_fn, rng_key, constrain_fn, model, guide,
33 args, kwargs, static_kwargs, params):
34 return loss_fn(rng_key, constrain_fn(params), model, guide, *args, **kwargs, **static_kwargs)
35
36
37 class SVI(object):
38 """
39 Stochastic Variational Inference given an ELBO loss objective.
40
41 **References**
42
43 1. *SVI Part I: An Introduction to Stochastic Variational Inference in Pyro*,
44 (http://pyro.ai/examples/svi_part_i.html)
45
46 **Example:**
47
48 .. doctest::
49
50 >>> from jax import random
51 >>> import jax.numpy as jnp
52 >>> import numpyro
53 >>> import numpyro.distributions as dist
54 >>> from numpyro.distributions import constraints
55 >>> from numpyro.infer import SVI, Trace_ELBO
56
57 >>> def model(data):
58 ... f = numpyro.sample("latent_fairness", dist.Beta(10, 10))
59 ... with numpyro.plate("N", data.shape[0]):
60 ... numpyro.sample("obs", dist.Bernoulli(f), obs=data)
61
62 >>> def guide(data):
63 ... alpha_q = numpyro.param("alpha_q", 15., constraint=constraints.positive)
64 ... beta_q = numpyro.param("beta_q", lambda rng_key: random.exponential(rng_key),
65 ... constraint=constraints.positive)
66 ... numpyro.sample("latent_fairness", dist.Beta(alpha_q, beta_q))
67
68 >>> data = jnp.concatenate([jnp.ones(6), jnp.zeros(4)])
69 >>> optimizer = numpyro.optim.Adam(step_size=0.0005)
70 >>> svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
71 >>> svi_result = svi.run(random.PRNGKey(0), 2000, data)
72 >>> params = svi_result.params
73 >>> inferred_mean = params["alpha_q"] / (params["alpha_q"] + params["beta_q"])
74
75 :param model: Python callable with Pyro primitives for the model.
76 :param guide: Python callable with Pyro primitives for the guide
77 (recognition network).
78 :param optim: an instance of :class:`~numpyro.optim._NumpyroOptim`.
79 :param loss: ELBO loss, i.e. negative Evidence Lower Bound, to minimize.
80 :param static_kwargs: static arguments for the model / guide, i.e. arguments
81 that remain constant during fitting.
82 :return: tuple of `(init_fn, update_fn, evaluate)`.
83 """
84 def __init__(self, model, guide, optim, loss, **static_kwargs):
85 self.model = model
86 self.guide = guide
87 self.loss = loss
88 self.optim = optim
89 self.static_kwargs = static_kwargs
90 self.constrain_fn = None
91
92 def init(self, rng_key, *args, **kwargs):
93 """
94 Gets the initial SVI state.
95
96 :param jax.random.PRNGKey rng_key: random number generator seed.
97 :param args: arguments to the model / guide (these can possibly vary during
98 the course of fitting).
99 :param kwargs: keyword arguments to the model / guide (these can possibly vary
100 during the course of fitting).
101 :return: the initial :data:`SVIState`
102 """
103 rng_key, model_seed, guide_seed = random.split(rng_key, 3)
104 model_init = seed(self.model, model_seed)
105 guide_init = seed(self.guide, guide_seed)
106 guide_trace = trace(guide_init).get_trace(*args, **kwargs, **self.static_kwargs)
107 model_trace = trace(replay(model_init, guide_trace)).get_trace(*args, **kwargs, **self.static_kwargs)
108 params = {}
109 inv_transforms = {}
110 # NB: params in model_trace will be overwritten by params in guide_trace
111 for site in list(model_trace.values()) + list(guide_trace.values()):
112 if site['type'] == 'param':
113 constraint = site['kwargs'].pop('constraint', constraints.real)
114 transform = biject_to(constraint)
115 inv_transforms[site['name']] = transform
116 params[site['name']] = transform.inv(site['value'])
117
118 self.constrain_fn = partial(transform_fn, inv_transforms)
119 return SVIState(self.optim.init(params), rng_key)
120
121 def get_params(self, svi_state):
122 """
123 Gets values at `param` sites of the `model` and `guide`.
124
125 :param svi_state: current state of SVI.
126 :return: the corresponding parameters
127 """
128 params = self.constrain_fn(self.optim.get_params(svi_state.optim_state))
129 return params
130
131 def update(self, svi_state, *args, **kwargs):
132 """
133 Take a single step of SVI (possibly on a batch / minibatch of data),
134 using the optimizer.
135
136 :param svi_state: current state of SVI.
137 :param args: arguments to the model / guide (these can possibly vary during
138 the course of fitting).
139 :param kwargs: keyword arguments to the model / guide (these can possibly vary
140 during the course of fitting).
141 :return: tuple of `(svi_state, loss)`.
142 """
143 rng_key, rng_key_step = random.split(svi_state.rng_key)
144 loss_fn = partial(_apply_loss_fn, self.loss.loss, rng_key_step, self.constrain_fn, self.model,
145 self.guide, args, kwargs, self.static_kwargs)
146 loss_val, optim_state = self.optim.eval_and_update(loss_fn, svi_state.optim_state)
147 return SVIState(optim_state, rng_key), loss_val
148
149 def run(self, rng_key, num_steps, *args, progress_bar=True, **kwargs):
150 """
151 (EXPERIMENTAL INTERFACE) Run SVI with `num_steps` iterations, then return
152 the optimized parameters and the stacked losses at every step. If `num_steps`
153 is large, setting `progress_bar=False` can make the run faster.
154
155 .. note:: For a complex training process (e.g. the one requires early stopping,
156 epoch training, varying args/kwargs,...), we recommend to use the more
157 flexible methods :meth:`init`, :meth:`update`, :meth:`evaluate` to
158 customize your training procedure.
159
160 :param jax.random.PRNGKey rng_key: random number generator seed.
161 :param int num_steps: the number of optimization steps.
162 :param args: arguments to the model / guide
163 :param bool progress_bar: Whether to enable progress bar updates. Defaults to
164 ``True``.
165 :param kwargs: keyword arguments to the model / guide
166 :return: a namedtuple with fields `params` and `losses` where `params`
167 holds the optimized values at :class:`numpyro.param` sites,
168 and `losses` is the collected loss during the process.
169 :rtype: SVIRunResult
170 """
171 def body_fn(svi_state, carry):
172 svi_state, loss = self.update(svi_state, *args, **kwargs)
173 return svi_state, loss
174
175 svi_state = self.init(rng_key, *args, **kwargs)
176 if progress_bar:
177 losses = []
178 with tqdm.trange(1, num_steps + 1) as t:
179 batch = num_steps // 20
180 for i in t:
181 svi_state, loss = jit(body_fn)(svi_state, None)
182 if i % batch == 0:
183 avg_loss = sum(losses[i-batch:]) / batch
184 t.set_postfix_str("init loss: {:.4f}, avg. loss [{}-{}]: {:.4f}"
185 .format(losses[0], i - batch + 1, i, avg_loss),
186 refresh=False)
187 losses.append(loss)
188 losses = jnp.stack(losses)
189 else:
190 svi_state, losses = lax.scan(body_fn, svi_state, None, length=num_steps)
191
192 return SVIRunResult(self.get_params(svi_state), losses)
193
194 def evaluate(self, svi_state, *args, **kwargs):
195 """
196 Take a single step of SVI (possibly on a batch / minibatch of data).
197
198 :param svi_state: current state of SVI.
199 :param args: arguments to the model / guide (these can possibly vary during
200 the course of fitting).
201 :param kwargs: keyword arguments to the model / guide.
202 :return: evaluate ELBO loss given the current parameter values
203 (held within `svi_state.optim_state`).
204 """
205 # we split to have the same seed as `update_fn` given an svi_state
206 _, rng_key_eval = random.split(svi_state.rng_key)
207 params = self.get_params(svi_state)
208 return self.loss.loss(rng_key_eval, params, self.model, self.guide,
209 *args, **kwargs, **self.static_kwargs)
210
```
--- END FILES ---
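A note for readers tracing the defect in `SVI.run` above: the progress-bar branch sets `batch = num_steps // 20` and then uses `batch` both in `i % batch` and as a divisor, so any `num_steps` below 20 makes `batch` zero and the first modulo raises `ZeroDivisionError`. The snippet below is only a distilled, self-contained illustration of that failure mode and of the usual clamp-to-one guard (the same guard the patch recorded further down in this row applies); it is not code from the project.

```python
def report_batch(num_steps: int) -> int:
    # Progress is summarised every `batch` steps; clamping to 1 keeps the
    # later `i % batch` check and the average-loss division well defined.
    return max(num_steps // 20, 1)


assert report_batch(2000) == 100  # unchanged behaviour for large runs
assert report_batch(10) == 1      # unguarded, 10 // 20 == 0 and `i % 0` raises ZeroDivisionError
```

The guard only changes how often the progress bar reports, not the optimisation itself, which is why it is a safe minimal fix for short runs.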
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpyro/infer/svi.py b/numpyro/infer/svi.py
--- a/numpyro/infer/svi.py
+++ b/numpyro/infer/svi.py
@@ -176,15 +176,15 @@
if progress_bar:
losses = []
with tqdm.trange(1, num_steps + 1) as t:
- batch = num_steps // 20
+ batch = max(num_steps // 20, 1)
for i in t:
svi_state, loss = jit(body_fn)(svi_state, None)
+ losses.append(loss)
if i % batch == 0:
avg_loss = sum(losses[i-batch:]) / batch
t.set_postfix_str("init loss: {:.4f}, avg. loss [{}-{}]: {:.4f}"
.format(losses[0], i - batch + 1, i, avg_loss),
refresh=False)
- losses.append(loss)
losses = jnp.stack(losses)
else:
svi_state, losses = lax.scan(body_fn, svi_state, None, length=num_steps)
| {"golden_diff": "diff --git a/numpyro/infer/svi.py b/numpyro/infer/svi.py\n--- a/numpyro/infer/svi.py\n+++ b/numpyro/infer/svi.py\n@@ -176,15 +176,15 @@\n if progress_bar:\n losses = []\n with tqdm.trange(1, num_steps + 1) as t:\n- batch = num_steps // 20\n+ batch = max(num_steps // 20, 1)\n for i in t:\n svi_state, loss = jit(body_fn)(svi_state, None)\n+ losses.append(loss)\n if i % batch == 0:\n avg_loss = sum(losses[i-batch:]) / batch\n t.set_postfix_str(\"init loss: {:.4f}, avg. loss [{}-{}]: {:.4f}\"\n .format(losses[0], i - batch + 1, i, avg_loss),\n refresh=False)\n- losses.append(loss)\n losses = jnp.stack(losses)\n else:\n svi_state, losses = lax.scan(body_fn, svi_state, None, length=num_steps)\n", "issue": "ZeroDivisionError when running SVI with num_steps < 20\nReproducible code\r\n```python\r\nimport jax\r\nimport numpyro\r\n\r\ndef model():\r\n pass\r\n\r\ndef guide():\r\n pass\r\n\r\nsvi = numpyro.infer.SVI(model, guide, numpyro.optim.Adam(1), numpyro.infer.Trace_ELBO())\r\nsvi.run(jax.random.PRNGKey(0), 10)\r\n```\r\n\r\nProposed fix: setting `batch = max(num_steps // 20, 1)` in [SVI.run](https://github.com/pyro-ppl/numpyro/blob/aee945895dff9ffb6a12637a9bfbddb1d8c20363/numpyro/infer/svi.py#L179) to guarantee that `batch` is positive\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom functools import namedtuple, partial\n\nimport tqdm\n\nfrom jax import jit, lax, random\nimport jax.numpy as jnp\n\nfrom numpyro.distributions import constraints\nfrom numpyro.distributions.transforms import biject_to\nfrom numpyro.handlers import replay, seed, trace\nfrom numpyro.infer.util import transform_fn\n\nSVIState = namedtuple('SVIState', ['optim_state', 'rng_key'])\n\"\"\"\nA :func:`~collections.namedtuple` consisting of the following fields:\n - **optim_state** - current optimizer's state.\n - **rng_key** - random number generator seed used for the iteration.\n\"\"\"\n\n\nSVIRunResult = namedtuple('SVIRunResult', ['params', 'losses'])\n\"\"\"\nA :func:`~collections.namedtuple` consisting of the following fields:\n - **params** - the optimized parameters.\n - **losses** - the losses collected at every step.\n\"\"\"\n\n\ndef _apply_loss_fn(loss_fn, rng_key, constrain_fn, model, guide,\n args, kwargs, static_kwargs, params):\n return loss_fn(rng_key, constrain_fn(params), model, guide, *args, **kwargs, **static_kwargs)\n\n\nclass SVI(object):\n \"\"\"\n Stochastic Variational Inference given an ELBO loss objective.\n\n **References**\n\n 1. *SVI Part I: An Introduction to Stochastic Variational Inference in Pyro*,\n (http://pyro.ai/examples/svi_part_i.html)\n\n **Example:**\n\n .. doctest::\n\n >>> from jax import random\n >>> import jax.numpy as jnp\n >>> import numpyro\n >>> import numpyro.distributions as dist\n >>> from numpyro.distributions import constraints\n >>> from numpyro.infer import SVI, Trace_ELBO\n\n >>> def model(data):\n ... f = numpyro.sample(\"latent_fairness\", dist.Beta(10, 10))\n ... with numpyro.plate(\"N\", data.shape[0]):\n ... numpyro.sample(\"obs\", dist.Bernoulli(f), obs=data)\n\n >>> def guide(data):\n ... alpha_q = numpyro.param(\"alpha_q\", 15., constraint=constraints.positive)\n ... beta_q = numpyro.param(\"beta_q\", lambda rng_key: random.exponential(rng_key),\n ... constraint=constraints.positive)\n ... 
numpyro.sample(\"latent_fairness\", dist.Beta(alpha_q, beta_q))\n\n >>> data = jnp.concatenate([jnp.ones(6), jnp.zeros(4)])\n >>> optimizer = numpyro.optim.Adam(step_size=0.0005)\n >>> svi = SVI(model, guide, optimizer, loss=Trace_ELBO())\n >>> svi_result = svi.run(random.PRNGKey(0), 2000, data)\n >>> params = svi_result.params\n >>> inferred_mean = params[\"alpha_q\"] / (params[\"alpha_q\"] + params[\"beta_q\"])\n\n :param model: Python callable with Pyro primitives for the model.\n :param guide: Python callable with Pyro primitives for the guide\n (recognition network).\n :param optim: an instance of :class:`~numpyro.optim._NumpyroOptim`.\n :param loss: ELBO loss, i.e. negative Evidence Lower Bound, to minimize.\n :param static_kwargs: static arguments for the model / guide, i.e. arguments\n that remain constant during fitting.\n :return: tuple of `(init_fn, update_fn, evaluate)`.\n \"\"\"\n def __init__(self, model, guide, optim, loss, **static_kwargs):\n self.model = model\n self.guide = guide\n self.loss = loss\n self.optim = optim\n self.static_kwargs = static_kwargs\n self.constrain_fn = None\n\n def init(self, rng_key, *args, **kwargs):\n \"\"\"\n Gets the initial SVI state.\n\n :param jax.random.PRNGKey rng_key: random number generator seed.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide (these can possibly vary\n during the course of fitting).\n :return: the initial :data:`SVIState`\n \"\"\"\n rng_key, model_seed, guide_seed = random.split(rng_key, 3)\n model_init = seed(self.model, model_seed)\n guide_init = seed(self.guide, guide_seed)\n guide_trace = trace(guide_init).get_trace(*args, **kwargs, **self.static_kwargs)\n model_trace = trace(replay(model_init, guide_trace)).get_trace(*args, **kwargs, **self.static_kwargs)\n params = {}\n inv_transforms = {}\n # NB: params in model_trace will be overwritten by params in guide_trace\n for site in list(model_trace.values()) + list(guide_trace.values()):\n if site['type'] == 'param':\n constraint = site['kwargs'].pop('constraint', constraints.real)\n transform = biject_to(constraint)\n inv_transforms[site['name']] = transform\n params[site['name']] = transform.inv(site['value'])\n\n self.constrain_fn = partial(transform_fn, inv_transforms)\n return SVIState(self.optim.init(params), rng_key)\n\n def get_params(self, svi_state):\n \"\"\"\n Gets values at `param` sites of the `model` and `guide`.\n\n :param svi_state: current state of SVI.\n :return: the corresponding parameters\n \"\"\"\n params = self.constrain_fn(self.optim.get_params(svi_state.optim_state))\n return params\n\n def update(self, svi_state, *args, **kwargs):\n \"\"\"\n Take a single step of SVI (possibly on a batch / minibatch of data),\n using the optimizer.\n\n :param svi_state: current state of SVI.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide (these can possibly vary\n during the course of fitting).\n :return: tuple of `(svi_state, loss)`.\n \"\"\"\n rng_key, rng_key_step = random.split(svi_state.rng_key)\n loss_fn = partial(_apply_loss_fn, self.loss.loss, rng_key_step, self.constrain_fn, self.model,\n self.guide, args, kwargs, self.static_kwargs)\n loss_val, optim_state = self.optim.eval_and_update(loss_fn, svi_state.optim_state)\n return SVIState(optim_state, rng_key), loss_val\n\n def run(self, rng_key, num_steps, *args, progress_bar=True, 
**kwargs):\n \"\"\"\n (EXPERIMENTAL INTERFACE) Run SVI with `num_steps` iterations, then return\n the optimized parameters and the stacked losses at every step. If `num_steps`\n is large, setting `progress_bar=False` can make the run faster.\n\n .. note:: For a complex training process (e.g. the one requires early stopping,\n epoch training, varying args/kwargs,...), we recommend to use the more\n flexible methods :meth:`init`, :meth:`update`, :meth:`evaluate` to\n customize your training procedure.\n\n :param jax.random.PRNGKey rng_key: random number generator seed.\n :param int num_steps: the number of optimization steps.\n :param args: arguments to the model / guide\n :param bool progress_bar: Whether to enable progress bar updates. Defaults to\n ``True``.\n :param kwargs: keyword arguments to the model / guide\n :return: a namedtuple with fields `params` and `losses` where `params`\n holds the optimized values at :class:`numpyro.param` sites,\n and `losses` is the collected loss during the process.\n :rtype: SVIRunResult\n \"\"\"\n def body_fn(svi_state, carry):\n svi_state, loss = self.update(svi_state, *args, **kwargs)\n return svi_state, loss\n\n svi_state = self.init(rng_key, *args, **kwargs)\n if progress_bar:\n losses = []\n with tqdm.trange(1, num_steps + 1) as t:\n batch = num_steps // 20\n for i in t:\n svi_state, loss = jit(body_fn)(svi_state, None)\n if i % batch == 0:\n avg_loss = sum(losses[i-batch:]) / batch\n t.set_postfix_str(\"init loss: {:.4f}, avg. loss [{}-{}]: {:.4f}\"\n .format(losses[0], i - batch + 1, i, avg_loss),\n refresh=False)\n losses.append(loss)\n losses = jnp.stack(losses)\n else:\n svi_state, losses = lax.scan(body_fn, svi_state, None, length=num_steps)\n\n return SVIRunResult(self.get_params(svi_state), losses)\n\n def evaluate(self, svi_state, *args, **kwargs):\n \"\"\"\n Take a single step of SVI (possibly on a batch / minibatch of data).\n\n :param svi_state: current state of SVI.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide.\n :return: evaluate ELBO loss given the current parameter values\n (held within `svi_state.optim_state`).\n \"\"\"\n # we split to have the same seed as `update_fn` given an svi_state\n _, rng_key_eval = random.split(svi_state.rng_key)\n params = self.get_params(svi_state)\n return self.loss.loss(rng_key_eval, params, self.model, self.guide,\n *args, **kwargs, **self.static_kwargs)\n", "path": "numpyro/infer/svi.py"}], "after_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom functools import namedtuple, partial\n\nimport tqdm\n\nfrom jax import jit, lax, random\nimport jax.numpy as jnp\n\nfrom numpyro.distributions import constraints\nfrom numpyro.distributions.transforms import biject_to\nfrom numpyro.handlers import replay, seed, trace\nfrom numpyro.infer.util import transform_fn\n\nSVIState = namedtuple('SVIState', ['optim_state', 'rng_key'])\n\"\"\"\nA :func:`~collections.namedtuple` consisting of the following fields:\n - **optim_state** - current optimizer's state.\n - **rng_key** - random number generator seed used for the iteration.\n\"\"\"\n\n\nSVIRunResult = namedtuple('SVIRunResult', ['params', 'losses'])\n\"\"\"\nA :func:`~collections.namedtuple` consisting of the following fields:\n - **params** - the optimized parameters.\n - **losses** - the losses collected at every step.\n\"\"\"\n\n\ndef _apply_loss_fn(loss_fn, rng_key, constrain_fn, 
model, guide,\n args, kwargs, static_kwargs, params):\n return loss_fn(rng_key, constrain_fn(params), model, guide, *args, **kwargs, **static_kwargs)\n\n\nclass SVI(object):\n \"\"\"\n Stochastic Variational Inference given an ELBO loss objective.\n\n **References**\n\n 1. *SVI Part I: An Introduction to Stochastic Variational Inference in Pyro*,\n (http://pyro.ai/examples/svi_part_i.html)\n\n **Example:**\n\n .. doctest::\n\n >>> from jax import random\n >>> import jax.numpy as jnp\n >>> import numpyro\n >>> import numpyro.distributions as dist\n >>> from numpyro.distributions import constraints\n >>> from numpyro.infer import SVI, Trace_ELBO\n\n >>> def model(data):\n ... f = numpyro.sample(\"latent_fairness\", dist.Beta(10, 10))\n ... with numpyro.plate(\"N\", data.shape[0]):\n ... numpyro.sample(\"obs\", dist.Bernoulli(f), obs=data)\n\n >>> def guide(data):\n ... alpha_q = numpyro.param(\"alpha_q\", 15., constraint=constraints.positive)\n ... beta_q = numpyro.param(\"beta_q\", lambda rng_key: random.exponential(rng_key),\n ... constraint=constraints.positive)\n ... numpyro.sample(\"latent_fairness\", dist.Beta(alpha_q, beta_q))\n\n >>> data = jnp.concatenate([jnp.ones(6), jnp.zeros(4)])\n >>> optimizer = numpyro.optim.Adam(step_size=0.0005)\n >>> svi = SVI(model, guide, optimizer, loss=Trace_ELBO())\n >>> svi_result = svi.run(random.PRNGKey(0), 2000, data)\n >>> params = svi_result.params\n >>> inferred_mean = params[\"alpha_q\"] / (params[\"alpha_q\"] + params[\"beta_q\"])\n\n :param model: Python callable with Pyro primitives for the model.\n :param guide: Python callable with Pyro primitives for the guide\n (recognition network).\n :param optim: an instance of :class:`~numpyro.optim._NumpyroOptim`.\n :param loss: ELBO loss, i.e. negative Evidence Lower Bound, to minimize.\n :param static_kwargs: static arguments for the model / guide, i.e. 
arguments\n that remain constant during fitting.\n :return: tuple of `(init_fn, update_fn, evaluate)`.\n \"\"\"\n def __init__(self, model, guide, optim, loss, **static_kwargs):\n self.model = model\n self.guide = guide\n self.loss = loss\n self.optim = optim\n self.static_kwargs = static_kwargs\n self.constrain_fn = None\n\n def init(self, rng_key, *args, **kwargs):\n \"\"\"\n Gets the initial SVI state.\n\n :param jax.random.PRNGKey rng_key: random number generator seed.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide (these can possibly vary\n during the course of fitting).\n :return: the initial :data:`SVIState`\n \"\"\"\n rng_key, model_seed, guide_seed = random.split(rng_key, 3)\n model_init = seed(self.model, model_seed)\n guide_init = seed(self.guide, guide_seed)\n guide_trace = trace(guide_init).get_trace(*args, **kwargs, **self.static_kwargs)\n model_trace = trace(replay(model_init, guide_trace)).get_trace(*args, **kwargs, **self.static_kwargs)\n params = {}\n inv_transforms = {}\n # NB: params in model_trace will be overwritten by params in guide_trace\n for site in list(model_trace.values()) + list(guide_trace.values()):\n if site['type'] == 'param':\n constraint = site['kwargs'].pop('constraint', constraints.real)\n transform = biject_to(constraint)\n inv_transforms[site['name']] = transform\n params[site['name']] = transform.inv(site['value'])\n\n self.constrain_fn = partial(transform_fn, inv_transforms)\n return SVIState(self.optim.init(params), rng_key)\n\n def get_params(self, svi_state):\n \"\"\"\n Gets values at `param` sites of the `model` and `guide`.\n\n :param svi_state: current state of SVI.\n :return: the corresponding parameters\n \"\"\"\n params = self.constrain_fn(self.optim.get_params(svi_state.optim_state))\n return params\n\n def update(self, svi_state, *args, **kwargs):\n \"\"\"\n Take a single step of SVI (possibly on a batch / minibatch of data),\n using the optimizer.\n\n :param svi_state: current state of SVI.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide (these can possibly vary\n during the course of fitting).\n :return: tuple of `(svi_state, loss)`.\n \"\"\"\n rng_key, rng_key_step = random.split(svi_state.rng_key)\n loss_fn = partial(_apply_loss_fn, self.loss.loss, rng_key_step, self.constrain_fn, self.model,\n self.guide, args, kwargs, self.static_kwargs)\n loss_val, optim_state = self.optim.eval_and_update(loss_fn, svi_state.optim_state)\n return SVIState(optim_state, rng_key), loss_val\n\n def run(self, rng_key, num_steps, *args, progress_bar=True, **kwargs):\n \"\"\"\n (EXPERIMENTAL INTERFACE) Run SVI with `num_steps` iterations, then return\n the optimized parameters and the stacked losses at every step. If `num_steps`\n is large, setting `progress_bar=False` can make the run faster.\n\n .. note:: For a complex training process (e.g. the one requires early stopping,\n epoch training, varying args/kwargs,...), we recommend to use the more\n flexible methods :meth:`init`, :meth:`update`, :meth:`evaluate` to\n customize your training procedure.\n\n :param jax.random.PRNGKey rng_key: random number generator seed.\n :param int num_steps: the number of optimization steps.\n :param args: arguments to the model / guide\n :param bool progress_bar: Whether to enable progress bar updates. 
Defaults to\n ``True``.\n :param kwargs: keyword arguments to the model / guide\n :return: a namedtuple with fields `params` and `losses` where `params`\n holds the optimized values at :class:`numpyro.param` sites,\n and `losses` is the collected loss during the process.\n :rtype: SVIRunResult\n \"\"\"\n def body_fn(svi_state, carry):\n svi_state, loss = self.update(svi_state, *args, **kwargs)\n return svi_state, loss\n\n svi_state = self.init(rng_key, *args, **kwargs)\n if progress_bar:\n losses = []\n with tqdm.trange(1, num_steps + 1) as t:\n batch = max(num_steps // 20, 1)\n for i in t:\n svi_state, loss = jit(body_fn)(svi_state, None)\n losses.append(loss)\n if i % batch == 0:\n avg_loss = sum(losses[i-batch:]) / batch\n t.set_postfix_str(\"init loss: {:.4f}, avg. loss [{}-{}]: {:.4f}\"\n .format(losses[0], i - batch + 1, i, avg_loss),\n refresh=False)\n losses = jnp.stack(losses)\n else:\n svi_state, losses = lax.scan(body_fn, svi_state, None, length=num_steps)\n\n return SVIRunResult(self.get_params(svi_state), losses)\n\n def evaluate(self, svi_state, *args, **kwargs):\n \"\"\"\n Take a single step of SVI (possibly on a batch / minibatch of data).\n\n :param svi_state: current state of SVI.\n :param args: arguments to the model / guide (these can possibly vary during\n the course of fitting).\n :param kwargs: keyword arguments to the model / guide.\n :return: evaluate ELBO loss given the current parameter values\n (held within `svi_state.optim_state`).\n \"\"\"\n # we split to have the same seed as `update_fn` given an svi_state\n _, rng_key_eval = random.split(svi_state.rng_key)\n params = self.get_params(svi_state)\n return self.loss.loss(rng_key_eval, params, self.model, self.guide,\n *args, **kwargs, **self.static_kwargs)\n", "path": "numpyro/infer/svi.py"}]} | 3,145 | 254 |
gh_patches_debug_25352 | rasdani/github-patches | git_diff | OpenCTI-Platform__connectors-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[VXVault] Connector imports garbage data (HTML tags and blank events)
## Description
The VXVault connector is importing garbage data as indicators and observables, including events like `<pre>`, `</pre>` and blank entities.
## Environment
1. OS (where OpenCTI server runs): Docker on Ubuntu 20
2. OpenCTI version: 5.5.2
3. OpenCTI client: connectors/python
## Expected Output
Connector should download the URLs from the VXVault threat feed here: http://vxvault.net/URL_List.php and import them into OpenCTI as Observables and Indicators.
## Actual Output
The connector does import the URLs, but it fails to parse the page correctly and imports some of the HTML tags as well by accident, leading to garbage data like Indicators with names and values of just `<pre>`.
## Additional information
I discussed this and a couple of other related issues with the VXVault connector + data with Samuel Hassine in the slack channel: https://filigran-community.slack.com/archives/CHZC2D38C/p1673599524232109
There are three related issues - I will raise a GH issue for two of them and then submit a proposed PR to fix both:
1 - Connector imports garbage data (HTML tags)
2 - Connector leaves trailing white-space at the end of Indicator/Observable name and value (Raised as #974)
3 - VXVault entities do not show up in bulk search, which is caused by issue 2, as bulk-search only does an exact match currently and can't match on the trailing `\n` characters. The developers are aware of this, so I will not raise an issue for it.
--- END ISSUE ---
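To make the parsing failure concrete: `URL_List.php` appears to serve the list inside an HTML `<pre>` block, so iterating over the response line by line yields markup lines and blank lines alongside the URLs, and every line keeps its trailing newline. The snippet below is a self-contained illustration of that raw shape; the feed body and URL are made up for the example and this is not connector code.

```python
import io

# Hypothetical miniature of the raw feed body the connector iterates over.
fake_feed = io.StringIO("<pre>\nhttp://malicious.example/payload.exe\n\n</pre>\n")

for raw_line in fake_feed:
    # Each iteration yields the line with its trailing newline intact, so
    # '<pre>\n', the URL plus '\n', a bare '\n' and '</pre>\n' all come through.
    print(repr(raw_line), "->", repr(raw_line.strip()))
```

Stripping alone fixes the trailing-whitespace problem (issue 2 above); the markup and blank lines additionally need to be skipped, which is issue 1.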
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `external-import/vxvault/src/vxvault.py`
Content:
```
1 import os
2 import ssl
3 import sys
4 import time
5 import urllib.request
6 from datetime import datetime
7
8 import certifi
9 import yaml
10 from pycti import OpenCTIConnectorHelper, get_config_variable
11 from stix2 import TLP_WHITE, URL, Bundle, ExternalReference
12
13
14 class VXVault:
15 def __init__(self):
16 # Instantiate the connector helper from config
17 config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
18 config = (
19 yaml.load(open(config_file_path), Loader=yaml.FullLoader)
20 if os.path.isfile(config_file_path)
21 else {}
22 )
23 self.helper = OpenCTIConnectorHelper(config)
24 # Extra config
25 self.vxvault_url = get_config_variable(
26 "VXVAULT_URL", ["vxvault", "url"], config
27 )
28 self.vxvault_interval = get_config_variable(
29 "VXVAULT_INTERVAL", ["vxvault", "interval"], config, True
30 )
31 self.create_indicators = get_config_variable(
32 "VXVAULT_CREATE_INDICATORS",
33 ["vxvault", "create_indicators"],
34 config,
35 False,
36 True,
37 )
38 self.update_existing_data = get_config_variable(
39 "CONNECTOR_UPDATE_EXISTING_DATA",
40 ["connector", "update_existing_data"],
41 config,
42 )
43 self.identity = self.helper.api.identity.create(
44 type="Organization",
45 name="VX Vault",
46 description="VX Vault is providing URLs of potential malicious payload.",
47 )
48
49 def get_interval(self):
50 return int(self.vxvault_interval) * 60 * 60 * 24
51
52 def next_run(self, seconds):
53 return
54
55 def run(self):
56 self.helper.log_info("Fetching VXVault dataset...")
57 while True:
58 try:
59 # Get the current timestamp and check
60 timestamp = int(time.time())
61 current_state = self.helper.get_state()
62 if current_state is not None and "last_run" in current_state:
63 last_run = current_state["last_run"]
64 self.helper.log_info(
65 "Connector last run: "
66 + datetime.utcfromtimestamp(last_run).strftime(
67 "%Y-%m-%d %H:%M:%S"
68 )
69 )
70 else:
71 last_run = None
72 self.helper.log_info("Connector has never run")
73 # If the last_run is more than interval-1 day
74 if last_run is None or (
75 (timestamp - last_run)
76 > ((int(self.vxvault_interval) - 1) * 60 * 60 * 24)
77 ):
78 self.helper.log_info("Connector will run!")
79 now = datetime.utcfromtimestamp(timestamp)
80 friendly_name = "VXVault run @ " + now.strftime("%Y-%m-%d %H:%M:%S")
81 work_id = self.helper.api.work.initiate_work(
82 self.helper.connect_id, friendly_name
83 )
84 try:
85 response = urllib.request.urlopen(
86 self.vxvault_url,
87 context=ssl.create_default_context(cafile=certifi.where()),
88 )
89 image = response.read()
90 with open(
91 os.path.dirname(os.path.abspath(__file__)) + "/data.txt",
92 "wb",
93 ) as file:
94 file.write(image)
95 count = 0
96 bundle_objects = []
97 with open(
98 os.path.dirname(os.path.abspath(__file__)) + "/data.txt"
99 ) as fp:
100 for line in fp:
101 count += 1
102 if count <= 3:
103 continue
104 external_reference = ExternalReference(
105 source_name="VX Vault",
106 url="http://vxvault.net",
107 description="VX Vault repository URL",
108 )
109 stix_observable = URL(
110 value=line,
111 object_marking_refs=[TLP_WHITE],
112 custom_properties={
113 "description": "VX Vault URL",
114 "x_opencti_score": 80,
115 "created_by_ref": self.identity["standard_id"],
116 "x_opencti_create_indicator": self.create_indicators,
117 "external_references": [external_reference],
118 },
119 )
120 bundle_objects.append(stix_observable)
121 bundle = Bundle(
122 objects=bundle_objects, allow_custom=True
123 ).serialize()
124 self.helper.send_stix2_bundle(
125 bundle,
126 update=self.update_existing_data,
127 work_id=work_id,
128 )
129 if os.path.exists(
130 os.path.dirname(os.path.abspath(__file__)) + "/data.txt"
131 ):
132 os.remove(
133 os.path.dirname(os.path.abspath(__file__)) + "/data.txt"
134 )
135 except Exception as e:
136 self.helper.log_error(str(e))
137 # Store the current timestamp as a last run
138 message = "Connector successfully run, storing last_run as " + str(
139 timestamp
140 )
141 self.helper.log_info(message)
142 self.helper.set_state({"last_run": timestamp})
143 self.helper.api.work.to_processed(work_id, message)
144 self.helper.log_info(
145 "Last_run stored, next run in: "
146 + str(round(self.get_interval() / 60 / 60 / 24, 2))
147 + " days"
148 )
149 else:
150 new_interval = self.get_interval() - (timestamp - last_run)
151 self.helper.log_info(
152 "Connector will not run, next run in: "
153 + str(round(new_interval / 60 / 60 / 24, 2))
154 + " days"
155 )
156 except (KeyboardInterrupt, SystemExit):
157 self.helper.log_info("Connector stop")
158 sys.exit(0)
159 except Exception as e:
160 self.helper.log_error(str(e))
161
162 if self.helper.connect_run_and_terminate:
163 self.helper.log_info("Connector stop")
164 sys.exit(0)
165
166 time.sleep(60)
167
168
169 if __name__ == "__main__":
170 try:
171 VXVaultConnector = VXVault()
172 VXVaultConnector.run()
173 except Exception as e:
174 print(e)
175 time.sleep(10)
176 sys.exit(0)
177
```
--- END FILES ---
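For orientation only: the read loop in `vxvault.py` above passes each raw line straight into `URL(value=line, ...)`, which is how bare tags, blank lines and trailing newlines end up inside observables. A minimal sketch of a cleaned-up iterator is shown below; the regular expression and skip rules are illustrative assumptions that keep the connector's existing `count <= 3` header skip, and this is not the project's actual patch.

```python
import re


def iter_feed_urls(fp):
    """Yield plausible URL entries from the feed, skipping the leading header
    lines, bare HTML tags and blank lines (illustrative sketch only)."""
    for count, raw in enumerate(fp, start=1):
        if count <= 3:                      # keep the connector's existing header skip
            continue
        line = raw.strip()                  # drop the trailing newline/whitespace
        if not line:                        # blank line
            continue
        if re.fullmatch(r"</?\w+>", line):  # bare tag such as <pre> or </pre>
            continue
        yield line
```

Feeding the stripped value into the `URL` observable, rather than the raw line, would also make the entries findable by exact-match bulk search, per point 3 of the issue.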
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/external-import/vxvault/src/vxvault.py b/external-import/vxvault/src/vxvault.py
--- a/external-import/vxvault/src/vxvault.py
+++ b/external-import/vxvault/src/vxvault.py
@@ -1,4 +1,5 @@
import os
+import re
import ssl
import sys
import time
@@ -7,6 +8,7 @@
import certifi
import yaml
+
from pycti import OpenCTIConnectorHelper, get_config_variable
from stix2 import TLP_WHITE, URL, Bundle, ExternalReference
@@ -101,6 +103,13 @@
count += 1
if count <= 3:
continue
+ line=line.strip()
+ matchHtmlTag = re.search(r'^<\/?\w+>', line)
+ if matchHtmlTag:
+ continue
+ matchBlankLine = re.search(r'^\s*$', line)
+ if matchBlankLine:
+ continue
external_reference = ExternalReference(
source_name="VX Vault",
url="http://vxvault.net",
| {"golden_diff": "diff --git a/external-import/vxvault/src/vxvault.py b/external-import/vxvault/src/vxvault.py\n--- a/external-import/vxvault/src/vxvault.py\n+++ b/external-import/vxvault/src/vxvault.py\n@@ -1,4 +1,5 @@\n import os\n+import re\n import ssl\n import sys\n import time\n@@ -7,6 +8,7 @@\n \n import certifi\n import yaml\n+\n from pycti import OpenCTIConnectorHelper, get_config_variable\n from stix2 import TLP_WHITE, URL, Bundle, ExternalReference\n \n@@ -101,6 +103,13 @@\n count += 1\n if count <= 3:\n continue\n+ line=line.strip()\n+ matchHtmlTag = re.search(r'^<\\/?\\w+>', line)\n+ if matchHtmlTag:\n+ continue\n+ matchBlankLine = re.search(r'^\\s*$', line)\n+ if matchBlankLine:\n+ continue\n external_reference = ExternalReference(\n source_name=\"VX Vault\",\n url=\"http://vxvault.net\",\n", "issue": "[VXVault] Connector imports garbage data (HTML tags and blank events)\n## Description\r\n\r\nThe VXVault connector is importing garbage data as indicators and observables, including events like `<pre>`, </pre>` and blank entities. \r\n\r\n## Environment\r\n\r\n1. OS (where OpenCTI server runs): Docker on Ubuntu 20 \r\n2. OpenCTI version: 5.5.2\r\n3. OpenCTI client: connectors/python\r\n\r\n## Expected Output\r\nConnector should download the URLs from the VXVault threat feed here: http://vxvault.net/URL_List.php and import them into OpenCTI as Observables and Indicators.\r\n\r\n## Actual Output\r\nThe connector does import the URLs, but it fails to parse the page correctly and imports some of the HTML tags as well by accident, leading to garbage data like Indicators with names and values as just `<pre>`\r\n\r\n## Additional information\r\nI discussed this and a couple of other related issues with the VXVault connector + data with Samuel Hassine in the slack channel: https://filigran-community.slack.com/archives/CHZC2D38C/p1673599524232109\r\n\r\nThere are three related issues - I will raise a GH issue for two of them and then submit a proposed PR to fix both:\r\n1 - Connector imports garbage data (HTML tags)\r\n2 - Connector leaves trailing white-space at the end of Indicator/Observable name and value (Raised as #974)\r\n3 - VXVault entities do not show up in bulk search, which is caused by issue 2, as bulk-search only does an exact match currently and can't match on the trailing `\\n` characters. 
The developers are aware of this, so I will not raise an issue for it.\n", "before_files": [{"content": "import os\nimport ssl\nimport sys\nimport time\nimport urllib.request\nfrom datetime import datetime\n\nimport certifi\nimport yaml\nfrom pycti import OpenCTIConnectorHelper, get_config_variable\nfrom stix2 import TLP_WHITE, URL, Bundle, ExternalReference\n\n\nclass VXVault:\n def __init__(self):\n # Instantiate the connector helper from config\n config_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\n config = (\n yaml.load(open(config_file_path), Loader=yaml.FullLoader)\n if os.path.isfile(config_file_path)\n else {}\n )\n self.helper = OpenCTIConnectorHelper(config)\n # Extra config\n self.vxvault_url = get_config_variable(\n \"VXVAULT_URL\", [\"vxvault\", \"url\"], config\n )\n self.vxvault_interval = get_config_variable(\n \"VXVAULT_INTERVAL\", [\"vxvault\", \"interval\"], config, True\n )\n self.create_indicators = get_config_variable(\n \"VXVAULT_CREATE_INDICATORS\",\n [\"vxvault\", \"create_indicators\"],\n config,\n False,\n True,\n )\n self.update_existing_data = get_config_variable(\n \"CONNECTOR_UPDATE_EXISTING_DATA\",\n [\"connector\", \"update_existing_data\"],\n config,\n )\n self.identity = self.helper.api.identity.create(\n type=\"Organization\",\n name=\"VX Vault\",\n description=\"VX Vault is providing URLs of potential malicious payload.\",\n )\n\n def get_interval(self):\n return int(self.vxvault_interval) * 60 * 60 * 24\n\n def next_run(self, seconds):\n return\n\n def run(self):\n self.helper.log_info(\"Fetching VXVault dataset...\")\n while True:\n try:\n # Get the current timestamp and check\n timestamp = int(time.time())\n current_state = self.helper.get_state()\n if current_state is not None and \"last_run\" in current_state:\n last_run = current_state[\"last_run\"]\n self.helper.log_info(\n \"Connector last run: \"\n + datetime.utcfromtimestamp(last_run).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n )\n else:\n last_run = None\n self.helper.log_info(\"Connector has never run\")\n # If the last_run is more than interval-1 day\n if last_run is None or (\n (timestamp - last_run)\n > ((int(self.vxvault_interval) - 1) * 60 * 60 * 24)\n ):\n self.helper.log_info(\"Connector will run!\")\n now = datetime.utcfromtimestamp(timestamp)\n friendly_name = \"VXVault run @ \" + now.strftime(\"%Y-%m-%d %H:%M:%S\")\n work_id = self.helper.api.work.initiate_work(\n self.helper.connect_id, friendly_name\n )\n try:\n response = urllib.request.urlopen(\n self.vxvault_url,\n context=ssl.create_default_context(cafile=certifi.where()),\n )\n image = response.read()\n with open(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\",\n \"wb\",\n ) as file:\n file.write(image)\n count = 0\n bundle_objects = []\n with open(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n ) as fp:\n for line in fp:\n count += 1\n if count <= 3:\n continue\n external_reference = ExternalReference(\n source_name=\"VX Vault\",\n url=\"http://vxvault.net\",\n description=\"VX Vault repository URL\",\n )\n stix_observable = URL(\n value=line,\n object_marking_refs=[TLP_WHITE],\n custom_properties={\n \"description\": \"VX Vault URL\",\n \"x_opencti_score\": 80,\n \"created_by_ref\": self.identity[\"standard_id\"],\n \"x_opencti_create_indicator\": self.create_indicators,\n \"external_references\": [external_reference],\n },\n )\n bundle_objects.append(stix_observable)\n bundle = Bundle(\n objects=bundle_objects, allow_custom=True\n ).serialize()\n 
self.helper.send_stix2_bundle(\n bundle,\n update=self.update_existing_data,\n work_id=work_id,\n )\n if os.path.exists(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n ):\n os.remove(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n )\n except Exception as e:\n self.helper.log_error(str(e))\n # Store the current timestamp as a last run\n message = \"Connector successfully run, storing last_run as \" + str(\n timestamp\n )\n self.helper.log_info(message)\n self.helper.set_state({\"last_run\": timestamp})\n self.helper.api.work.to_processed(work_id, message)\n self.helper.log_info(\n \"Last_run stored, next run in: \"\n + str(round(self.get_interval() / 60 / 60 / 24, 2))\n + \" days\"\n )\n else:\n new_interval = self.get_interval() - (timestamp - last_run)\n self.helper.log_info(\n \"Connector will not run, next run in: \"\n + str(round(new_interval / 60 / 60 / 24, 2))\n + \" days\"\n )\n except (KeyboardInterrupt, SystemExit):\n self.helper.log_info(\"Connector stop\")\n sys.exit(0)\n except Exception as e:\n self.helper.log_error(str(e))\n\n if self.helper.connect_run_and_terminate:\n self.helper.log_info(\"Connector stop\")\n sys.exit(0)\n\n time.sleep(60)\n\n\nif __name__ == \"__main__\":\n try:\n VXVaultConnector = VXVault()\n VXVaultConnector.run()\n except Exception as e:\n print(e)\n time.sleep(10)\n sys.exit(0)\n", "path": "external-import/vxvault/src/vxvault.py"}], "after_files": [{"content": "import os\nimport re\nimport ssl\nimport sys\nimport time\nimport urllib.request\nfrom datetime import datetime\n\nimport certifi\nimport yaml\n\nfrom pycti import OpenCTIConnectorHelper, get_config_variable\nfrom stix2 import TLP_WHITE, URL, Bundle, ExternalReference\n\n\nclass VXVault:\n def __init__(self):\n # Instantiate the connector helper from config\n config_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\n config = (\n yaml.load(open(config_file_path), Loader=yaml.FullLoader)\n if os.path.isfile(config_file_path)\n else {}\n )\n self.helper = OpenCTIConnectorHelper(config)\n # Extra config\n self.vxvault_url = get_config_variable(\n \"VXVAULT_URL\", [\"vxvault\", \"url\"], config\n )\n self.vxvault_interval = get_config_variable(\n \"VXVAULT_INTERVAL\", [\"vxvault\", \"interval\"], config, True\n )\n self.create_indicators = get_config_variable(\n \"VXVAULT_CREATE_INDICATORS\",\n [\"vxvault\", \"create_indicators\"],\n config,\n False,\n True,\n )\n self.update_existing_data = get_config_variable(\n \"CONNECTOR_UPDATE_EXISTING_DATA\",\n [\"connector\", \"update_existing_data\"],\n config,\n )\n self.identity = self.helper.api.identity.create(\n type=\"Organization\",\n name=\"VX Vault\",\n description=\"VX Vault is providing URLs of potential malicious payload.\",\n )\n\n def get_interval(self):\n return int(self.vxvault_interval) * 60 * 60 * 24\n\n def next_run(self, seconds):\n return\n\n def run(self):\n self.helper.log_info(\"Fetching VXVault dataset...\")\n while True:\n try:\n # Get the current timestamp and check\n timestamp = int(time.time())\n current_state = self.helper.get_state()\n if current_state is not None and \"last_run\" in current_state:\n last_run = current_state[\"last_run\"]\n self.helper.log_info(\n \"Connector last run: \"\n + datetime.utcfromtimestamp(last_run).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n )\n else:\n last_run = None\n self.helper.log_info(\"Connector has never run\")\n # If the last_run is more than interval-1 day\n if last_run is None or (\n (timestamp - last_run)\n > 
((int(self.vxvault_interval) - 1) * 60 * 60 * 24)\n ):\n self.helper.log_info(\"Connector will run!\")\n now = datetime.utcfromtimestamp(timestamp)\n friendly_name = \"VXVault run @ \" + now.strftime(\"%Y-%m-%d %H:%M:%S\")\n work_id = self.helper.api.work.initiate_work(\n self.helper.connect_id, friendly_name\n )\n try:\n response = urllib.request.urlopen(\n self.vxvault_url,\n context=ssl.create_default_context(cafile=certifi.where()),\n )\n image = response.read()\n with open(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\",\n \"wb\",\n ) as file:\n file.write(image)\n count = 0\n bundle_objects = []\n with open(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n ) as fp:\n for line in fp:\n count += 1\n if count <= 3:\n continue\n line=line.strip()\n matchHtmlTag = re.search(r'^<\\/?\\w+>', line)\n if matchHtmlTag:\n continue\n matchBlankLine = re.search(r'^\\s*$', line)\n if matchBlankLine:\n continue\n external_reference = ExternalReference(\n source_name=\"VX Vault\",\n url=\"http://vxvault.net\",\n description=\"VX Vault repository URL\",\n )\n stix_observable = URL(\n value=line,\n object_marking_refs=[TLP_WHITE],\n custom_properties={\n \"description\": \"VX Vault URL\",\n \"x_opencti_score\": 80,\n \"created_by_ref\": self.identity[\"standard_id\"],\n \"x_opencti_create_indicator\": self.create_indicators,\n \"external_references\": [external_reference],\n },\n )\n bundle_objects.append(stix_observable)\n bundle = Bundle(\n objects=bundle_objects, allow_custom=True\n ).serialize()\n self.helper.send_stix2_bundle(\n bundle,\n update=self.update_existing_data,\n work_id=work_id,\n )\n if os.path.exists(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n ):\n os.remove(\n os.path.dirname(os.path.abspath(__file__)) + \"/data.txt\"\n )\n except Exception as e:\n self.helper.log_error(str(e))\n # Store the current timestamp as a last run\n message = \"Connector successfully run, storing last_run as \" + str(\n timestamp\n )\n self.helper.log_info(message)\n self.helper.set_state({\"last_run\": timestamp})\n self.helper.api.work.to_processed(work_id, message)\n self.helper.log_info(\n \"Last_run stored, next run in: \"\n + str(round(self.get_interval() / 60 / 60 / 24, 2))\n + \" days\"\n )\n else:\n new_interval = self.get_interval() - (timestamp - last_run)\n self.helper.log_info(\n \"Connector will not run, next run in: \"\n + str(round(new_interval / 60 / 60 / 24, 2))\n + \" days\"\n )\n except (KeyboardInterrupt, SystemExit):\n self.helper.log_info(\"Connector stop\")\n sys.exit(0)\n except Exception as e:\n self.helper.log_error(str(e))\n\n if self.helper.connect_run_and_terminate:\n self.helper.log_info(\"Connector stop\")\n sys.exit(0)\n\n time.sleep(60)\n\n\nif __name__ == \"__main__\":\n try:\n VXVaultConnector = VXVault()\n VXVaultConnector.run()\n except Exception as e:\n print(e)\n time.sleep(10)\n sys.exit(0)\n", "path": "external-import/vxvault/src/vxvault.py"}]} | 2,340 | 248 |
gh_patches_debug_31177 | rasdani/github-patches | git_diff | pypi__warehouse-9341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reset password doesn't work if user doesn't have verified/primary email address.
**Describe the bug**
Password reset is not working. I get the email, follow the link, enter the password twice, submit the form, and get back a page with no error messages. But the page title is "Error processing form -- Log in".
**Expected behavior**
I expected a positive confirmation that my password was reset, and to be able to login with the new password. Neither one happens.
**To Reproduce**
1. Start at `https://pypi.org/account/login/`
2. Click "Forgot password?" link
3. Enter the email address associated with my PyPI account (gerg.ward at gmail dot com)
4. Check my inbox and the email is already there.
5. Click the link: `https://pypi.org/account/reset-password/?token=.eJw1...`
6. Generate a new password outside my browser.
7. Copy/paste the new password into both password fields; text changes to "Passwords match" and the "Reset password" button is enabled.
8. Click "Reset password"
9. Receive the same "Reset password" form again, but with no positive confirmation and no error messages. Only the page title gives a clue: "Error processing form – Reset your password · PyPI". When I try to login with the new password (again, copy/pasted), it does not work.
I'll attach HTML and screenshot.
**My Platform**
Firefox 87.0 on Ubuntu 20.04.2.
Same behaviour with Chromium.
No HTTP proxy.
**Additional context**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/admin/views/users.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import shlex
14
15 import wtforms
16 import wtforms.fields.html5
17
18 from paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage
19 from pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther
20 from pyramid.view import view_config
21 from sqlalchemy import or_
22 from sqlalchemy.orm import joinedload
23 from sqlalchemy.orm.exc import NoResultFound
24
25 from warehouse import forms
26 from warehouse.accounts.interfaces import IUserService
27 from warehouse.accounts.models import DisableReason, Email, User
28 from warehouse.email import send_password_compromised_email
29 from warehouse.packaging.models import JournalEntry, Project, Role
30 from warehouse.utils.paginate import paginate_url_factory
31
32
33 @view_config(
34 route_name="admin.user.list",
35 renderer="admin/users/list.html",
36 permission="moderator",
37 uses_session=True,
38 )
39 def user_list(request):
40 q = request.params.get("q")
41
42 try:
43 page_num = int(request.params.get("page", 1))
44 except ValueError:
45 raise HTTPBadRequest("'page' must be an integer.") from None
46
47 users_query = request.db.query(User).order_by(User.username)
48
49 if q:
50 terms = shlex.split(q)
51
52 filters = []
53 for term in terms:
54 if ":" in term:
55 field, value = term.split(":", 1)
56 if field.lower() == "email":
57 filters.append(User.emails.any(Email.email.ilike(value)))
58 else:
59 filters.append(User.username.ilike(term))
60
61 users_query = users_query.filter(or_(*filters))
62
63 users = SQLAlchemyORMPage(
64 users_query,
65 page=page_num,
66 items_per_page=25,
67 url_maker=paginate_url_factory(request),
68 )
69
70 return {"users": users, "query": q}
71
72
73 class EmailForm(forms.Form):
74
75 email = wtforms.fields.html5.EmailField(
76 validators=[wtforms.validators.DataRequired()]
77 )
78 primary = wtforms.fields.BooleanField()
79 verified = wtforms.fields.BooleanField()
80 public = wtforms.fields.BooleanField()
81
82
83 class UserForm(forms.Form):
84
85 name = wtforms.StringField(
86 validators=[wtforms.validators.Optional(), wtforms.validators.Length(max=100)]
87 )
88
89 is_active = wtforms.fields.BooleanField()
90 is_superuser = wtforms.fields.BooleanField()
91 is_moderator = wtforms.fields.BooleanField()
92 is_psf_staff = wtforms.fields.BooleanField()
93
94 prohibit_password_reset = wtforms.fields.BooleanField()
95
96 emails = wtforms.fields.FieldList(wtforms.fields.FormField(EmailForm))
97
98
99 @view_config(
100 route_name="admin.user.detail",
101 renderer="admin/users/detail.html",
102 permission="moderator",
103 request_method="GET",
104 uses_session=True,
105 require_csrf=True,
106 require_methods=False,
107 )
108 @view_config(
109 route_name="admin.user.detail",
110 renderer="admin/users/detail.html",
111 permission="admin",
112 request_method="POST",
113 uses_session=True,
114 require_csrf=True,
115 require_methods=False,
116 )
117 def user_detail(request):
118 try:
119 user = (
120 request.db.query(User).filter(User.id == request.matchdict["user_id"]).one()
121 )
122 except NoResultFound:
123 raise HTTPNotFound
124
125 roles = (
126 request.db.query(Role)
127 .join(User)
128 .filter(Role.user == user)
129 .order_by(Role.role_name, Role.project_id)
130 .all()
131 )
132
133 form = UserForm(request.POST if request.method == "POST" else None, user)
134
135 if request.method == "POST" and form.validate():
136 form.populate_obj(user)
137 return HTTPSeeOther(location=request.current_route_path())
138
139 return {"user": user, "form": form, "roles": roles, "add_email_form": EmailForm()}
140
141
142 @view_config(
143 route_name="admin.user.add_email",
144 require_methods=["POST"],
145 permission="admin",
146 uses_session=True,
147 require_csrf=True,
148 )
149 def user_add_email(request):
150 user = request.db.query(User).get(request.matchdict["user_id"])
151 form = EmailForm(request.POST)
152
153 if form.validate():
154 email = Email(
155 email=form.email.data,
156 user=user,
157 primary=form.primary.data,
158 verified=form.verified.data,
159 public=form.public.data,
160 )
161 request.db.add(email)
162 request.session.flash(
163 f"Added email for user {user.username!r}", queue="success"
164 )
165
166 return HTTPSeeOther(request.route_path("admin.user.detail", user_id=user.id))
167
168
169 @view_config(
170 route_name="admin.user.delete",
171 require_methods=["POST"],
172 permission="admin",
173 uses_session=True,
174 require_csrf=True,
175 )
176 def user_delete(request):
177 user = request.db.query(User).get(request.matchdict["user_id"])
178
179 if user.username != request.params.get("username"):
180 request.session.flash("Wrong confirmation input", queue="error")
181 return HTTPSeeOther(request.route_path("admin.user.detail", user_id=user.id))
182
183 # Delete all the user's projects
184 projects = request.db.query(Project).filter(
185 Project.name.in_(
186 request.db.query(Project.name)
187 .join(Role.project)
188 .filter(Role.user == user)
189 .subquery()
190 )
191 )
192 for project in projects:
193 request.db.add(
194 JournalEntry(
195 name=project.name,
196 action="remove project",
197 submitted_by=request.user,
198 submitted_from=request.remote_addr,
199 )
200 )
201 projects.delete(synchronize_session=False)
202
203 # Update all journals to point to `deleted-user` instead
204 deleted_user = request.db.query(User).filter(User.username == "deleted-user").one()
205
206 journals = (
207 request.db.query(JournalEntry)
208 .options(joinedload("submitted_by"))
209 .filter(JournalEntry.submitted_by == user)
210 .all()
211 )
212
213 for journal in journals:
214 journal.submitted_by = deleted_user
215
216 # Delete the user
217 request.db.delete(user)
218 request.db.add(
219 JournalEntry(
220 name=f"user:{user.username}",
221 action="nuke user",
222 submitted_by=request.user,
223 submitted_from=request.remote_addr,
224 )
225 )
226 request.session.flash(f"Nuked user {user.username!r}", queue="success")
227 return HTTPSeeOther(request.route_path("admin.user.list"))
228
229
230 @view_config(
231 route_name="admin.user.reset_password",
232 require_methods=["POST"],
233 permission="admin",
234 has_translations=True,
235 uses_session=True,
236 require_csrf=True,
237 )
238 def user_reset_password(request):
239 user = request.db.query(User).get(request.matchdict["user_id"])
240
241 if user.username != request.params.get("username"):
242 request.session.flash("Wrong confirmation input", queue="error")
243 return HTTPSeeOther(request.route_path("admin.user.detail", user_id=user.id))
244
245 login_service = request.find_service(IUserService, context=None)
246 send_password_compromised_email(request, user)
247 login_service.disable_password(user.id, reason=DisableReason.CompromisedPassword)
248
249 request.session.flash(f"Reset password for {user.username!r}", queue="success")
250 return HTTPSeeOther(request.route_path("admin.user.detail", user_id=user.id))
251
```
--- END FILES ---
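One observation about the admin form above: `UserForm.emails` is a `FieldList` of `EmailForm` with no cross-field check, so an edit can leave an account with no primary email (or several), and the issue title ties exactly that state to the broken password reset. The sketch below shows, in plain WTForms outside the warehouse codebase, one way to enforce an "exactly one primary email whenever any email exists" rule at the form level; it illustrates the idea only and should not be read as the project's fix.

```python
import wtforms
import wtforms.validators


class EmailForm(wtforms.Form):
    email = wtforms.StringField(validators=[wtforms.validators.DataRequired()])
    primary = wtforms.BooleanField()
    verified = wtforms.BooleanField()


class UserForm(wtforms.Form):
    emails = wtforms.FieldList(wtforms.FormField(EmailForm))

    def validate_emails(self, field):
        # An account with no email rows is tolerated; otherwise exactly one
        # row must be flagged primary, so the form fails loudly instead of
        # silently saving a state that later breaks password reset.
        if field.data and len([e for e in field.data if e["primary"]]) != 1:
            raise wtforms.validators.ValidationError(
                "There must be exactly one primary email"
            )
```

WTForms picks up `validate_emails` automatically as an inline validator for the `emails` field, so no extra wiring is needed in the view.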
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/admin/views/users.py b/warehouse/admin/views/users.py
--- a/warehouse/admin/views/users.py
+++ b/warehouse/admin/views/users.py
@@ -14,6 +14,7 @@
import wtforms
import wtforms.fields.html5
+import wtforms.validators
from paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage
from pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther
@@ -95,6 +96,14 @@
emails = wtforms.fields.FieldList(wtforms.fields.FormField(EmailForm))
+ def validate_emails(self, field):
+ # If there's no email on the account, it's ok. Otherwise, ensure
+ # we have 1 primary email.
+ if field.data and len([1 for email in field.data if email["primary"]]) != 1:
+ raise wtforms.validators.ValidationError(
+ "There must be exactly one primary email"
+ )
+
@view_config(
route_name="admin.user.detail",
@@ -134,6 +143,7 @@
if request.method == "POST" and form.validate():
form.populate_obj(user)
+ request.session.flash(f"User {user.username!r} updated", queue="success")
return HTTPSeeOther(location=request.current_route_path())
return {"user": user, "form": form, "roles": roles, "add_email_form": EmailForm()}
@@ -151,6 +161,11 @@
form = EmailForm(request.POST)
if form.validate():
+
+ if form.primary.data:
+ for other in user.emails:
+ other.primary = False
+
email = Email(
email=form.email.data,
user=user,
| {"golden_diff": "diff --git a/warehouse/admin/views/users.py b/warehouse/admin/views/users.py\n--- a/warehouse/admin/views/users.py\n+++ b/warehouse/admin/views/users.py\n@@ -14,6 +14,7 @@\n \n import wtforms\n import wtforms.fields.html5\n+import wtforms.validators\n \n from paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\n from pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther\n@@ -95,6 +96,14 @@\n \n emails = wtforms.fields.FieldList(wtforms.fields.FormField(EmailForm))\n \n+ def validate_emails(self, field):\n+ # If there's no email on the account, it's ok. Otherwise, ensure\n+ # we have 1 primary email.\n+ if field.data and len([1 for email in field.data if email[\"primary\"]]) != 1:\n+ raise wtforms.validators.ValidationError(\n+ \"There must be exactly one primary email\"\n+ )\n+\n \n @view_config(\n route_name=\"admin.user.detail\",\n@@ -134,6 +143,7 @@\n \n if request.method == \"POST\" and form.validate():\n form.populate_obj(user)\n+ request.session.flash(f\"User {user.username!r} updated\", queue=\"success\")\n return HTTPSeeOther(location=request.current_route_path())\n \n return {\"user\": user, \"form\": form, \"roles\": roles, \"add_email_form\": EmailForm()}\n@@ -151,6 +161,11 @@\n form = EmailForm(request.POST)\n \n if form.validate():\n+\n+ if form.primary.data:\n+ for other in user.emails:\n+ other.primary = False\n+\n email = Email(\n email=form.email.data,\n user=user,\n", "issue": "Reset password doesn't work if user doesn't have verified/primary email address.\n**Describe the bug**\r\n\r\nPassword reset is not working. I get the email, follow the link, enter the password twice, submit the form, and get back a page with no error messages. But the page title is \"Error processing form -- Log in\".\r\n\r\n**Expected behavior**\r\n\r\nI expected a positive confirmation that my password was reset, and to be able to login with the new password. Neither one happens.\r\n\r\n**To Reproduce**\r\n\r\n1. Start at `https://pypi.org/account/login/`\r\n2. Click \"Forgot password?\" link\r\n3. Enter the email address associated with my PyPI account (gerg.ward at gmail dot com)\r\n4. Check my inbox and the email is already there.\r\n5. Click the link: `https://pypi.org/account/reset-password/?token=.eJw1...`\r\n6. Generate a new password outside my browser.\r\n7. Copy/paste the new password into both password fields; text changes to \"Passwords match\" and the \"Reset password\" button is enabled.\r\n8. Click \"Reset password\"\r\n9. Receive the same \"Reset password\" form again, but with positive confirmation and no error messages. Only the page title gives a clue: \"Error processing form \u2013 Reset your password \u00b7 PyPI\". 
When I try to login with the new password (again, copy/pasted), it does not work.\r\n \r\nI'll attach HTML and screenshot.\r\n\r\n**My Platform**\r\n\r\nFirefox 87.0 on Ubuntu 20.04.2.\r\n\r\nSame behaviour with Chromium.\r\n\r\nNo HTTP proxy.\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nimport wtforms\nimport wtforms.fields.html5\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther\nfrom pyramid.view import view_config\nfrom sqlalchemy import or_\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse import forms\nfrom warehouse.accounts.interfaces import IUserService\nfrom warehouse.accounts.models import DisableReason, Email, User\nfrom warehouse.email import send_password_compromised_email\nfrom warehouse.packaging.models import JournalEntry, Project, Role\nfrom warehouse.utils.paginate import paginate_url_factory\n\n\n@view_config(\n route_name=\"admin.user.list\",\n renderer=\"admin/users/list.html\",\n permission=\"moderator\",\n uses_session=True,\n)\ndef user_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n users_query = request.db.query(User).order_by(User.username)\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"email\":\n filters.append(User.emails.any(Email.email.ilike(value)))\n else:\n filters.append(User.username.ilike(term))\n\n users_query = users_query.filter(or_(*filters))\n\n users = SQLAlchemyORMPage(\n users_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"users\": users, \"query\": q}\n\n\nclass EmailForm(forms.Form):\n\n email = wtforms.fields.html5.EmailField(\n validators=[wtforms.validators.DataRequired()]\n )\n primary = wtforms.fields.BooleanField()\n verified = wtforms.fields.BooleanField()\n public = wtforms.fields.BooleanField()\n\n\nclass UserForm(forms.Form):\n\n name = wtforms.StringField(\n validators=[wtforms.validators.Optional(), wtforms.validators.Length(max=100)]\n )\n\n is_active = wtforms.fields.BooleanField()\n is_superuser = wtforms.fields.BooleanField()\n is_moderator = wtforms.fields.BooleanField()\n is_psf_staff = wtforms.fields.BooleanField()\n\n prohibit_password_reset = wtforms.fields.BooleanField()\n\n emails = wtforms.fields.FieldList(wtforms.fields.FormField(EmailForm))\n\n\n@view_config(\n route_name=\"admin.user.detail\",\n renderer=\"admin/users/detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\n@view_config(\n route_name=\"admin.user.detail\",\n renderer=\"admin/users/detail.html\",\n permission=\"admin\",\n 
request_method=\"POST\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\ndef user_detail(request):\n try:\n user = (\n request.db.query(User).filter(User.id == request.matchdict[\"user_id\"]).one()\n )\n except NoResultFound:\n raise HTTPNotFound\n\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(Role.user == user)\n .order_by(Role.role_name, Role.project_id)\n .all()\n )\n\n form = UserForm(request.POST if request.method == \"POST\" else None, user)\n\n if request.method == \"POST\" and form.validate():\n form.populate_obj(user)\n return HTTPSeeOther(location=request.current_route_path())\n\n return {\"user\": user, \"form\": form, \"roles\": roles, \"add_email_form\": EmailForm()}\n\n\n@view_config(\n route_name=\"admin.user.add_email\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n)\ndef user_add_email(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n form = EmailForm(request.POST)\n\n if form.validate():\n email = Email(\n email=form.email.data,\n user=user,\n primary=form.primary.data,\n verified=form.verified.data,\n public=form.public.data,\n )\n request.db.add(email)\n request.session.flash(\n f\"Added email for user {user.username!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n\n@view_config(\n route_name=\"admin.user.delete\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n)\ndef user_delete(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n\n if user.username != request.params.get(\"username\"):\n request.session.flash(\"Wrong confirmation input\", queue=\"error\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n # Delete all the user's projects\n projects = request.db.query(Project).filter(\n Project.name.in_(\n request.db.query(Project.name)\n .join(Role.project)\n .filter(Role.user == user)\n .subquery()\n )\n )\n for project in projects:\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"remove project\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n projects.delete(synchronize_session=False)\n\n # Update all journals to point to `deleted-user` instead\n deleted_user = request.db.query(User).filter(User.username == \"deleted-user\").one()\n\n journals = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.submitted_by == user)\n .all()\n )\n\n for journal in journals:\n journal.submitted_by = deleted_user\n\n # Delete the user\n request.db.delete(user)\n request.db.add(\n JournalEntry(\n name=f\"user:{user.username}\",\n action=\"nuke user\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.session.flash(f\"Nuked user {user.username!r}\", queue=\"success\")\n return HTTPSeeOther(request.route_path(\"admin.user.list\"))\n\n\n@view_config(\n route_name=\"admin.user.reset_password\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n has_translations=True,\n uses_session=True,\n require_csrf=True,\n)\ndef user_reset_password(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n\n if user.username != request.params.get(\"username\"):\n request.session.flash(\"Wrong confirmation input\", queue=\"error\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n login_service = 
request.find_service(IUserService, context=None)\n send_password_compromised_email(request, user)\n login_service.disable_password(user.id, reason=DisableReason.CompromisedPassword)\n\n request.session.flash(f\"Reset password for {user.username!r}\", queue=\"success\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n", "path": "warehouse/admin/views/users.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nimport wtforms\nimport wtforms.fields.html5\nimport wtforms.validators\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther\nfrom pyramid.view import view_config\nfrom sqlalchemy import or_\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse import forms\nfrom warehouse.accounts.interfaces import IUserService\nfrom warehouse.accounts.models import DisableReason, Email, User\nfrom warehouse.email import send_password_compromised_email\nfrom warehouse.packaging.models import JournalEntry, Project, Role\nfrom warehouse.utils.paginate import paginate_url_factory\n\n\n@view_config(\n route_name=\"admin.user.list\",\n renderer=\"admin/users/list.html\",\n permission=\"moderator\",\n uses_session=True,\n)\ndef user_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n users_query = request.db.query(User).order_by(User.username)\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"email\":\n filters.append(User.emails.any(Email.email.ilike(value)))\n else:\n filters.append(User.username.ilike(term))\n\n users_query = users_query.filter(or_(*filters))\n\n users = SQLAlchemyORMPage(\n users_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"users\": users, \"query\": q}\n\n\nclass EmailForm(forms.Form):\n\n email = wtforms.fields.html5.EmailField(\n validators=[wtforms.validators.DataRequired()]\n )\n primary = wtforms.fields.BooleanField()\n verified = wtforms.fields.BooleanField()\n public = wtforms.fields.BooleanField()\n\n\nclass UserForm(forms.Form):\n\n name = wtforms.StringField(\n validators=[wtforms.validators.Optional(), wtforms.validators.Length(max=100)]\n )\n\n is_active = wtforms.fields.BooleanField()\n is_superuser = wtforms.fields.BooleanField()\n is_moderator = wtforms.fields.BooleanField()\n is_psf_staff = wtforms.fields.BooleanField()\n\n prohibit_password_reset = wtforms.fields.BooleanField()\n\n emails = wtforms.fields.FieldList(wtforms.fields.FormField(EmailForm))\n\n def validate_emails(self, field):\n # If there's no email on the account, it's ok. 
Otherwise, ensure\n # we have 1 primary email.\n if field.data and len([1 for email in field.data if email[\"primary\"]]) != 1:\n raise wtforms.validators.ValidationError(\n \"There must be exactly one primary email\"\n )\n\n\n@view_config(\n route_name=\"admin.user.detail\",\n renderer=\"admin/users/detail.html\",\n permission=\"moderator\",\n request_method=\"GET\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\n@view_config(\n route_name=\"admin.user.detail\",\n renderer=\"admin/users/detail.html\",\n permission=\"admin\",\n request_method=\"POST\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n)\ndef user_detail(request):\n try:\n user = (\n request.db.query(User).filter(User.id == request.matchdict[\"user_id\"]).one()\n )\n except NoResultFound:\n raise HTTPNotFound\n\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(Role.user == user)\n .order_by(Role.role_name, Role.project_id)\n .all()\n )\n\n form = UserForm(request.POST if request.method == \"POST\" else None, user)\n\n if request.method == \"POST\" and form.validate():\n form.populate_obj(user)\n request.session.flash(f\"User {user.username!r} updated\", queue=\"success\")\n return HTTPSeeOther(location=request.current_route_path())\n\n return {\"user\": user, \"form\": form, \"roles\": roles, \"add_email_form\": EmailForm()}\n\n\n@view_config(\n route_name=\"admin.user.add_email\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n)\ndef user_add_email(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n form = EmailForm(request.POST)\n\n if form.validate():\n\n if form.primary.data:\n for other in user.emails:\n other.primary = False\n\n email = Email(\n email=form.email.data,\n user=user,\n primary=form.primary.data,\n verified=form.verified.data,\n public=form.public.data,\n )\n request.db.add(email)\n request.session.flash(\n f\"Added email for user {user.username!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n\n@view_config(\n route_name=\"admin.user.delete\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n)\ndef user_delete(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n\n if user.username != request.params.get(\"username\"):\n request.session.flash(\"Wrong confirmation input\", queue=\"error\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n # Delete all the user's projects\n projects = request.db.query(Project).filter(\n Project.name.in_(\n request.db.query(Project.name)\n .join(Role.project)\n .filter(Role.user == user)\n .subquery()\n )\n )\n for project in projects:\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"remove project\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n projects.delete(synchronize_session=False)\n\n # Update all journals to point to `deleted-user` instead\n deleted_user = request.db.query(User).filter(User.username == \"deleted-user\").one()\n\n journals = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.submitted_by == user)\n .all()\n )\n\n for journal in journals:\n journal.submitted_by = deleted_user\n\n # Delete the user\n request.db.delete(user)\n request.db.add(\n JournalEntry(\n name=f\"user:{user.username}\",\n action=\"nuke user\",\n submitted_by=request.user,\n 
submitted_from=request.remote_addr,\n )\n )\n request.session.flash(f\"Nuked user {user.username!r}\", queue=\"success\")\n return HTTPSeeOther(request.route_path(\"admin.user.list\"))\n\n\n@view_config(\n route_name=\"admin.user.reset_password\",\n require_methods=[\"POST\"],\n permission=\"admin\",\n has_translations=True,\n uses_session=True,\n require_csrf=True,\n)\ndef user_reset_password(request):\n user = request.db.query(User).get(request.matchdict[\"user_id\"])\n\n if user.username != request.params.get(\"username\"):\n request.session.flash(\"Wrong confirmation input\", queue=\"error\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n\n login_service = request.find_service(IUserService, context=None)\n send_password_compromised_email(request, user)\n login_service.disable_password(user.id, reason=DisableReason.CompromisedPassword)\n\n request.session.flash(f\"Reset password for {user.username!r}\", queue=\"success\")\n return HTTPSeeOther(request.route_path(\"admin.user.detail\", user_id=user.id))\n", "path": "warehouse/admin/views/users.py"}]} | 2,899 | 386 |
gh_patches_debug_4272 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove scrollbars when they are not necessary
The home page shows a scroll bar even when there is nothing to scroll


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/organisation.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.conf import settings
8 from django.db.models import Q
9 from django.utils import six
10 from rest_framework.decorators import api_view
11 from rest_framework.exceptions import ParseError
12 from rest_framework.parsers import JSONParser
13 from rest_framework.response import Response
14 from rest_framework_xml.parsers import XMLParser
15 from rest_framework_xml.compat import etree
16
17 from akvo.rest.views.utils import int_or_none, get_qs_elements_for_page
18 from akvo.rsr.filters import location_choices, get_m49_filter
19 from akvo.rsr.models import Project, Organisation, Country
20 from akvo.rsr.views.utils import apply_keywords, org_projects
21 from ..serializers import OrganisationSerializer, OrganisationDirectorySerializer
22 from ..viewsets import BaseRSRViewSet
23
24
25 class AkvoOrganisationParser(XMLParser):
26 def parse(self, stream, media_type=None, parser_context=None):
27 assert etree, 'XMLParser requires defusedxml to be installed'
28
29 parser_context = parser_context or {}
30 encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)
31 parser = etree.DefusedXMLParser(encoding=encoding)
32 try:
33 tree = etree.parse(stream, parser=parser, forbid_dtd=True)
34 except (etree.ParseError, ValueError) as exc:
35 raise ParseError('XML parse error - %s' % six.text_type(exc))
36 return self.organisation_data_from_etree(tree.getroot())
37
38 def organisation_data_from_etree(self, tree):
39 def find_text(tree, str):
40 element = tree.find(str)
41 if element is None:
42 return ''
43 return element.text.strip() if element.text else ""
44
45 def location_data(location_tree):
46 if location_tree is None:
47 return []
48 iso_code = find_text(location_tree, 'iso_code').lower()
49 country, created = Country.objects.get_or_create(**Country.fields_from_iso_code(iso_code))
50 country = country.id
51 latitude = find_text(location_tree, 'latitude') or 0
52 longitude = find_text(location_tree, 'longitude') or 0
53 primary = True
54 return [dict(latitude=latitude, longitude=longitude, country=country, primary=primary)]
55
56 long_name = find_text(tree, 'name')
57 name = long_name[:25]
58 description = find_text(tree, 'description')
59 url = find_text(tree, 'url')
60 iati_type = find_text(tree, 'iati_organisation_type')
61 new_organisation_type = int(iati_type) if iati_type else 22
62 organisation_type = Organisation.org_type_from_iati_type(new_organisation_type)
63 locations = location_data(tree.find('location/object'))
64 return dict(
65 name=name, long_name=long_name, description=description, url=url,
66 organisation_type=organisation_type, new_organisation_type=new_organisation_type,
67 locations=locations
68 )
69
70
71 class OrganisationViewSet(BaseRSRViewSet):
72 """
73 API endpoint that allows organisations to be viewed or edited.
74 """
75 queryset = Organisation.objects.all()
76 serializer_class = OrganisationSerializer
77 parser_classes = (AkvoOrganisationParser, JSONParser,)
78
79
80 @api_view(['GET'])
81 def organisation_directory(request):
82 """REST view for the update directory."""
83
84 page = request.rsr_page
85 all_organisations = Organisation.objects.all() if not page else _page_organisations(page)
86
87 # Filter updates based on query parameters
88 filter_, text_filter = _create_filters_query(request)
89 organisations = (
90 all_organisations.filter(filter_).distinct() if filter_ is not None else all_organisations
91 )
92 organisations_text_filtered = (
93 organisations.filter(text_filter) if text_filter is not None else organisations
94 )
95 if organisations_text_filtered.exists():
96 organisations = organisations_text_filtered
97
98 # Get the relevant data for typeaheads based on filtered organisations (minus
99 # text filtering, if no organisations were found)
100 locations = [
101 {'id': choice[0], 'name': choice[1]}
102 for choice in location_choices(organisations)
103 ]
104
105 display_organisations = get_qs_elements_for_page(organisations_text_filtered, request)
106
107 # Get related objects of page at once
108 response = {
109 'project_count': all_organisations.count(),
110 'projects': OrganisationDirectorySerializer(display_organisations, many=True).data,
111 'location': locations,
112 }
113 return Response(response)
114
115
116 def _public_projects():
117 """Return all public projects."""
118 return Project.objects.public().published().select_related('partners')
119
120
121 def _page_organisations(page):
122 """Dig out the list or organisations to use."""
123 projects = org_projects(page.organisation) if page.partner_projects else _public_projects()
124 keyword_projects = apply_keywords(page, projects)
125 return keyword_projects.all_partners()
126
127
128 def _create_filters_query(request):
129 """Returns a Q object expression based on query parameters."""
130 location_param = int_or_none(request.GET.get('location'))
131 title_or_subtitle_param = request.GET.get('title_or_subtitle')
132
133 location_filter = (
134 get_m49_filter(location_param, use_recipient_country=False) if location_param else None
135 )
136 title_filter = (
137 Q(name__icontains=title_or_subtitle_param) |
138 Q(long_name__icontains=title_or_subtitle_param)
139 ) if title_or_subtitle_param else None
140 all_filters = [
141 location_filter,
142 ]
143 filters = filter(None, all_filters)
144 return reduce(lambda x, y: x & y, filters) if filters else None, title_filter
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/views/organisation.py b/akvo/rest/views/organisation.py
--- a/akvo/rest/views/organisation.py
+++ b/akvo/rest/views/organisation.py
@@ -106,7 +106,7 @@
# Get related objects of page at once
response = {
- 'project_count': all_organisations.count(),
+ 'project_count': organisations_text_filtered.count(),
'projects': OrganisationDirectorySerializer(display_organisations, many=True).data,
'location': locations,
}
| {"golden_diff": "diff --git a/akvo/rest/views/organisation.py b/akvo/rest/views/organisation.py\n--- a/akvo/rest/views/organisation.py\n+++ b/akvo/rest/views/organisation.py\n@@ -106,7 +106,7 @@\n \n # Get related objects of page at once\n response = {\n- 'project_count': all_organisations.count(),\n+ 'project_count': organisations_text_filtered.count(),\n 'projects': OrganisationDirectorySerializer(display_organisations, many=True).data,\n 'location': locations,\n }\n", "issue": "Remove scrollbars when they are not necessary\nThe home page shows a scroll bar even when there is nothing to scroll\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.conf import settings\nfrom django.db.models import Q\nfrom django.utils import six\nfrom rest_framework.decorators import api_view\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import JSONParser\nfrom rest_framework.response import Response\nfrom rest_framework_xml.parsers import XMLParser\nfrom rest_framework_xml.compat import etree\n\nfrom akvo.rest.views.utils import int_or_none, get_qs_elements_for_page\nfrom akvo.rsr.filters import location_choices, get_m49_filter\nfrom akvo.rsr.models import Project, Organisation, Country\nfrom akvo.rsr.views.utils import apply_keywords, org_projects\nfrom ..serializers import OrganisationSerializer, OrganisationDirectorySerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass AkvoOrganisationParser(XMLParser):\n def parse(self, stream, media_type=None, parser_context=None):\n assert etree, 'XMLParser requires defusedxml to be installed'\n\n parser_context = parser_context or {}\n encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)\n parser = etree.DefusedXMLParser(encoding=encoding)\n try:\n tree = etree.parse(stream, parser=parser, forbid_dtd=True)\n except (etree.ParseError, ValueError) as exc:\n raise ParseError('XML parse error - %s' % six.text_type(exc))\n return self.organisation_data_from_etree(tree.getroot())\n\n def organisation_data_from_etree(self, tree):\n def find_text(tree, str):\n element = tree.find(str)\n if element is None:\n return ''\n return element.text.strip() if element.text else \"\"\n\n def location_data(location_tree):\n if location_tree is None:\n return []\n iso_code = find_text(location_tree, 'iso_code').lower()\n country, created = Country.objects.get_or_create(**Country.fields_from_iso_code(iso_code))\n country = country.id\n latitude = find_text(location_tree, 'latitude') or 0\n longitude = find_text(location_tree, 'longitude') or 0\n primary = True\n return [dict(latitude=latitude, longitude=longitude, country=country, primary=primary)]\n\n long_name = find_text(tree, 'name')\n name = long_name[:25]\n description = find_text(tree, 'description')\n url = find_text(tree, 'url')\n iati_type = find_text(tree, 'iati_organisation_type')\n new_organisation_type = int(iati_type) if iati_type else 22\n organisation_type = Organisation.org_type_from_iati_type(new_organisation_type)\n locations = location_data(tree.find('location/object'))\n return dict(\n name=name, long_name=long_name, description=description, url=url,\n organisation_type=organisation_type, new_organisation_type=new_organisation_type,\n locations=locations\n 
)\n\n\nclass OrganisationViewSet(BaseRSRViewSet):\n \"\"\"\n API endpoint that allows organisations to be viewed or edited.\n \"\"\"\n queryset = Organisation.objects.all()\n serializer_class = OrganisationSerializer\n parser_classes = (AkvoOrganisationParser, JSONParser,)\n\n\n@api_view(['GET'])\ndef organisation_directory(request):\n \"\"\"REST view for the update directory.\"\"\"\n\n page = request.rsr_page\n all_organisations = Organisation.objects.all() if not page else _page_organisations(page)\n\n # Filter updates based on query parameters\n filter_, text_filter = _create_filters_query(request)\n organisations = (\n all_organisations.filter(filter_).distinct() if filter_ is not None else all_organisations\n )\n organisations_text_filtered = (\n organisations.filter(text_filter) if text_filter is not None else organisations\n )\n if organisations_text_filtered.exists():\n organisations = organisations_text_filtered\n\n # Get the relevant data for typeaheads based on filtered organisations (minus\n # text filtering, if no organisations were found)\n locations = [\n {'id': choice[0], 'name': choice[1]}\n for choice in location_choices(organisations)\n ]\n\n display_organisations = get_qs_elements_for_page(organisations_text_filtered, request)\n\n # Get related objects of page at once\n response = {\n 'project_count': all_organisations.count(),\n 'projects': OrganisationDirectorySerializer(display_organisations, many=True).data,\n 'location': locations,\n }\n return Response(response)\n\n\ndef _public_projects():\n \"\"\"Return all public projects.\"\"\"\n return Project.objects.public().published().select_related('partners')\n\n\ndef _page_organisations(page):\n \"\"\"Dig out the list or organisations to use.\"\"\"\n projects = org_projects(page.organisation) if page.partner_projects else _public_projects()\n keyword_projects = apply_keywords(page, projects)\n return keyword_projects.all_partners()\n\n\ndef _create_filters_query(request):\n \"\"\"Returns a Q object expression based on query parameters.\"\"\"\n location_param = int_or_none(request.GET.get('location'))\n title_or_subtitle_param = request.GET.get('title_or_subtitle')\n\n location_filter = (\n get_m49_filter(location_param, use_recipient_country=False) if location_param else None\n )\n title_filter = (\n Q(name__icontains=title_or_subtitle_param) |\n Q(long_name__icontains=title_or_subtitle_param)\n ) if title_or_subtitle_param else None\n all_filters = [\n location_filter,\n ]\n filters = filter(None, all_filters)\n return reduce(lambda x, y: x & y, filters) if filters else None, title_filter\n", "path": "akvo/rest/views/organisation.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.conf import settings\nfrom django.db.models import Q\nfrom django.utils import six\nfrom rest_framework.decorators import api_view\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import JSONParser\nfrom rest_framework.response import Response\nfrom rest_framework_xml.parsers import XMLParser\nfrom rest_framework_xml.compat import etree\n\nfrom akvo.rest.views.utils import int_or_none, get_qs_elements_for_page\nfrom akvo.rsr.filters import location_choices, get_m49_filter\nfrom akvo.rsr.models import Project, Organisation, 
Country\nfrom akvo.rsr.views.utils import apply_keywords, org_projects\nfrom ..serializers import OrganisationSerializer, OrganisationDirectorySerializer\nfrom ..viewsets import BaseRSRViewSet\n\n\nclass AkvoOrganisationParser(XMLParser):\n def parse(self, stream, media_type=None, parser_context=None):\n assert etree, 'XMLParser requires defusedxml to be installed'\n\n parser_context = parser_context or {}\n encoding = parser_context.get('encoding', settings.DEFAULT_CHARSET)\n parser = etree.DefusedXMLParser(encoding=encoding)\n try:\n tree = etree.parse(stream, parser=parser, forbid_dtd=True)\n except (etree.ParseError, ValueError) as exc:\n raise ParseError('XML parse error - %s' % six.text_type(exc))\n return self.organisation_data_from_etree(tree.getroot())\n\n def organisation_data_from_etree(self, tree):\n def find_text(tree, str):\n element = tree.find(str)\n if element is None:\n return ''\n return element.text.strip() if element.text else \"\"\n\n def location_data(location_tree):\n if location_tree is None:\n return []\n iso_code = find_text(location_tree, 'iso_code').lower()\n country, created = Country.objects.get_or_create(**Country.fields_from_iso_code(iso_code))\n country = country.id\n latitude = find_text(location_tree, 'latitude') or 0\n longitude = find_text(location_tree, 'longitude') or 0\n primary = True\n return [dict(latitude=latitude, longitude=longitude, country=country, primary=primary)]\n\n long_name = find_text(tree, 'name')\n name = long_name[:25]\n description = find_text(tree, 'description')\n url = find_text(tree, 'url')\n iati_type = find_text(tree, 'iati_organisation_type')\n new_organisation_type = int(iati_type) if iati_type else 22\n organisation_type = Organisation.org_type_from_iati_type(new_organisation_type)\n locations = location_data(tree.find('location/object'))\n return dict(\n name=name, long_name=long_name, description=description, url=url,\n organisation_type=organisation_type, new_organisation_type=new_organisation_type,\n locations=locations\n )\n\n\nclass OrganisationViewSet(BaseRSRViewSet):\n \"\"\"\n API endpoint that allows organisations to be viewed or edited.\n \"\"\"\n queryset = Organisation.objects.all()\n serializer_class = OrganisationSerializer\n parser_classes = (AkvoOrganisationParser, JSONParser,)\n\n\n@api_view(['GET'])\ndef organisation_directory(request):\n \"\"\"REST view for the update directory.\"\"\"\n\n page = request.rsr_page\n all_organisations = Organisation.objects.all() if not page else _page_organisations(page)\n\n # Filter updates based on query parameters\n filter_, text_filter = _create_filters_query(request)\n organisations = (\n all_organisations.filter(filter_).distinct() if filter_ is not None else all_organisations\n )\n organisations_text_filtered = (\n organisations.filter(text_filter) if text_filter is not None else organisations\n )\n if organisations_text_filtered.exists():\n organisations = organisations_text_filtered\n\n # Get the relevant data for typeaheads based on filtered organisations (minus\n # text filtering, if no organisations were found)\n locations = [\n {'id': choice[0], 'name': choice[1]}\n for choice in location_choices(organisations)\n ]\n\n display_organisations = get_qs_elements_for_page(organisations_text_filtered, request)\n\n # Get related objects of page at once\n response = {\n 'project_count': organisations_text_filtered.count(),\n 'projects': OrganisationDirectorySerializer(display_organisations, many=True).data,\n 'location': locations,\n }\n return 
Response(response)\n\n\ndef _public_projects():\n \"\"\"Return all public projects.\"\"\"\n return Project.objects.public().published().select_related('partners')\n\n\ndef _page_organisations(page):\n \"\"\"Dig out the list or organisations to use.\"\"\"\n projects = org_projects(page.organisation) if page.partner_projects else _public_projects()\n keyword_projects = apply_keywords(page, projects)\n return keyword_projects.all_partners()\n\n\ndef _create_filters_query(request):\n \"\"\"Returns a Q object expression based on query parameters.\"\"\"\n location_param = int_or_none(request.GET.get('location'))\n title_or_subtitle_param = request.GET.get('title_or_subtitle')\n\n location_filter = (\n get_m49_filter(location_param, use_recipient_country=False) if location_param else None\n )\n title_filter = (\n Q(name__icontains=title_or_subtitle_param) |\n Q(long_name__icontains=title_or_subtitle_param)\n ) if title_or_subtitle_param else None\n all_filters = [\n location_filter,\n ]\n filters = filter(None, all_filters)\n return reduce(lambda x, y: x & y, filters) if filters else None, title_filter\n", "path": "akvo/rest/views/organisation.py"}]} | 1,967 | 121 |
gh_patches_debug_23524 | rasdani/github-patches | git_diff | pypa__setuptools-2316 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reconsidering distutils replacement strategy
I first expressed concerns about the fact that `distutils` needs to be imported *after* `setuptools` in [this post](https://github.com/pypa/setuptools/pull/2143#issuecomment-637586551), and I did not realize that we had not adequately addressed this before the `distutils` adoption. I think we should re-consider the way this is done, to avoid any sort of requirement on the sort order.
Currently, when you `import setuptools`, [`distutils` is injected into `sys.modules`](https://github.com/pypa/setuptools/blob/37d81f4ce8f08c4baf44b6ff0f3f1bd3f6b2a127/setuptools/distutils_patch.py#L48-L61).
The problem here is that it requires importing `setuptools` before `distutils`, which most people will *not* be doing right now (by almost all common import sort orders, `distutils` comes first).
@jaraco [explains why it's done this way here](https://github.com/pypa/setuptools/issues/2230#issuecomment-657327796):
> I did consider another approach, where Setuptools could add a .pth file that would import setuptools.distutils_patch. Such an approach would always happen earlier and so would not be subject to the race that's happening here, but it would also happen whether or not setuptools was imported (on any invocation of Python in that environment).
I think one preferred solution would be for us to install a `distutils` package that would be imported before `distutils`. The problem is that in the standard `sys.path`, `site-packages` comes *after* the standard library. Since anything we do here will be a tremendous hack, I'm thinking that maybe the most surgical way to accomplish this is to do something like this:
1. Create a `setuptools._distutils/import_hack/distutils/__init__.py` (where there's no `__init__.py` in `import_hack`) that invokes `import setuptools.distutils_patch`.
2. Add a `.pth` file that injects `$SITEPACKAGES/setuptools/_import_hack/` into `sys.path` *before* the standard library.
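For concreteness, a rough sketch of what those two pieces might look like (the file names, the `sysconfig` lookup, and the exact wiring below are illustrative assumptions, not a settled layout):

```python
# Hypothetical shim: $SITEPACKAGES/setuptools/_import_hack/distutils/__init__.py
# If the _import_hack directory is searched before the stdlib, this package wins
# the "import distutils" lookup and immediately hands off to the patching module,
# which replaces the 'distutils' entry in sys.modules with setuptools' copy.
import setuptools.distutils_patch  # noqa: F401

# Hypothetical .pth file: $SITEPACKAGES/setuptools_import_hack.pth (one logical line).
# site.py executes .pth lines that start with "import", so this can prepend the
# shim directory ahead of the standard library at interpreter startup.
import sys, os, sysconfig; sys.path.insert(0, os.path.join(sysconfig.get_paths()["purelib"], "setuptools", "_import_hack"))
```

The cost noted in the quote above still applies: the `.pth` line runs on every Python invocation in that environment, whether or not setuptools is ever imported.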
Another option I think we should consider is stepping up the timetable on providing everything `distutils` provides directly from the `setuptools` namespace, [as I described here](https://github.com/pypa/setuptools/pull/2143#issuecomment-637586551):
> I personally would prefer it if we made the setuptools namespace the canonical namespace, and have distutils just import symbols from setuptools and possibly wrap them in deprecation warnings.
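A minimal sketch of that kind of shim, assuming the vendored code lives at `setuptools._distutils` (the warning text and the choice of `distutils/core.py` here are made up for illustration):

```python
# Hypothetical: distutils/core.py reduced to a thin re-export over setuptools
import warnings

from setuptools._distutils.core import *  # noqa: F401,F403

warnings.warn(
    "importing from distutils is deprecated; import from setuptools instead",
    DeprecationWarning,
    stacklevel=2,
)
```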
Right now, there are things that you *must* import from `distutils`, and so we can't say, "You can avoid an issue with the sort order by just not importing `distutils`". If we provide everything that `distutils` provides in the `setuptools` namespace, we can just say, "Just use the `setuptools` version and you won't have this problem."
Personally, I think we should do a combination of both — make it so importing `distutils` still works in whatever order you do it in *and* move everything to the canonical `setuptools` namespace. That will make it easier to actively deprecate direct use of `distutils`, and pave the way for us eventually removing the need for the `.pth` file (though the fastest road to not having the `.pth` file will be via removing `distutils` from the standard library).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `_distutils_hack/__init__.py`
Content:
```
1 import sys
2 import os
3 import re
4 import importlib
5 import warnings
6
7
8 is_pypy = '__pypy__' in sys.builtin_module_names
9
10
11 def warn_distutils_present():
12 if 'distutils' not in sys.modules:
13 return
14 if is_pypy and sys.version_info < (3, 7):
15 # PyPy for 3.6 unconditionally imports distutils, so bypass the warning
16 # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
17 return
18 warnings.warn(
19 "Distutils was imported before Setuptools. This usage is discouraged "
20 "and may exhibit undesirable behaviors or errors. Please use "
21 "Setuptools' objects directly or at least import Setuptools first.")
22
23
24 def clear_distutils():
25 if 'distutils' not in sys.modules:
26 return
27 warnings.warn("Setuptools is replacing distutils.")
28 mods = [name for name in sys.modules if re.match(r'distutils\b', name)]
29 for name in mods:
30 del sys.modules[name]
31
32
33 def enabled():
34 """
35 Allow selection of distutils by environment variable.
36 """
37 which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')
38 return which == 'local'
39
40
41 def ensure_local_distutils():
42 clear_distutils()
43 distutils = importlib.import_module('setuptools._distutils')
44 distutils.__name__ = 'distutils'
45 sys.modules['distutils'] = distutils
46
47 # sanity check that submodules load as expected
48 core = importlib.import_module('distutils.core')
49 assert '_distutils' in core.__file__, core.__file__
50
51
52 def do_override():
53 """
54 Ensure that the local copy of distutils is preferred over stdlib.
55
56 See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
57 for more motivation.
58 """
59 warn_distutils_present()
60 if enabled():
61 ensure_local_distutils()
62
63
64 class DistutilsMetaFinder:
65 def find_spec(self, fullname, path, target=None):
66 if path is not None or fullname != "distutils":
67 return None
68
69 return self.get_distutils_spec()
70
71 def get_distutils_spec(self):
72 import importlib.util
73
74 class DistutilsLoader(importlib.util.abc.Loader):
75
76 def create_module(self, spec):
77 return importlib.import_module('._distutils', 'setuptools')
78
79 def exec_module(self, module):
80 pass
81
82 return importlib.util.spec_from_loader('distutils', DistutilsLoader())
83
84
85 DISTUTILS_FINDER = DistutilsMetaFinder()
86
87
88 def add_shim():
89 sys.meta_path.insert(0, DISTUTILS_FINDER)
90
91
92 def remove_shim():
93 try:
94 sys.meta_path.remove(DISTUTILS_FINDER)
95 except ValueError:
96 pass
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/_distutils_hack/__init__.py b/_distutils_hack/__init__.py
--- a/_distutils_hack/__init__.py
+++ b/_distutils_hack/__init__.py
@@ -16,9 +16,12 @@
# https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250
return
warnings.warn(
- "Distutils was imported before Setuptools. This usage is discouraged "
- "and may exhibit undesirable behaviors or errors. Please use "
- "Setuptools' objects directly or at least import Setuptools first.")
+ "Distutils was imported before Setuptools, but importing Setuptools "
+ "also replaces the `distutils` module in `sys.modules`. This may lead "
+ "to undesirable behaviors or errors. To avoid these issues, avoid "
+ "using distutils directly, ensure that setuptools is installed in the "
+ "traditional way (e.g. not an editable install), and/or make sure that "
+ "setuptools is always imported before distutils.")
def clear_distutils():
@@ -56,8 +59,8 @@
See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401
for more motivation.
"""
- warn_distutils_present()
if enabled():
+ warn_distutils_present()
ensure_local_distutils()
| {"golden_diff": "diff --git a/_distutils_hack/__init__.py b/_distutils_hack/__init__.py\n--- a/_distutils_hack/__init__.py\n+++ b/_distutils_hack/__init__.py\n@@ -16,9 +16,12 @@\n # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250\n return\n warnings.warn(\n- \"Distutils was imported before Setuptools. This usage is discouraged \"\n- \"and may exhibit undesirable behaviors or errors. Please use \"\n- \"Setuptools' objects directly or at least import Setuptools first.\")\n+ \"Distutils was imported before Setuptools, but importing Setuptools \"\n+ \"also replaces the `distutils` module in `sys.modules`. This may lead \"\n+ \"to undesirable behaviors or errors. To avoid these issues, avoid \"\n+ \"using distutils directly, ensure that setuptools is installed in the \"\n+ \"traditional way (e.g. not an editable install), and/or make sure that \"\n+ \"setuptools is always imported before distutils.\")\n \n \n def clear_distutils():\n@@ -56,8 +59,8 @@\n See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401\n for more motivation.\n \"\"\"\n- warn_distutils_present()\n if enabled():\n+ warn_distutils_present()\n ensure_local_distutils()\n", "issue": "Reconsidering distutils replacement strategy\nI first expressed concerns about the fact that `distutils` needs to be imported *after* `setuptools` in [this post](https://github.com/pypa/setuptools/pull/2143#issuecomment-637586551), and I did not realize that we had not adequately addressed this before the `distutils` adoption. I think we should re-consider the way this is done, to avoid any sort of requirement on the sort order.\r\n\r\nCurrently, when you `import setuptools`, [`distutils` is injected into `sys.modules`](https://github.com/pypa/setuptools/blob/37d81f4ce8f08c4baf44b6ff0f3f1bd3f6b2a127/setuptools/distutils_patch.py#L48-L61).\r\n\r\nThe problem here is that it requires importing `setuptools` before `distutils`, which most people will *not* be doing right now (by almost all common import sort orders, `distutils` comes first).\r\n\r\n@jaraco [explains why it's done this way here](https://github.com/pypa/setuptools/issues/2230#issuecomment-657327796):\r\n\r\n> I did consider another approach, where Setuptools could add a .pth file that would import setuptools.distutils_patch. Such an approach would always happen earlier and so would not be subject to the race that's happening here, but it would also happen whether or not setuptools was imported (on any invocation of Python in that environment).\r\n\r\nI think one preferred solution would be for us to install a `distutils` package that would be imported before `distutils`. The problem is that in the standard `sys.path`, `site-packages` comes *after* the standard library. Since anything we do here will be a tremendous hack, I'm thinking that maybe the most surgical way to accomplish this is to do something like this:\r\n\r\n1. Create a `setuptools._distutils/import_hack/distutils/__init__.py` (where there's no `__init__.py` in `import_hack`) that invokes `import setuptools.distutils_patch`.\r\n2. 
Add a `.pth` file that injects `$SITEPACKAGES/setuptools/_import_hack/` into `sys.path` *before* the standard library.\r\n\r\nAnother option I think we should consider is stepping up the timetable on providing everything `distutils` provides directly from the `setuptools` namespace, [as I described here](https://github.com/pypa/setuptools/pull/2143#issuecomment-637586551):\r\n\r\n> I personally would prefer it if we made the setuptools namespace the canonical namespace, and have distutils just import symbols from setuptools and possibly wrap them in deprecation warnings.\r\n\r\nRight now, there are things that you *must* import from `distutils`, and so we can't say, \"You can avoid an issue with the sort order by just not importing `distutils`\". If we provide everything that `distutils` provides in the `setuptools` namespace, we can just say, \"Just use the `setuptools` version and you won't have this problem.\"\r\n\r\nPersonally, I think we should do a combination of both \u2014 make it so importing `distutils` still works in whatever order you do it in *and* move everything to the canonical `setuptools` namespace. That will make it easier to actively deprecate direct use of `distutils`, and pave the way for us eventually removing the need for the `.pth` file (though the fastest road to not having the `.pth` file will be via removing `distutils` from the standard library).\n", "before_files": [{"content": "import sys\nimport os\nimport re\nimport importlib\nimport warnings\n\n\nis_pypy = '__pypy__' in sys.builtin_module_names\n\n\ndef warn_distutils_present():\n if 'distutils' not in sys.modules:\n return\n if is_pypy and sys.version_info < (3, 7):\n # PyPy for 3.6 unconditionally imports distutils, so bypass the warning\n # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250\n return\n warnings.warn(\n \"Distutils was imported before Setuptools. This usage is discouraged \"\n \"and may exhibit undesirable behaviors or errors. 
Please use \"\n \"Setuptools' objects directly or at least import Setuptools first.\")\n\n\ndef clear_distutils():\n if 'distutils' not in sys.modules:\n return\n warnings.warn(\"Setuptools is replacing distutils.\")\n mods = [name for name in sys.modules if re.match(r'distutils\\b', name)]\n for name in mods:\n del sys.modules[name]\n\n\ndef enabled():\n \"\"\"\n Allow selection of distutils by environment variable.\n \"\"\"\n which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')\n return which == 'local'\n\n\ndef ensure_local_distutils():\n clear_distutils()\n distutils = importlib.import_module('setuptools._distutils')\n distutils.__name__ = 'distutils'\n sys.modules['distutils'] = distutils\n\n # sanity check that submodules load as expected\n core = importlib.import_module('distutils.core')\n assert '_distutils' in core.__file__, core.__file__\n\n\ndef do_override():\n \"\"\"\n Ensure that the local copy of distutils is preferred over stdlib.\n\n See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401\n for more motivation.\n \"\"\"\n warn_distutils_present()\n if enabled():\n ensure_local_distutils()\n\n\nclass DistutilsMetaFinder:\n def find_spec(self, fullname, path, target=None):\n if path is not None or fullname != \"distutils\":\n return None\n\n return self.get_distutils_spec()\n\n def get_distutils_spec(self):\n import importlib.util\n\n class DistutilsLoader(importlib.util.abc.Loader):\n\n def create_module(self, spec):\n return importlib.import_module('._distutils', 'setuptools')\n\n def exec_module(self, module):\n pass\n\n return importlib.util.spec_from_loader('distutils', DistutilsLoader())\n\n\nDISTUTILS_FINDER = DistutilsMetaFinder()\n\n\ndef add_shim():\n sys.meta_path.insert(0, DISTUTILS_FINDER)\n\n\ndef remove_shim():\n try:\n sys.meta_path.remove(DISTUTILS_FINDER)\n except ValueError:\n pass\n", "path": "_distutils_hack/__init__.py"}], "after_files": [{"content": "import sys\nimport os\nimport re\nimport importlib\nimport warnings\n\n\nis_pypy = '__pypy__' in sys.builtin_module_names\n\n\ndef warn_distutils_present():\n if 'distutils' not in sys.modules:\n return\n if is_pypy and sys.version_info < (3, 7):\n # PyPy for 3.6 unconditionally imports distutils, so bypass the warning\n # https://foss.heptapod.net/pypy/pypy/-/blob/be829135bc0d758997b3566062999ee8b23872b4/lib-python/3/site.py#L250\n return\n warnings.warn(\n \"Distutils was imported before Setuptools, but importing Setuptools \"\n \"also replaces the `distutils` module in `sys.modules`. This may lead \"\n \"to undesirable behaviors or errors. To avoid these issues, avoid \"\n \"using distutils directly, ensure that setuptools is installed in the \"\n \"traditional way (e.g. 
not an editable install), and/or make sure that \"\n \"setuptools is always imported before distutils.\")\n\n\ndef clear_distutils():\n if 'distutils' not in sys.modules:\n return\n warnings.warn(\"Setuptools is replacing distutils.\")\n mods = [name for name in sys.modules if re.match(r'distutils\\b', name)]\n for name in mods:\n del sys.modules[name]\n\n\ndef enabled():\n \"\"\"\n Allow selection of distutils by environment variable.\n \"\"\"\n which = os.environ.get('SETUPTOOLS_USE_DISTUTILS', 'stdlib')\n return which == 'local'\n\n\ndef ensure_local_distutils():\n clear_distutils()\n distutils = importlib.import_module('setuptools._distutils')\n distutils.__name__ = 'distutils'\n sys.modules['distutils'] = distutils\n\n # sanity check that submodules load as expected\n core = importlib.import_module('distutils.core')\n assert '_distutils' in core.__file__, core.__file__\n\n\ndef do_override():\n \"\"\"\n Ensure that the local copy of distutils is preferred over stdlib.\n\n See https://github.com/pypa/setuptools/issues/417#issuecomment-392298401\n for more motivation.\n \"\"\"\n if enabled():\n warn_distutils_present()\n ensure_local_distutils()\n\n\nclass DistutilsMetaFinder:\n def find_spec(self, fullname, path, target=None):\n if path is not None or fullname != \"distutils\":\n return None\n\n return self.get_distutils_spec()\n\n def get_distutils_spec(self):\n import importlib.util\n\n class DistutilsLoader(importlib.util.abc.Loader):\n\n def create_module(self, spec):\n return importlib.import_module('._distutils', 'setuptools')\n\n def exec_module(self, module):\n pass\n\n return importlib.util.spec_from_loader('distutils', DistutilsLoader())\n\n\nDISTUTILS_FINDER = DistutilsMetaFinder()\n\n\ndef add_shim():\n sys.meta_path.insert(0, DISTUTILS_FINDER)\n\n\ndef remove_shim():\n try:\n sys.meta_path.remove(DISTUTILS_FINDER)\n except ValueError:\n pass\n", "path": "_distutils_hack/__init__.py"}]} | 1,916 | 360 |
gh_patches_debug_17513 | rasdani/github-patches | git_diff | bentoml__BentoML-337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failing to create an API on Windows
**Describe the bug**
Failing to create an API on Windows when running
`bentoml serve {saved_path}`
**To Reproduce**
Steps to reproduce the behavior:
1. Running the command fails with an error that the `fcntl` module was not found
2. Fixed it by creating a stub `fcntl.py` with the following contents:
```
def fcntl(fd, op, arg=0):
    return 0

def ioctl(fd, op, arg=0, mutable_flag=True):
    if mutable_flag:
        return 0
    else:
        return ""

def flock(fd, op):
    return

def lockf(fd, operation, length=0, start=0, whence=0):
    return
```
3. Then `import pwd` started failing, for which I don't have any workaround so far.
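A generic pattern for guarding such Unix-only imports (just a sketch of the usual workaround, not BentoML's actual code; the `getpass` fallback is an assumption about what the `pwd` lookup is used for):

```python
try:
    import fcntl  # Unix-only: file locking
    import pwd    # Unix-only: user account lookups
except ImportError:  # e.g. on Windows
    fcntl = None
    pwd = None


def current_username():
    """Look up the current user without requiring the pwd module."""
    if pwd is not None:
        import os
        return pwd.getpwuid(os.getuid()).pw_name
    import getpass
    return getpass.getuser()
```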
**Environment:**
- OS: [Windows 10]
- Python/BentoML Version [Python3.6]
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bentoml/cli/__init__.py`
Content:
```
1 # Copyright 2019 Atalaya Tech, Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import re
20 import os
21 import json
22 import click
23 import logging
24 import tempfile
25 import subprocess
26
27 from ruamel.yaml import YAML
28
29 from bentoml.archive import (
30 load,
31 load_service_api,
32 load_bentoml_config,
33 load_bento_service_metadata,
34 )
35 from bentoml.server import BentoAPIServer, get_docs
36 from bentoml.server.gunicorn_server import GunicornBentoServer
37 from bentoml.cli.click_utils import BentoMLCommandGroup, conditional_argument, _echo
38 from bentoml.cli.deployment import get_deployment_sub_command
39 from bentoml.cli.config import get_configuration_sub_command
40 from bentoml.utils import Path, ProtoMessageToDict
41 from bentoml.utils.log import configure_logging
42 from bentoml.utils.usage_stats import track_cli
43
44
45 def escape_shell_params(param):
46 k, v = param.split('=')
47 v = re.sub(r'([^a-zA-Z0-9])', r'\\\1', v)
48 return '{}={}'.format(k, v)
49
50
51 def create_bento_service_cli(archive_path=None):
52 # pylint: disable=unused-variable
53
54 @click.group(cls=BentoMLCommandGroup)
55 @click.option(
56 '-q',
57 '--quiet',
58 is_flag=True,
59 default=False,
60 help="Hide process logs and only print command results",
61 )
62 @click.option(
63 '--verbose',
64 is_flag=True,
65 default=False,
66 help="Print verbose debugging information for BentoML developer",
67 )
68 @click.version_option()
69 @click.pass_context
70 def bentoml_cli(ctx, verbose, quiet):
71 """
72 BentoML CLI tool
73 """
74 ctx.verbose = verbose
75 ctx.quiet = quiet
76
77 if verbose:
78 configure_logging(logging.DEBUG)
79 elif quiet:
80 configure_logging(logging.ERROR)
81 else:
82 configure_logging() # use default setting in local bentoml.cfg
83
84 # Example Usage: bentoml API_NAME /SAVED_ARCHIVE_PATH --input=INPUT
85 @bentoml_cli.command(
86 default_command=True,
87 default_command_usage="API_NAME BENTO_ARCHIVE_PATH --input=INPUT",
88 default_command_display_name="<API_NAME>",
89 short_help="Run API function",
90 help="Run a API defined in saved BentoArchive with cli args as input",
91 context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),
92 )
93 @click.argument("api-name", type=click.STRING)
94 @conditional_argument(archive_path is None, "archive-path", type=click.STRING)
95 @click.option(
96 '--with-conda',
97 is_flag=True,
98 default=False,
99 help="Run API server in a BentoML managed Conda environment",
100 )
101 @click.pass_context
102 def run(ctx, api_name, archive_path=archive_path, with_conda=False):
103 if with_conda:
104 config = load_bentoml_config(archive_path)
105 metadata = config['metadata']
106 env_name = metadata['service_name'] + '_' + metadata['service_version']
107
108 yaml = YAML()
109 yaml.default_flow_style = False
110 tmpf = tempfile.NamedTemporaryFile(delete=False)
111 env_path = tmpf.name
112 yaml.dump(config['env']['conda_env'], Path(env_path))
113
114 pip_req = os.path.join(archive_path, 'requirements.txt')
115
116 subprocess.call(
117 'command -v conda >/dev/null 2>&1 || {{ echo >&2 "--with-conda '
118 'parameter requires conda but it\'s not installed."; exit 1; }} && '
119 'conda env update -n {env_name} -f {env_file} && '
120 'conda init bash && '
121 'eval "$(conda shell.bash hook)" && '
122 'conda activate {env_name} && '
123 '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo "no pip '
124 'dependencies."; }} &&'
125 'bentoml {api_name} {archive_path} {args}'.format(
126 env_name=env_name,
127 env_file=env_path,
128 archive_path=archive_path,
129 api_name=api_name,
130 args=' '.join(map(escape_shell_params, ctx.args)),
131 pip_req=pip_req,
132 ),
133 shell=True,
134 )
135 return
136
137 track_cli('run')
138
139 api = load_service_api(archive_path, api_name)
140 api.handle_cli(ctx.args)
141
142 # Example Usage: bentoml info /SAVED_ARCHIVE_PATH
143 @bentoml_cli.command(
144 help="List all APIs defined in the BentoService loaded from archive.",
145 short_help="List APIs",
146 )
147 @conditional_argument(archive_path is None, "archive-path", type=click.STRING)
148 def info(archive_path=archive_path):
149 """
150 List all APIs defined in the BentoService loaded from archive
151 """
152 track_cli('info')
153 bento_service_metadata_pb = load_bento_service_metadata(archive_path)
154 output = json.dumps(ProtoMessageToDict(bento_service_metadata_pb), indent=2)
155 _echo(output)
156
157 # Example usage: bentoml open-api-spec /SAVED_ARCHIVE_PATH
158 @bentoml_cli.command(
159 name="open-api-spec",
160 help="Display API specification JSON in Open-API format",
161 short_help="Display OpenAPI/Swagger JSON specs",
162 )
163 @conditional_argument(archive_path is None, "archive-path", type=click.STRING)
164 def open_api_spec(archive_path=archive_path):
165 track_cli('open-api-spec')
166 bento_service = load(archive_path)
167
168 _echo(json.dumps(get_docs(bento_service), indent=2))
169
170 # Example Usage: bentoml serve ./SAVED_ARCHIVE_PATH --port=PORT
171 @bentoml_cli.command(
172 help="Start REST API server hosting BentoService loaded from archive",
173 short_help="Start local rest server",
174 )
175 @conditional_argument(archive_path is None, "archive-path", type=click.STRING)
176 @click.option(
177 "--port",
178 type=click.INT,
179 default=BentoAPIServer._DEFAULT_PORT,
180 help="The port to listen on for the REST api server, default is 5000.",
181 )
182 @click.option(
183 '--with-conda',
184 is_flag=True,
185 default=False,
186 help="Run API server in a BentoML managed Conda environment",
187 )
188 def serve(port, archive_path=archive_path, with_conda=False):
189 if with_conda:
190 config = load_bentoml_config(archive_path)
191 metadata = config['metadata']
192 env_name = metadata['service_name'] + '_' + metadata['service_version']
193 pip_req = os.path.join(archive_path, 'requirements.txt')
194
195 subprocess.call(
196 'command -v conda >/dev/null 2>&1 || {{ echo >&2 "--with-conda '
197 'parameter requires conda but it\'s not installed."; exit 1; }} && '
198 'conda env update -n {env_name} -f {env_file} && '
199 'conda init bash && '
200 'eval "$(conda shell.bash hook)" && '
201 'conda activate {env_name} && '
202 '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo "no pip '
203 'dependencies."; }} &&'
204 'bentoml serve {archive_path} --port {port}'.format(
205 env_name=env_name,
206 env_file=os.path.join(archive_path, 'environment.yml'),
207 archive_path=archive_path,
208 port=port,
209 pip_req=pip_req,
210 ),
211 shell=True,
212 )
213 return
214
215 track_cli('serve')
216
217 bento_service = load(archive_path)
218 server = BentoAPIServer(bento_service, port=port)
219 server.start()
220
221 # Example Usage:
222 # bentoml serve-gunicorn ./SAVED_ARCHIVE_PATH --port=PORT --workers=WORKERS
223 @bentoml_cli.command(
224 help="Start REST API gunicorn server hosting BentoService loaded from archive",
225 short_help="Start local gunicorn server",
226 )
227 @conditional_argument(archive_path is None, "archive-path", type=click.STRING)
228 @click.option("-p", "--port", type=click.INT, default=None)
229 @click.option(
230 "-w",
231 "--workers",
232 type=click.INT,
233 default=None,
234 help="Number of workers will start for the gunicorn server",
235 )
236 @click.option("--timeout", type=click.INT, default=None)
237 @click.option(
238 '--with-conda',
239 is_flag=True,
240 default=False,
241 help="Run API server in a BentoML managed Conda environment",
242 )
243 def serve_gunicorn(
244 port, workers, timeout, archive_path=archive_path, with_conda=False
245 ):
246 if with_conda:
247 config = load_bentoml_config(archive_path)
248 metadata = config['metadata']
249 env_name = metadata['service_name'] + '_' + metadata['service_version']
250 pip_req = os.path.join(archive_path, 'requirements.txt')
251
252 subprocess.call(
253 'command -v conda >/dev/null 2>&1 || {{ echo >&2 "--with-conda '
254 'parameter requires conda but it\'s not installed."; exit 1; }} && '
255 'conda env update -n {env_name} -f {env_file} && '
256 'conda init bash && '
257 'eval "$(conda shell.bash hook)" && '
258 'conda activate {env_name} && '
259 '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo "no pip '
260 'dependencies."; }} &&'
261 'bentoml serve_gunicorn {archive_path} -p {port} -w {workers} '
262 '--timeout {timeout}'.format(
263 env_name=env_name,
264 env_file=os.path.join(archive_path, 'environment.yml'),
265 archive_path=archive_path,
266 port=port,
267 workers=workers,
268 timeout=timeout,
269 pip_req=pip_req,
270 ),
271 shell=True,
272 )
273 return
274
275 track_cli('serve_gunicorn')
276
277 gunicorn_app = GunicornBentoServer(archive_path, port, workers, timeout)
278 gunicorn_app.run()
279
280 # pylint: enable=unused-variable
281 return bentoml_cli
282
283
284 def create_bentoml_cli():
285 _cli = create_bento_service_cli()
286
287 # Commands created here aren't mean to be used from generated service archive. They
288 # are used as part of BentoML cli commands only.
289
290 deployment_sub_command = get_deployment_sub_command()
291 config_sub_command = get_configuration_sub_command()
292 _cli.add_command(config_sub_command)
293 _cli.add_command(deployment_sub_command)
294
295 return _cli
296
297
298 cli = create_bentoml_cli()
299
300 if __name__ == "__main__":
301 cli() # pylint: disable=no-value-for-parameter
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bentoml/cli/__init__.py b/bentoml/cli/__init__.py
--- a/bentoml/cli/__init__.py
+++ b/bentoml/cli/__init__.py
@@ -33,7 +33,6 @@
load_bento_service_metadata,
)
from bentoml.server import BentoAPIServer, get_docs
-from bentoml.server.gunicorn_server import GunicornBentoServer
from bentoml.cli.click_utils import BentoMLCommandGroup, conditional_argument, _echo
from bentoml.cli.deployment import get_deployment_sub_command
from bentoml.cli.config import get_configuration_sub_command
@@ -274,6 +273,7 @@
track_cli('serve_gunicorn')
+ from bentoml.server.gunicorn_server import GunicornBentoServer
gunicorn_app = GunicornBentoServer(archive_path, port, workers, timeout)
gunicorn_app.run()
| {"golden_diff": "diff --git a/bentoml/cli/__init__.py b/bentoml/cli/__init__.py\n--- a/bentoml/cli/__init__.py\n+++ b/bentoml/cli/__init__.py\n@@ -33,7 +33,6 @@\n load_bento_service_metadata,\n )\n from bentoml.server import BentoAPIServer, get_docs\n-from bentoml.server.gunicorn_server import GunicornBentoServer\n from bentoml.cli.click_utils import BentoMLCommandGroup, conditional_argument, _echo\n from bentoml.cli.deployment import get_deployment_sub_command\n from bentoml.cli.config import get_configuration_sub_command\n@@ -274,6 +273,7 @@\n \n track_cli('serve_gunicorn')\n \n+ from bentoml.server.gunicorn_server import GunicornBentoServer\n gunicorn_app = GunicornBentoServer(archive_path, port, workers, timeout)\n gunicorn_app.run()\n", "issue": "Failing to create and API on Windows\n**Describe the bug**\r\nFailing to create an api on windows , \r\n`bentoml serve {saved_path}`\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Failing with an error of the fnctl was not found \r\n2. fixed it by using this and creating a fcntl.py \r\n```\r\ndef fcntl(fd, op, arg=0):\r\n return 0\r\n\r\ndef ioctl(fd, op, arg=0, mutable_flag=True):\r\n if mutable_flag:\r\n return 0\r\n else:\r\n return \"\"\r\n\r\ndef flock(fd, op):\r\n return\r\n\r\ndef lockf(fd, operation, length=0, start=0, whence=0):\r\n return\r\n```\r\n3. Then the import pwd started failing , for which I dont have any work around till now .\r\n\r\n\r\n**Environment:**\r\n - OS: [windows 10 ]\r\n - Python/BentoML Version [Python3.6]\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# Copyright 2019 Atalaya Tech, Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\nimport os\nimport json\nimport click\nimport logging\nimport tempfile\nimport subprocess\n\nfrom ruamel.yaml import YAML\n\nfrom bentoml.archive import (\n load,\n load_service_api,\n load_bentoml_config,\n load_bento_service_metadata,\n)\nfrom bentoml.server import BentoAPIServer, get_docs\nfrom bentoml.server.gunicorn_server import GunicornBentoServer\nfrom bentoml.cli.click_utils import BentoMLCommandGroup, conditional_argument, _echo\nfrom bentoml.cli.deployment import get_deployment_sub_command\nfrom bentoml.cli.config import get_configuration_sub_command\nfrom bentoml.utils import Path, ProtoMessageToDict\nfrom bentoml.utils.log import configure_logging\nfrom bentoml.utils.usage_stats import track_cli\n\n\ndef escape_shell_params(param):\n k, v = param.split('=')\n v = re.sub(r'([^a-zA-Z0-9])', r'\\\\\\1', v)\n return '{}={}'.format(k, v)\n\n\ndef create_bento_service_cli(archive_path=None):\n # pylint: disable=unused-variable\n\n @click.group(cls=BentoMLCommandGroup)\n @click.option(\n '-q',\n '--quiet',\n is_flag=True,\n default=False,\n help=\"Hide process logs and only print command results\",\n )\n @click.option(\n '--verbose',\n is_flag=True,\n 
default=False,\n help=\"Print verbose debugging information for BentoML developer\",\n )\n @click.version_option()\n @click.pass_context\n def bentoml_cli(ctx, verbose, quiet):\n \"\"\"\n BentoML CLI tool\n \"\"\"\n ctx.verbose = verbose\n ctx.quiet = quiet\n\n if verbose:\n configure_logging(logging.DEBUG)\n elif quiet:\n configure_logging(logging.ERROR)\n else:\n configure_logging() # use default setting in local bentoml.cfg\n\n # Example Usage: bentoml API_NAME /SAVED_ARCHIVE_PATH --input=INPUT\n @bentoml_cli.command(\n default_command=True,\n default_command_usage=\"API_NAME BENTO_ARCHIVE_PATH --input=INPUT\",\n default_command_display_name=\"<API_NAME>\",\n short_help=\"Run API function\",\n help=\"Run a API defined in saved BentoArchive with cli args as input\",\n context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),\n )\n @click.argument(\"api-name\", type=click.STRING)\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda environment\",\n )\n @click.pass_context\n def run(ctx, api_name, archive_path=archive_path, with_conda=False):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n\n yaml = YAML()\n yaml.default_flow_style = False\n tmpf = tempfile.NamedTemporaryFile(delete=False)\n env_path = tmpf.name\n yaml.dump(config['env']['conda_env'], Path(env_path))\n\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && '\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml {api_name} {archive_path} {args}'.format(\n env_name=env_name,\n env_file=env_path,\n archive_path=archive_path,\n api_name=api_name,\n args=' '.join(map(escape_shell_params, ctx.args)),\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('run')\n\n api = load_service_api(archive_path, api_name)\n api.handle_cli(ctx.args)\n\n # Example Usage: bentoml info /SAVED_ARCHIVE_PATH\n @bentoml_cli.command(\n help=\"List all APIs defined in the BentoService loaded from archive.\",\n short_help=\"List APIs\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n def info(archive_path=archive_path):\n \"\"\"\n List all APIs defined in the BentoService loaded from archive\n \"\"\"\n track_cli('info')\n bento_service_metadata_pb = load_bento_service_metadata(archive_path)\n output = json.dumps(ProtoMessageToDict(bento_service_metadata_pb), indent=2)\n _echo(output)\n\n # Example usage: bentoml open-api-spec /SAVED_ARCHIVE_PATH\n @bentoml_cli.command(\n name=\"open-api-spec\",\n help=\"Display API specification JSON in Open-API format\",\n short_help=\"Display OpenAPI/Swagger JSON specs\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n def open_api_spec(archive_path=archive_path):\n track_cli('open-api-spec')\n bento_service = load(archive_path)\n\n _echo(json.dumps(get_docs(bento_service), indent=2))\n\n # Example Usage: bentoml serve ./SAVED_ARCHIVE_PATH --port=PORT\n @bentoml_cli.command(\n 
help=\"Start REST API server hosting BentoService loaded from archive\",\n short_help=\"Start local rest server\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\n \"--port\",\n type=click.INT,\n default=BentoAPIServer._DEFAULT_PORT,\n help=\"The port to listen on for the REST api server, default is 5000.\",\n )\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda environment\",\n )\n def serve(port, archive_path=archive_path, with_conda=False):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && '\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml serve {archive_path} --port {port}'.format(\n env_name=env_name,\n env_file=os.path.join(archive_path, 'environment.yml'),\n archive_path=archive_path,\n port=port,\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('serve')\n\n bento_service = load(archive_path)\n server = BentoAPIServer(bento_service, port=port)\n server.start()\n\n # Example Usage:\n # bentoml serve-gunicorn ./SAVED_ARCHIVE_PATH --port=PORT --workers=WORKERS\n @bentoml_cli.command(\n help=\"Start REST API gunicorn server hosting BentoService loaded from archive\",\n short_help=\"Start local gunicorn server\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\"-p\", \"--port\", type=click.INT, default=None)\n @click.option(\n \"-w\",\n \"--workers\",\n type=click.INT,\n default=None,\n help=\"Number of workers will start for the gunicorn server\",\n )\n @click.option(\"--timeout\", type=click.INT, default=None)\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda environment\",\n )\n def serve_gunicorn(\n port, workers, timeout, archive_path=archive_path, with_conda=False\n ):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && '\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml serve_gunicorn {archive_path} -p {port} -w {workers} '\n '--timeout {timeout}'.format(\n env_name=env_name,\n env_file=os.path.join(archive_path, 'environment.yml'),\n archive_path=archive_path,\n port=port,\n workers=workers,\n timeout=timeout,\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('serve_gunicorn')\n\n gunicorn_app = GunicornBentoServer(archive_path, port, workers, timeout)\n gunicorn_app.run()\n\n # pylint: enable=unused-variable\n return 
bentoml_cli\n\n\ndef create_bentoml_cli():\n _cli = create_bento_service_cli()\n\n # Commands created here aren't mean to be used from generated service archive. They\n # are used as part of BentoML cli commands only.\n\n deployment_sub_command = get_deployment_sub_command()\n config_sub_command = get_configuration_sub_command()\n _cli.add_command(config_sub_command)\n _cli.add_command(deployment_sub_command)\n\n return _cli\n\n\ncli = create_bentoml_cli()\n\nif __name__ == \"__main__\":\n cli() # pylint: disable=no-value-for-parameter\n", "path": "bentoml/cli/__init__.py"}], "after_files": [{"content": "# Copyright 2019 Atalaya Tech, Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\nimport os\nimport json\nimport click\nimport logging\nimport tempfile\nimport subprocess\n\nfrom ruamel.yaml import YAML\n\nfrom bentoml.archive import (\n load,\n load_service_api,\n load_bentoml_config,\n load_bento_service_metadata,\n)\nfrom bentoml.server import BentoAPIServer, get_docs\nfrom bentoml.cli.click_utils import BentoMLCommandGroup, conditional_argument, _echo\nfrom bentoml.cli.deployment import get_deployment_sub_command\nfrom bentoml.cli.config import get_configuration_sub_command\nfrom bentoml.utils import Path, ProtoMessageToDict\nfrom bentoml.utils.log import configure_logging\nfrom bentoml.utils.usage_stats import track_cli\n\n\ndef escape_shell_params(param):\n k, v = param.split('=')\n v = re.sub(r'([^a-zA-Z0-9])', r'\\\\\\1', v)\n return '{}={}'.format(k, v)\n\n\ndef create_bento_service_cli(archive_path=None):\n # pylint: disable=unused-variable\n\n @click.group(cls=BentoMLCommandGroup)\n @click.option(\n '-q',\n '--quiet',\n is_flag=True,\n default=False,\n help=\"Hide process logs and only print command results\",\n )\n @click.option(\n '--verbose',\n is_flag=True,\n default=False,\n help=\"Print verbose debugging information for BentoML developer\",\n )\n @click.version_option()\n @click.pass_context\n def bentoml_cli(ctx, verbose, quiet):\n \"\"\"\n BentoML CLI tool\n \"\"\"\n ctx.verbose = verbose\n ctx.quiet = quiet\n\n if verbose:\n configure_logging(logging.DEBUG)\n elif quiet:\n configure_logging(logging.ERROR)\n else:\n configure_logging() # use default setting in local bentoml.cfg\n\n # Example Usage: bentoml API_NAME /SAVED_ARCHIVE_PATH --input=INPUT\n @bentoml_cli.command(\n default_command=True,\n default_command_usage=\"API_NAME BENTO_ARCHIVE_PATH --input=INPUT\",\n default_command_display_name=\"<API_NAME>\",\n short_help=\"Run API function\",\n help=\"Run a API defined in saved BentoArchive with cli args as input\",\n context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),\n )\n @click.argument(\"api-name\", type=click.STRING)\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda 
environment\",\n )\n @click.pass_context\n def run(ctx, api_name, archive_path=archive_path, with_conda=False):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n\n yaml = YAML()\n yaml.default_flow_style = False\n tmpf = tempfile.NamedTemporaryFile(delete=False)\n env_path = tmpf.name\n yaml.dump(config['env']['conda_env'], Path(env_path))\n\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && '\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml {api_name} {archive_path} {args}'.format(\n env_name=env_name,\n env_file=env_path,\n archive_path=archive_path,\n api_name=api_name,\n args=' '.join(map(escape_shell_params, ctx.args)),\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('run')\n\n api = load_service_api(archive_path, api_name)\n api.handle_cli(ctx.args)\n\n # Example Usage: bentoml info /SAVED_ARCHIVE_PATH\n @bentoml_cli.command(\n help=\"List all APIs defined in the BentoService loaded from archive.\",\n short_help=\"List APIs\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n def info(archive_path=archive_path):\n \"\"\"\n List all APIs defined in the BentoService loaded from archive\n \"\"\"\n track_cli('info')\n bento_service_metadata_pb = load_bento_service_metadata(archive_path)\n output = json.dumps(ProtoMessageToDict(bento_service_metadata_pb), indent=2)\n _echo(output)\n\n # Example usage: bentoml open-api-spec /SAVED_ARCHIVE_PATH\n @bentoml_cli.command(\n name=\"open-api-spec\",\n help=\"Display API specification JSON in Open-API format\",\n short_help=\"Display OpenAPI/Swagger JSON specs\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n def open_api_spec(archive_path=archive_path):\n track_cli('open-api-spec')\n bento_service = load(archive_path)\n\n _echo(json.dumps(get_docs(bento_service), indent=2))\n\n # Example Usage: bentoml serve ./SAVED_ARCHIVE_PATH --port=PORT\n @bentoml_cli.command(\n help=\"Start REST API server hosting BentoService loaded from archive\",\n short_help=\"Start local rest server\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\n \"--port\",\n type=click.INT,\n default=BentoAPIServer._DEFAULT_PORT,\n help=\"The port to listen on for the REST api server, default is 5000.\",\n )\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda environment\",\n )\n def serve(port, archive_path=archive_path, with_conda=False):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && 
'\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml serve {archive_path} --port {port}'.format(\n env_name=env_name,\n env_file=os.path.join(archive_path, 'environment.yml'),\n archive_path=archive_path,\n port=port,\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('serve')\n\n bento_service = load(archive_path)\n server = BentoAPIServer(bento_service, port=port)\n server.start()\n\n # Example Usage:\n # bentoml serve-gunicorn ./SAVED_ARCHIVE_PATH --port=PORT --workers=WORKERS\n @bentoml_cli.command(\n help=\"Start REST API gunicorn server hosting BentoService loaded from archive\",\n short_help=\"Start local gunicorn server\",\n )\n @conditional_argument(archive_path is None, \"archive-path\", type=click.STRING)\n @click.option(\"-p\", \"--port\", type=click.INT, default=None)\n @click.option(\n \"-w\",\n \"--workers\",\n type=click.INT,\n default=None,\n help=\"Number of workers will start for the gunicorn server\",\n )\n @click.option(\"--timeout\", type=click.INT, default=None)\n @click.option(\n '--with-conda',\n is_flag=True,\n default=False,\n help=\"Run API server in a BentoML managed Conda environment\",\n )\n def serve_gunicorn(\n port, workers, timeout, archive_path=archive_path, with_conda=False\n ):\n if with_conda:\n config = load_bentoml_config(archive_path)\n metadata = config['metadata']\n env_name = metadata['service_name'] + '_' + metadata['service_version']\n pip_req = os.path.join(archive_path, 'requirements.txt')\n\n subprocess.call(\n 'command -v conda >/dev/null 2>&1 || {{ echo >&2 \"--with-conda '\n 'parameter requires conda but it\\'s not installed.\"; exit 1; }} && '\n 'conda env update -n {env_name} -f {env_file} && '\n 'conda init bash && '\n 'eval \"$(conda shell.bash hook)\" && '\n 'conda activate {env_name} && '\n '{{ [ -f {pip_req} ] && pip install -r {pip_req} || echo \"no pip '\n 'dependencies.\"; }} &&'\n 'bentoml serve_gunicorn {archive_path} -p {port} -w {workers} '\n '--timeout {timeout}'.format(\n env_name=env_name,\n env_file=os.path.join(archive_path, 'environment.yml'),\n archive_path=archive_path,\n port=port,\n workers=workers,\n timeout=timeout,\n pip_req=pip_req,\n ),\n shell=True,\n )\n return\n\n track_cli('serve_gunicorn')\n\n from bentoml.server.gunicorn_server import GunicornBentoServer\n gunicorn_app = GunicornBentoServer(archive_path, port, workers, timeout)\n gunicorn_app.run()\n\n # pylint: enable=unused-variable\n return bentoml_cli\n\n\ndef create_bentoml_cli():\n _cli = create_bento_service_cli()\n\n # Commands created here aren't mean to be used from generated service archive. They\n # are used as part of BentoML cli commands only.\n\n deployment_sub_command = get_deployment_sub_command()\n config_sub_command = get_configuration_sub_command()\n _cli.add_command(config_sub_command)\n _cli.add_command(deployment_sub_command)\n\n return _cli\n\n\ncli = create_bentoml_cli()\n\nif __name__ == \"__main__\":\n cli() # pylint: disable=no-value-for-parameter\n", "path": "bentoml/cli/__init__.py"}]} | 3,823 | 211 |
gh_patches_debug_2666 | rasdani/github-patches | git_diff | netbox-community__netbox-14935 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in DataSourceBulkEditForm
### Deployment Type
Self-hosted
### NetBox Version
v3.7.1
### Python Version
3.8
### Steps to Reproduce
"lavel" is defined as "Enforce unique space", but I think the correct definition is "Enabled".
https://github.com/netbox-community/netbox/blob/487f1ccfde26ef3c1f8a28089826acc0cd6fadb2/netbox/core/forms/bulk_edit.py#L21-L25
- Add a new data source

- Editing 1 Data Sources

### Expected Behavior
Enabled
### Observed Behavior
Enforce unique space
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/core/forms/bulk_edit.py`
Content:
```
1 from django import forms
2 from django.utils.translation import gettext_lazy as _
3
4 from core.models import *
5 from netbox.forms import NetBoxModelBulkEditForm
6 from netbox.utils import get_data_backend_choices
7 from utilities.forms.fields import CommentField
8 from utilities.forms.widgets import BulkEditNullBooleanSelect
9
10 __all__ = (
11 'DataSourceBulkEditForm',
12 )
13
14
15 class DataSourceBulkEditForm(NetBoxModelBulkEditForm):
16 type = forms.ChoiceField(
17 label=_('Type'),
18 choices=get_data_backend_choices,
19 required=False
20 )
21 enabled = forms.NullBooleanField(
22 required=False,
23 widget=BulkEditNullBooleanSelect(),
24 label=_('Enforce unique space')
25 )
26 description = forms.CharField(
27 label=_('Description'),
28 max_length=200,
29 required=False
30 )
31 comments = CommentField()
32 parameters = forms.JSONField(
33 label=_('Parameters'),
34 required=False
35 )
36 ignore_rules = forms.CharField(
37 label=_('Ignore rules'),
38 required=False,
39 widget=forms.Textarea()
40 )
41
42 model = DataSource
43 fieldsets = (
44 (None, ('type', 'enabled', 'description', 'comments', 'parameters', 'ignore_rules')),
45 )
46 nullable_fields = (
47 'description', 'description', 'parameters', 'comments', 'parameters', 'ignore_rules',
48 )
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/core/forms/bulk_edit.py b/netbox/core/forms/bulk_edit.py
--- a/netbox/core/forms/bulk_edit.py
+++ b/netbox/core/forms/bulk_edit.py
@@ -21,7 +21,7 @@
enabled = forms.NullBooleanField(
required=False,
widget=BulkEditNullBooleanSelect(),
- label=_('Enforce unique space')
+ label=_('Enabled')
)
description = forms.CharField(
label=_('Description'),
| {"golden_diff": "diff --git a/netbox/core/forms/bulk_edit.py b/netbox/core/forms/bulk_edit.py\n--- a/netbox/core/forms/bulk_edit.py\n+++ b/netbox/core/forms/bulk_edit.py\n@@ -21,7 +21,7 @@\n enabled = forms.NullBooleanField(\n required=False,\n widget=BulkEditNullBooleanSelect(),\n- label=_('Enforce unique space')\n+ label=_('Enabled')\n )\n description = forms.CharField(\n label=_('Description'),\n", "issue": "Typo in DataSourceBulkEditForm\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.1\n\n### Python Version\n\n3.8\n\n### Steps to Reproduce\n\n\"lavel\" is defined as \"Enforce unique space\", but I think the correct definition is \"Enabled\".\r\n\r\nhttps://github.com/netbox-community/netbox/blob/487f1ccfde26ef3c1f8a28089826acc0cd6fadb2/netbox/core/forms/bulk_edit.py#L21-L25\r\n\r\n- Add a new data source\r\n\r\n\r\n- Editing 1 Data Sources\r\n\r\n\n\n### Expected Behavior\n\nEnabled\n\n### Observed Behavior\n\nEnforce unique space\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import gettext_lazy as _\n\nfrom core.models import *\nfrom netbox.forms import NetBoxModelBulkEditForm\nfrom netbox.utils import get_data_backend_choices\nfrom utilities.forms.fields import CommentField\nfrom utilities.forms.widgets import BulkEditNullBooleanSelect\n\n__all__ = (\n 'DataSourceBulkEditForm',\n)\n\n\nclass DataSourceBulkEditForm(NetBoxModelBulkEditForm):\n type = forms.ChoiceField(\n label=_('Type'),\n choices=get_data_backend_choices,\n required=False\n )\n enabled = forms.NullBooleanField(\n required=False,\n widget=BulkEditNullBooleanSelect(),\n label=_('Enforce unique space')\n )\n description = forms.CharField(\n label=_('Description'),\n max_length=200,\n required=False\n )\n comments = CommentField()\n parameters = forms.JSONField(\n label=_('Parameters'),\n required=False\n )\n ignore_rules = forms.CharField(\n label=_('Ignore rules'),\n required=False,\n widget=forms.Textarea()\n )\n\n model = DataSource\n fieldsets = (\n (None, ('type', 'enabled', 'description', 'comments', 'parameters', 'ignore_rules')),\n )\n nullable_fields = (\n 'description', 'description', 'parameters', 'comments', 'parameters', 'ignore_rules',\n )\n", "path": "netbox/core/forms/bulk_edit.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils.translation import gettext_lazy as _\n\nfrom core.models import *\nfrom netbox.forms import NetBoxModelBulkEditForm\nfrom netbox.utils import get_data_backend_choices\nfrom utilities.forms.fields import CommentField\nfrom utilities.forms.widgets import BulkEditNullBooleanSelect\n\n__all__ = (\n 'DataSourceBulkEditForm',\n)\n\n\nclass DataSourceBulkEditForm(NetBoxModelBulkEditForm):\n type = forms.ChoiceField(\n label=_('Type'),\n choices=get_data_backend_choices,\n required=False\n )\n enabled = forms.NullBooleanField(\n required=False,\n widget=BulkEditNullBooleanSelect(),\n label=_('Enabled')\n )\n description = forms.CharField(\n label=_('Description'),\n max_length=200,\n required=False\n )\n comments = CommentField()\n parameters = forms.JSONField(\n label=_('Parameters'),\n required=False\n )\n ignore_rules = forms.CharField(\n label=_('Ignore rules'),\n required=False,\n widget=forms.Textarea()\n )\n\n model = DataSource\n fieldsets = (\n (None, ('type', 'enabled', 'description', 'comments', 'parameters', 'ignore_rules')),\n )\n nullable_fields = (\n 'description', 'description', 'parameters', 'comments', 'parameters', 'ignore_rules',\n )\n", "path": 
"netbox/core/forms/bulk_edit.py"}]} | 922 | 105 |
gh_patches_debug_32942 | rasdani/github-patches | git_diff | genialis__resolwe-316 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data.create returns bad message when missing ADD permission on collection
If a user tries to run a process inside a collection, but doesn't have `ADD` permission, the API returns `{"detail":"Not found."}`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `resolwe/flow/views/mixins.py`
Content:
```
1 """Mixins used in Resolwe Viewsets."""
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from django.db import IntegrityError, transaction
5
6 from guardian.shortcuts import assign_perm
7 from rest_framework import mixins, status
8 from rest_framework.decorators import list_route
9 from rest_framework.response import Response
10
11 from resolwe.flow.models import DescriptorSchema
12 from resolwe.permissions.shortcuts import get_objects_for_user
13
14
15 class ResolweCreateModelMixin(mixins.CreateModelMixin):
16 """Mixin to support creating new `Resolwe` models.
17
18 Extends `django_rest_framework`'s class `CreateModelMixin` with:
19
20 * append user's id from request to posted data as `contributor`
21 key
22 * catch `IntegrityError`s, so we can return HTTP status 409
23 instead of raising error
24
25 """
26
27 def create(self, request, *args, **kwargs):
28 """Create a resource."""
29 ds_slug = request.data.get('descriptor_schema', None)
30 if ds_slug:
31 ds_query = DescriptorSchema.objects.filter(slug=ds_slug)
32 ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)
33 try:
34 request.data['descriptor_schema'] = ds_query.latest().pk
35 except DescriptorSchema.DoesNotExist:
36 return Response(
37 {'descriptor_schema': [
38 'Invalid descriptor_schema slug "{}" - object does not exist.'.format(ds_slug)]},
39 status=status.HTTP_400_BAD_REQUEST)
40
41 request.data['contributor'] = request.user.pk
42 try:
43 return super(ResolweCreateModelMixin, self).create(request, *args, **kwargs)
44
45 except IntegrityError as ex:
46 return Response({u'error': str(ex)}, status=status.HTTP_409_CONFLICT)
47
48 def perform_create(self, serializer):
49 """Create a resource."""
50 with transaction.atomic():
51 instance = serializer.save()
52
53 # Assign all permissions to the object contributor.
54 for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access
55 assign_perm(permission, instance.contributor, instance)
56
57
58 class ResolweUpdateModelMixin(mixins.UpdateModelMixin):
59 """Mixin to support updating `Resolwe` models.
60
61 Extends `django_rest_framework`'s class `UpdateModelMixin` with:
62
63 * translate `descriptor_schema` field from DescriptorSchema's
64 slug to its id and return 400 error Response if it doesn't
65 exists
66
67 """
68
69 def update(self, request, *args, **kwargs):
70 """Update a resource."""
71 ds_slug = request.data.get('descriptor_schema', None)
72 if ds_slug:
73 ds_query = DescriptorSchema.objects.filter(slug=ds_slug)
74 ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)
75 try:
76 request.data['descriptor_schema'] = ds_query.latest().pk
77 except DescriptorSchema.DoesNotExist:
78 return Response(
79 {'descriptor_schema': [
80 'Invalid descriptor_schema slug "{}" - object does not exist.'.format(ds_slug)]},
81 status=status.HTTP_400_BAD_REQUEST)
82
83 return super(ResolweUpdateModelMixin, self).update(request, *args, **kwargs)
84
85
86 class ResolweCheckSlugMixin(object):
87 """Slug validation."""
88
89 @list_route(methods=[u'get'])
90 def slug_exists(self, request):
91 """Check if given url slug exists.
92
93 Check if slug given in query parameter ``name`` exists. Return
94 ``True`` if slug already exists and ``False`` otherwise.
95
96 """
97 if not request.user.is_authenticated():
98 return Response(status=status.HTTP_401_UNAUTHORIZED)
99
100 if 'name' not in request.query_params:
101 return Response({'error': 'Query parameter `name` must be given.'},
102 status=status.HTTP_400_BAD_REQUEST)
103
104 queryset = self.get_queryset()
105 slug_name = request.query_params['name']
106 return Response(queryset.filter(slug__iexact=slug_name).exists())
107
```
Path: `resolwe/flow/views/data.py`
Content:
```
1 """Data viewset."""
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from django.db import transaction
5
6 from guardian.shortcuts import assign_perm
7 from rest_framework import exceptions, mixins, status, viewsets
8 from rest_framework.decorators import list_route
9 from rest_framework.response import Response
10
11 from resolwe.flow.filters import DataFilter
12 from resolwe.flow.managers import manager
13 from resolwe.flow.models import Collection, Data, Process
14 from resolwe.flow.serializers import DataSerializer
15 from resolwe.flow.utils import dict_dot, get_data_checksum, iterate_schema
16 from resolwe.permissions.loader import get_permissions_class
17 from resolwe.permissions.mixins import ResolwePermissionsMixin
18 from resolwe.permissions.shortcuts import get_objects_for_user
19
20 from .mixins import ResolweCheckSlugMixin, ResolweCreateModelMixin, ResolweUpdateModelMixin
21
22
23 class DataViewSet(ResolweCreateModelMixin,
24 mixins.RetrieveModelMixin,
25 ResolweUpdateModelMixin,
26 mixins.DestroyModelMixin,
27 mixins.ListModelMixin,
28 ResolwePermissionsMixin,
29 ResolweCheckSlugMixin,
30 viewsets.GenericViewSet):
31 """API view for :class:`Data` objects."""
32
33 queryset = Data.objects.all().prefetch_related('process', 'descriptor_schema', 'contributor')
34 serializer_class = DataSerializer
35 permission_classes = (get_permissions_class(),)
36 filter_class = DataFilter
37 ordering_fields = ('id', 'created', 'modified', 'started', 'finished', 'name')
38 ordering = ('id',)
39
40 def create(self, request, *args, **kwargs):
41 """Create a resource."""
42 collections = request.data.get('collections', [])
43
44 # check that user has permissions on all collections that Data
45 # object will be added to
46 for collection_id in collections:
47 try:
48 collection = Collection.objects.get(pk=collection_id)
49 except Collection.DoesNotExist:
50 return Response({'collections': ['Invalid pk "{}" - object does not exist.'.format(collection_id)]},
51 status=status.HTTP_400_BAD_REQUEST)
52
53 if not request.user.has_perm('add_collection', obj=collection):
54 if request.user.is_authenticated():
55 raise exceptions.PermissionDenied
56 else:
57 raise exceptions.NotFound
58
59 # translate processe's slug to id
60 process_slug = request.data.get('process', None)
61 process_query = Process.objects.filter(slug=process_slug)
62 process_query = get_objects_for_user(request.user, 'view_process', process_query)
63 try:
64 process = process_query.latest()
65 except Process.DoesNotExist:
66 return Response({'process': ['Invalid process slug "{}" - object does not exist.'.format(process_slug)]},
67 status=status.HTTP_400_BAD_REQUEST)
68 request.data['process'] = process.pk
69
70 # perform "get_or_create" if requested - return existing object
71 # if found
72 if kwargs.pop('get_or_create', False):
73 process_input = request.data.get('input', {})
74
75 # use default values if they are not given
76 for field_schema, fields, path in iterate_schema(process_input, process.input_schema):
77 if 'default' in field_schema and field_schema['name'] not in fields:
78 dict_dot(process_input, path, field_schema['default'])
79
80 checksum = get_data_checksum(process_input, process.slug, process.version)
81 data_qs = Data.objects.filter(
82 checksum=checksum,
83 process__persistence__in=[Process.PERSISTENCE_CACHED, Process.PERSISTENCE_TEMP],
84 )
85 data_qs = get_objects_for_user(request.user, 'view_data', data_qs)
86 if data_qs.exists():
87 data = data_qs.order_by('created').last()
88 serializer = self.get_serializer(data)
89 return Response(serializer.data)
90
91 # create the objects
92 resp = super(DataViewSet, self).create(request, *args, **kwargs)
93
94 # run manager
95 manager.communicate()
96
97 return resp
98
99 @list_route(methods=[u'post'])
100 def get_or_create(self, request, *args, **kwargs):
101 """Get ``Data`` object if similar already exists, otherwise create it."""
102 kwargs['get_or_create'] = True
103 return self.create(request, *args, **kwargs)
104
105 def perform_create(self, serializer):
106 """Create a resource."""
107 with transaction.atomic():
108 instance = serializer.save()
109
110 # Assign all permissions to the object contributor.
111 for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access
112 assign_perm(permission, instance.contributor, instance)
113
114 # Assign data object to all specified collections.
115 collections = self.request.data.get('collections', [])
116 for c in collections:
117 collection = Collection.objects.get(pk=c)
118 collection.data.add(instance)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/resolwe/flow/views/data.py b/resolwe/flow/views/data.py
--- a/resolwe/flow/views/data.py
+++ b/resolwe/flow/views/data.py
@@ -51,10 +51,14 @@
status=status.HTTP_400_BAD_REQUEST)
if not request.user.has_perm('add_collection', obj=collection):
- if request.user.is_authenticated():
- raise exceptions.PermissionDenied
+ if request.user.has_perm('view_collection', obj=collection):
+ raise exceptions.PermissionDenied(
+ "You don't have `ADD` permission on collection (id: {}).".format(collection_id)
+ )
else:
- raise exceptions.NotFound
+ raise exceptions.NotFound(
+ "Collection not found (id: {}).".format(collection_id)
+ )
# translate processe's slug to id
process_slug = request.data.get('process', None)
diff --git a/resolwe/flow/views/mixins.py b/resolwe/flow/views/mixins.py
--- a/resolwe/flow/views/mixins.py
+++ b/resolwe/flow/views/mixins.py
@@ -4,6 +4,7 @@
from django.db import IntegrityError, transaction
from guardian.shortcuts import assign_perm
+from guardian.utils import get_anonymous_user
from rest_framework import mixins, status
from rest_framework.decorators import list_route
from rest_framework.response import Response
@@ -38,7 +39,11 @@
'Invalid descriptor_schema slug "{}" - object does not exist.'.format(ds_slug)]},
status=status.HTTP_400_BAD_REQUEST)
- request.data['contributor'] = request.user.pk
+ if request.user.is_anonymous():
+ request.data['contributor'] = get_anonymous_user().pk
+ else:
+ request.data['contributor'] = request.user.pk
+
try:
return super(ResolweCreateModelMixin, self).create(request, *args, **kwargs)
| {"golden_diff": "diff --git a/resolwe/flow/views/data.py b/resolwe/flow/views/data.py\n--- a/resolwe/flow/views/data.py\n+++ b/resolwe/flow/views/data.py\n@@ -51,10 +51,14 @@\n status=status.HTTP_400_BAD_REQUEST)\n \n if not request.user.has_perm('add_collection', obj=collection):\n- if request.user.is_authenticated():\n- raise exceptions.PermissionDenied\n+ if request.user.has_perm('view_collection', obj=collection):\n+ raise exceptions.PermissionDenied(\n+ \"You don't have `ADD` permission on collection (id: {}).\".format(collection_id)\n+ )\n else:\n- raise exceptions.NotFound\n+ raise exceptions.NotFound(\n+ \"Collection not found (id: {}).\".format(collection_id)\n+ )\n \n # translate processe's slug to id\n process_slug = request.data.get('process', None)\ndiff --git a/resolwe/flow/views/mixins.py b/resolwe/flow/views/mixins.py\n--- a/resolwe/flow/views/mixins.py\n+++ b/resolwe/flow/views/mixins.py\n@@ -4,6 +4,7 @@\n from django.db import IntegrityError, transaction\n \n from guardian.shortcuts import assign_perm\n+from guardian.utils import get_anonymous_user\n from rest_framework import mixins, status\n from rest_framework.decorators import list_route\n from rest_framework.response import Response\n@@ -38,7 +39,11 @@\n 'Invalid descriptor_schema slug \"{}\" - object does not exist.'.format(ds_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n \n- request.data['contributor'] = request.user.pk\n+ if request.user.is_anonymous():\n+ request.data['contributor'] = get_anonymous_user().pk\n+ else:\n+ request.data['contributor'] = request.user.pk\n+\n try:\n return super(ResolweCreateModelMixin, self).create(request, *args, **kwargs)\n", "issue": "Data.create returns bad message when missing ADD permission on collection\nIf a user tries to run a process inside a collection, but doesn't have `ADD` permission, the api returns `{\"detail\":\"Not found.\"}`\n", "before_files": [{"content": "\"\"\"Mixins used in Resolwe Viewsets.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom django.db import IntegrityError, transaction\n\nfrom guardian.shortcuts import assign_perm\nfrom rest_framework import mixins, status\nfrom rest_framework.decorators import list_route\nfrom rest_framework.response import Response\n\nfrom resolwe.flow.models import DescriptorSchema\nfrom resolwe.permissions.shortcuts import get_objects_for_user\n\n\nclass ResolweCreateModelMixin(mixins.CreateModelMixin):\n \"\"\"Mixin to support creating new `Resolwe` models.\n\n Extends `django_rest_framework`'s class `CreateModelMixin` with:\n\n * append user's id from request to posted data as `contributor`\n key\n * catch `IntegrityError`s, so we can return HTTP status 409\n instead of raising error\n\n \"\"\"\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create a resource.\"\"\"\n ds_slug = request.data.get('descriptor_schema', None)\n if ds_slug:\n ds_query = DescriptorSchema.objects.filter(slug=ds_slug)\n ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)\n try:\n request.data['descriptor_schema'] = ds_query.latest().pk\n except DescriptorSchema.DoesNotExist:\n return Response(\n {'descriptor_schema': [\n 'Invalid descriptor_schema slug \"{}\" - object does not exist.'.format(ds_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n request.data['contributor'] = request.user.pk\n try:\n return super(ResolweCreateModelMixin, self).create(request, *args, **kwargs)\n\n except IntegrityError as ex:\n return Response({u'error': str(ex)}, 
status=status.HTTP_409_CONFLICT)\n\n def perform_create(self, serializer):\n \"\"\"Create a resource.\"\"\"\n with transaction.atomic():\n instance = serializer.save()\n\n # Assign all permissions to the object contributor.\n for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access\n assign_perm(permission, instance.contributor, instance)\n\n\nclass ResolweUpdateModelMixin(mixins.UpdateModelMixin):\n \"\"\"Mixin to support updating `Resolwe` models.\n\n Extends `django_rest_framework`'s class `UpdateModelMixin` with:\n\n * translate `descriptor_schema` field from DescriptorSchema's\n slug to its id and return 400 error Response if it doesn't\n exists\n\n \"\"\"\n\n def update(self, request, *args, **kwargs):\n \"\"\"Update a resource.\"\"\"\n ds_slug = request.data.get('descriptor_schema', None)\n if ds_slug:\n ds_query = DescriptorSchema.objects.filter(slug=ds_slug)\n ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)\n try:\n request.data['descriptor_schema'] = ds_query.latest().pk\n except DescriptorSchema.DoesNotExist:\n return Response(\n {'descriptor_schema': [\n 'Invalid descriptor_schema slug \"{}\" - object does not exist.'.format(ds_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n return super(ResolweUpdateModelMixin, self).update(request, *args, **kwargs)\n\n\nclass ResolweCheckSlugMixin(object):\n \"\"\"Slug validation.\"\"\"\n\n @list_route(methods=[u'get'])\n def slug_exists(self, request):\n \"\"\"Check if given url slug exists.\n\n Check if slug given in query parameter ``name`` exists. Return\n ``True`` if slug already exists and ``False`` otherwise.\n\n \"\"\"\n if not request.user.is_authenticated():\n return Response(status=status.HTTP_401_UNAUTHORIZED)\n\n if 'name' not in request.query_params:\n return Response({'error': 'Query parameter `name` must be given.'},\n status=status.HTTP_400_BAD_REQUEST)\n\n queryset = self.get_queryset()\n slug_name = request.query_params['name']\n return Response(queryset.filter(slug__iexact=slug_name).exists())\n", "path": "resolwe/flow/views/mixins.py"}, {"content": "\"\"\"Data viewset.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom django.db import transaction\n\nfrom guardian.shortcuts import assign_perm\nfrom rest_framework import exceptions, mixins, status, viewsets\nfrom rest_framework.decorators import list_route\nfrom rest_framework.response import Response\n\nfrom resolwe.flow.filters import DataFilter\nfrom resolwe.flow.managers import manager\nfrom resolwe.flow.models import Collection, Data, Process\nfrom resolwe.flow.serializers import DataSerializer\nfrom resolwe.flow.utils import dict_dot, get_data_checksum, iterate_schema\nfrom resolwe.permissions.loader import get_permissions_class\nfrom resolwe.permissions.mixins import ResolwePermissionsMixin\nfrom resolwe.permissions.shortcuts import get_objects_for_user\n\nfrom .mixins import ResolweCheckSlugMixin, ResolweCreateModelMixin, ResolweUpdateModelMixin\n\n\nclass DataViewSet(ResolweCreateModelMixin,\n mixins.RetrieveModelMixin,\n ResolweUpdateModelMixin,\n mixins.DestroyModelMixin,\n mixins.ListModelMixin,\n ResolwePermissionsMixin,\n ResolweCheckSlugMixin,\n viewsets.GenericViewSet):\n \"\"\"API view for :class:`Data` objects.\"\"\"\n\n queryset = Data.objects.all().prefetch_related('process', 'descriptor_schema', 'contributor')\n serializer_class = DataSerializer\n permission_classes = (get_permissions_class(),)\n filter_class = DataFilter\n 
ordering_fields = ('id', 'created', 'modified', 'started', 'finished', 'name')\n ordering = ('id',)\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create a resource.\"\"\"\n collections = request.data.get('collections', [])\n\n # check that user has permissions on all collections that Data\n # object will be added to\n for collection_id in collections:\n try:\n collection = Collection.objects.get(pk=collection_id)\n except Collection.DoesNotExist:\n return Response({'collections': ['Invalid pk \"{}\" - object does not exist.'.format(collection_id)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n if not request.user.has_perm('add_collection', obj=collection):\n if request.user.is_authenticated():\n raise exceptions.PermissionDenied\n else:\n raise exceptions.NotFound\n\n # translate processe's slug to id\n process_slug = request.data.get('process', None)\n process_query = Process.objects.filter(slug=process_slug)\n process_query = get_objects_for_user(request.user, 'view_process', process_query)\n try:\n process = process_query.latest()\n except Process.DoesNotExist:\n return Response({'process': ['Invalid process slug \"{}\" - object does not exist.'.format(process_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n request.data['process'] = process.pk\n\n # perform \"get_or_create\" if requested - return existing object\n # if found\n if kwargs.pop('get_or_create', False):\n process_input = request.data.get('input', {})\n\n # use default values if they are not given\n for field_schema, fields, path in iterate_schema(process_input, process.input_schema):\n if 'default' in field_schema and field_schema['name'] not in fields:\n dict_dot(process_input, path, field_schema['default'])\n\n checksum = get_data_checksum(process_input, process.slug, process.version)\n data_qs = Data.objects.filter(\n checksum=checksum,\n process__persistence__in=[Process.PERSISTENCE_CACHED, Process.PERSISTENCE_TEMP],\n )\n data_qs = get_objects_for_user(request.user, 'view_data', data_qs)\n if data_qs.exists():\n data = data_qs.order_by('created').last()\n serializer = self.get_serializer(data)\n return Response(serializer.data)\n\n # create the objects\n resp = super(DataViewSet, self).create(request, *args, **kwargs)\n\n # run manager\n manager.communicate()\n\n return resp\n\n @list_route(methods=[u'post'])\n def get_or_create(self, request, *args, **kwargs):\n \"\"\"Get ``Data`` object if similar already exists, otherwise create it.\"\"\"\n kwargs['get_or_create'] = True\n return self.create(request, *args, **kwargs)\n\n def perform_create(self, serializer):\n \"\"\"Create a resource.\"\"\"\n with transaction.atomic():\n instance = serializer.save()\n\n # Assign all permissions to the object contributor.\n for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access\n assign_perm(permission, instance.contributor, instance)\n\n # Assign data object to all specified collections.\n collections = self.request.data.get('collections', [])\n for c in collections:\n collection = Collection.objects.get(pk=c)\n collection.data.add(instance)\n", "path": "resolwe/flow/views/data.py"}], "after_files": [{"content": "\"\"\"Mixins used in Resolwe Viewsets.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom django.db import IntegrityError, transaction\n\nfrom guardian.shortcuts import assign_perm\nfrom guardian.utils import get_anonymous_user\nfrom rest_framework import mixins, status\nfrom rest_framework.decorators import list_route\nfrom 
rest_framework.response import Response\n\nfrom resolwe.flow.models import DescriptorSchema\nfrom resolwe.permissions.shortcuts import get_objects_for_user\n\n\nclass ResolweCreateModelMixin(mixins.CreateModelMixin):\n \"\"\"Mixin to support creating new `Resolwe` models.\n\n Extends `django_rest_framework`'s class `CreateModelMixin` with:\n\n * append user's id from request to posted data as `contributor`\n key\n * catch `IntegrityError`s, so we can return HTTP status 409\n instead of raising error\n\n \"\"\"\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create a resource.\"\"\"\n ds_slug = request.data.get('descriptor_schema', None)\n if ds_slug:\n ds_query = DescriptorSchema.objects.filter(slug=ds_slug)\n ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)\n try:\n request.data['descriptor_schema'] = ds_query.latest().pk\n except DescriptorSchema.DoesNotExist:\n return Response(\n {'descriptor_schema': [\n 'Invalid descriptor_schema slug \"{}\" - object does not exist.'.format(ds_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n if request.user.is_anonymous():\n request.data['contributor'] = get_anonymous_user().pk\n else:\n request.data['contributor'] = request.user.pk\n\n try:\n return super(ResolweCreateModelMixin, self).create(request, *args, **kwargs)\n\n except IntegrityError as ex:\n return Response({u'error': str(ex)}, status=status.HTTP_409_CONFLICT)\n\n def perform_create(self, serializer):\n \"\"\"Create a resource.\"\"\"\n with transaction.atomic():\n instance = serializer.save()\n\n # Assign all permissions to the object contributor.\n for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access\n assign_perm(permission, instance.contributor, instance)\n\n\nclass ResolweUpdateModelMixin(mixins.UpdateModelMixin):\n \"\"\"Mixin to support updating `Resolwe` models.\n\n Extends `django_rest_framework`'s class `UpdateModelMixin` with:\n\n * translate `descriptor_schema` field from DescriptorSchema's\n slug to its id and return 400 error Response if it doesn't\n exists\n\n \"\"\"\n\n def update(self, request, *args, **kwargs):\n \"\"\"Update a resource.\"\"\"\n ds_slug = request.data.get('descriptor_schema', None)\n if ds_slug:\n ds_query = DescriptorSchema.objects.filter(slug=ds_slug)\n ds_query = get_objects_for_user(request.user, 'view_descriptorschema', ds_query)\n try:\n request.data['descriptor_schema'] = ds_query.latest().pk\n except DescriptorSchema.DoesNotExist:\n return Response(\n {'descriptor_schema': [\n 'Invalid descriptor_schema slug \"{}\" - object does not exist.'.format(ds_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n return super(ResolweUpdateModelMixin, self).update(request, *args, **kwargs)\n\n\nclass ResolweCheckSlugMixin(object):\n \"\"\"Slug validation.\"\"\"\n\n @list_route(methods=[u'get'])\n def slug_exists(self, request):\n \"\"\"Check if given url slug exists.\n\n Check if slug given in query parameter ``name`` exists. 
Return\n ``True`` if slug already exists and ``False`` otherwise.\n\n \"\"\"\n if not request.user.is_authenticated():\n return Response(status=status.HTTP_401_UNAUTHORIZED)\n\n if 'name' not in request.query_params:\n return Response({'error': 'Query parameter `name` must be given.'},\n status=status.HTTP_400_BAD_REQUEST)\n\n queryset = self.get_queryset()\n slug_name = request.query_params['name']\n return Response(queryset.filter(slug__iexact=slug_name).exists())\n", "path": "resolwe/flow/views/mixins.py"}, {"content": "\"\"\"Data viewset.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom django.db import transaction\n\nfrom guardian.shortcuts import assign_perm\nfrom rest_framework import exceptions, mixins, status, viewsets\nfrom rest_framework.decorators import list_route\nfrom rest_framework.response import Response\n\nfrom resolwe.flow.filters import DataFilter\nfrom resolwe.flow.managers import manager\nfrom resolwe.flow.models import Collection, Data, Process\nfrom resolwe.flow.serializers import DataSerializer\nfrom resolwe.flow.utils import dict_dot, get_data_checksum, iterate_schema\nfrom resolwe.permissions.loader import get_permissions_class\nfrom resolwe.permissions.mixins import ResolwePermissionsMixin\nfrom resolwe.permissions.shortcuts import get_objects_for_user\n\nfrom .mixins import ResolweCheckSlugMixin, ResolweCreateModelMixin, ResolweUpdateModelMixin\n\n\nclass DataViewSet(ResolweCreateModelMixin,\n mixins.RetrieveModelMixin,\n ResolweUpdateModelMixin,\n mixins.DestroyModelMixin,\n mixins.ListModelMixin,\n ResolwePermissionsMixin,\n ResolweCheckSlugMixin,\n viewsets.GenericViewSet):\n \"\"\"API view for :class:`Data` objects.\"\"\"\n\n queryset = Data.objects.all().prefetch_related('process', 'descriptor_schema', 'contributor')\n serializer_class = DataSerializer\n permission_classes = (get_permissions_class(),)\n filter_class = DataFilter\n ordering_fields = ('id', 'created', 'modified', 'started', 'finished', 'name')\n ordering = ('id',)\n\n def create(self, request, *args, **kwargs):\n \"\"\"Create a resource.\"\"\"\n collections = request.data.get('collections', [])\n\n # check that user has permissions on all collections that Data\n # object will be added to\n for collection_id in collections:\n try:\n collection = Collection.objects.get(pk=collection_id)\n except Collection.DoesNotExist:\n return Response({'collections': ['Invalid pk \"{}\" - object does not exist.'.format(collection_id)]},\n status=status.HTTP_400_BAD_REQUEST)\n\n if not request.user.has_perm('add_collection', obj=collection):\n if request.user.has_perm('view_collection', obj=collection):\n raise exceptions.PermissionDenied(\n \"You don't have `ADD` permission on collection (id: {}).\".format(collection_id)\n )\n else:\n raise exceptions.NotFound(\n \"Collection not found (id: {}).\".format(collection_id)\n )\n\n # translate processe's slug to id\n process_slug = request.data.get('process', None)\n process_query = Process.objects.filter(slug=process_slug)\n process_query = get_objects_for_user(request.user, 'view_process', process_query)\n try:\n process = process_query.latest()\n except Process.DoesNotExist:\n return Response({'process': ['Invalid process slug \"{}\" - object does not exist.'.format(process_slug)]},\n status=status.HTTP_400_BAD_REQUEST)\n request.data['process'] = process.pk\n\n # perform \"get_or_create\" if requested - return existing object\n # if found\n if kwargs.pop('get_or_create', False):\n process_input = 
request.data.get('input', {})\n\n # use default values if they are not given\n for field_schema, fields, path in iterate_schema(process_input, process.input_schema):\n if 'default' in field_schema and field_schema['name'] not in fields:\n dict_dot(process_input, path, field_schema['default'])\n\n checksum = get_data_checksum(process_input, process.slug, process.version)\n data_qs = Data.objects.filter(\n checksum=checksum,\n process__persistence__in=[Process.PERSISTENCE_CACHED, Process.PERSISTENCE_TEMP],\n )\n data_qs = get_objects_for_user(request.user, 'view_data', data_qs)\n if data_qs.exists():\n data = data_qs.order_by('created').last()\n serializer = self.get_serializer(data)\n return Response(serializer.data)\n\n # create the objects\n resp = super(DataViewSet, self).create(request, *args, **kwargs)\n\n # run manager\n manager.communicate()\n\n return resp\n\n @list_route(methods=[u'post'])\n def get_or_create(self, request, *args, **kwargs):\n \"\"\"Get ``Data`` object if similar already exists, otherwise create it.\"\"\"\n kwargs['get_or_create'] = True\n return self.create(request, *args, **kwargs)\n\n def perform_create(self, serializer):\n \"\"\"Create a resource.\"\"\"\n with transaction.atomic():\n instance = serializer.save()\n\n # Assign all permissions to the object contributor.\n for permission in list(zip(*instance._meta.permissions))[0]: # pylint: disable=protected-access\n assign_perm(permission, instance.contributor, instance)\n\n # Assign data object to all specified collections.\n collections = self.request.data.get('collections', [])\n for c in collections:\n collection = Collection.objects.get(pk=c)\n collection.data.add(instance)\n", "path": "resolwe/flow/views/data.py"}]} | 2,689 | 439 |
gh_patches_debug_63309 | rasdani/github-patches | git_diff | scikit-hep__pyhf-924 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove cloudpickle constraints when updating to TensorFlow Probability v0.11
# Description
Once TensorFlow Probability `v0.11.0` is released there will no longer be a need for PR #915, so that patch should be reverted.
Related Issues: #815
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'tensorflow': [
5 'tensorflow~=2.0',
6 'tensorflow-probability~=0.8',
7 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
11 'xmlio': ['uproot'],
12 'minuit': ['iminuit'],
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted(set(['matplotlib']))
23 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + [
31 'pytest~=3.5',
32 'pytest-cov>=2.5.1',
33 'pytest-mock',
34 'pytest-benchmark[histogram]',
35 'pytest-console-scripts',
36 'pytest-mpl',
37 'pydocstyle',
38 'coverage>=4.0', # coveralls
39 'papermill~=2.0',
40 'nteract-scrapbook~=0.2',
41 'jupyter',
42 'uproot~=3.3',
43 'graphviz',
44 'jsonpatch',
45 ]
46 )
47 )
48 extras_require['docs'] = sorted(
49 set(
50 [
51 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
52 'sphinxcontrib-bibtex',
53 'sphinx-click',
54 'sphinx_rtd_theme',
55 'nbsphinx',
56 'ipywidgets',
57 'sphinx-issues',
58 'sphinx-copybutton>0.2.9',
59 ]
60 )
61 )
62 extras_require['develop'] = sorted(
63 set(
64 extras_require['docs']
65 + extras_require['lint']
66 + extras_require['test']
67 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
68 )
69 )
70 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
71
72
73 setup(
74 extras_require=extras_require,
75 use_scm_version=lambda: {'local_scheme': lambda version: ''},
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -3,8 +3,7 @@
extras_require = {
'tensorflow': [
'tensorflow~=2.0',
- 'tensorflow-probability~=0.8',
- 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11
+ 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
],
'torch': ['torch~=1.2'],
'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -3,8 +3,7 @@\n extras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n- 'tensorflow-probability~=0.8',\n- 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11\n+ 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n", "issue": "Remove cloudpickle constraints when updating to TensorFlow Probability v0.11\n# Description\r\n\r\nOnce TensorFlow Probability `v0.11.0` is released there will no longer be the need for PR #915, and so that should be reverted.\r\n\r\nRelated Issues: #815 \r\n\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.8',\n 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx~=3.0.0', # 
Sphinx v3.1.X regressions break docs\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,018 | 162 |
gh_patches_debug_29296 | rasdani/github-patches | git_diff | ethereum__consensus-specs-1156 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use custom types for container definitions
This is the "successor" to #695 which was recently closed for staleness. I'm intending to address this alongside a rewrite of #917.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/build_spec.py`
Content:
```
1 import re
2 from function_puller import (
3 get_spec,
4 SpecObject,
5 )
6 from argparse import ArgumentParser
7 from typing import (
8 Dict,
9 List,
10 Optional,
11 )
12
13
14 PHASE0_IMPORTS = '''from typing import (
15 Any,
16 Dict,
17 List,
18 NewType,
19 Tuple,
20 )
21
22 from eth2spec.utils.ssz.ssz_impl import (
23 hash_tree_root,
24 signing_root,
25 )
26 from eth2spec.utils.ssz.ssz_typing import (
27 # unused: uint8, uint16, uint32, uint128, uint256,
28 uint64, Container, Vector, BytesN
29 )
30 from eth2spec.utils.bls import (
31 bls_aggregate_pubkeys,
32 bls_verify,
33 bls_verify_multiple,
34 )
35 # Note: 'int' type defaults to being interpreted as a uint64 by SSZ implementation.
36
37 from eth2spec.utils.hash_function import hash
38 '''
39 PHASE1_IMPORTS = '''from typing import (
40 Any,
41 Dict,
42 List,
43 NewType,
44 Tuple,
45 )
46
47 from eth2spec.utils.ssz.ssz_impl import (
48 hash_tree_root,
49 signing_root,
50 serialize,
51 is_empty,
52 )
53 from eth2spec.utils.ssz.ssz_typing import (
54 # unused: uint8, uint16, uint32, uint128, uint256,
55 uint64, Container, Vector, BytesN
56 )
57 from eth2spec.utils.bls import (
58 bls_aggregate_pubkeys,
59 bls_verify,
60 bls_verify_multiple,
61 )
62
63 from eth2spec.utils.hash_function import hash
64 '''
65 NEW_TYPES = {
66 'Slot': 'int',
67 'Epoch': 'int',
68 'Shard': 'int',
69 'ValidatorIndex': 'int',
70 'Gwei': 'int',
71 }
72 BYTE_TYPES = [4, 32, 48, 96]
73 SUNDRY_FUNCTIONS = '''
74 def get_ssz_type_by_name(name: str) -> Container:
75 return globals()[name]
76
77
78 # Monkey patch validator compute committee code
79 _compute_committee = compute_committee
80 committee_cache = {}
81
82
83 def compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:
84 param_hash = (hash_tree_root(indices), seed, index, count)
85
86 if param_hash in committee_cache:
87 return committee_cache[param_hash]
88 else:
89 ret = _compute_committee(indices, seed, index, count)
90 committee_cache[param_hash] = ret
91 return ret
92
93
94 # Monkey patch hash cache
95 _hash = hash
96 hash_cache = {}
97
98
99 def hash(x):
100 if x in hash_cache:
101 return hash_cache[x]
102 else:
103 ret = _hash(x)
104 hash_cache[x] = ret
105 return ret
106
107
108 # Access to overwrite spec constants based on configuration
109 def apply_constants_preset(preset: Dict[str, Any]):
110 global_vars = globals()
111 for k, v in preset.items():
112 global_vars[k] = v
113
114 # Deal with derived constants
115 global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)
116
117 # Initialize SSZ types again, to account for changed lengths
118 init_SSZ_types()
119 '''
120
121
122 def objects_to_spec(functions: Dict[str, str],
123 constants: Dict[str, str],
124 ssz_objects: Dict[str, str],
125 inserts: Dict[str, str],
126 imports: Dict[str, str],
127 new_types: Dict[str, str],
128 byte_types: List[int],
129 ) -> str:
130 """
131 Given all the objects that constitute a spec, combine them into a single pyfile.
132 """
133 new_type_definitions = \
134 '\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])
135 new_type_definitions += '\n' + '\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])
136 functions_spec = '\n\n'.join(functions.values())
137 constants_spec = '\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))
138 ssz_objects_instantiation_spec = '\n\n'.join(ssz_objects.values())
139 ssz_objects_reinitialization_spec = (
140 'def init_SSZ_types():\n global_vars = globals()\n\n '
141 + '\n\n '.join([re.sub(r'(?!\n\n)\n', r'\n ', value[:-1]) for value in ssz_objects.values()])
142 + '\n\n'
143 + '\n'.join(map(lambda x: ' global_vars[\'%s\'] = %s' % (x, x), ssz_objects.keys()))
144 )
145 spec = (
146 imports
147 + '\n' + new_type_definitions
148 + '\n\n' + constants_spec
149 + '\n\n\n' + ssz_objects_instantiation_spec
150 + '\n\n' + functions_spec
151 + '\n' + SUNDRY_FUNCTIONS
152 + '\n\n' + ssz_objects_reinitialization_spec
153 + '\n'
154 )
155 # Handle @inserts
156 for key, value in inserts.items():
157 spec = re.sub('[ ]*# %s\\n' % key, value, spec)
158 return spec
159
160
161 def combine_functions(old_functions: Dict[str, str], new_functions: Dict[str, str]) -> Dict[str, str]:
162 for key, value in new_functions.items():
163 old_functions[key] = value
164 return old_functions
165
166
167 def combine_constants(old_constants: Dict[str, str], new_constants: Dict[str, str]) -> Dict[str, str]:
168 for key, value in new_constants.items():
169 old_constants[key] = value
170 return old_constants
171
172
173 def dependency_order_ssz_objects(objects: Dict[str, str]) -> None:
174 """
175 Determines which SSZ Object is depenedent on which other and orders them appropriately
176 """
177 items = list(objects.items())
178 for key, value in items:
179 dependencies = re.findall(r'(: [A-Z][\w[]*)', value)
180 dependencies = map(lambda x: re.sub(r'\W|Vector|List|Container|uint\d+|Bytes\d+|bytes', '', x), dependencies)
181 for dep in dependencies:
182 if dep in NEW_TYPES or len(dep) == 0:
183 continue
184 key_list = list(objects.keys())
185 for item in [dep, key] + key_list[key_list.index(dep)+1:]:
186 objects[item] = objects.pop(item)
187
188
189 def combine_ssz_objects(old_objects: Dict[str, str], new_objects: Dict[str, str]) -> Dict[str, str]:
190 """
191 Takes in old spec and new spec ssz objects, combines them,
192 and returns the newer versions of the objects in dependency order.
193 """
194 for key, value in new_objects.items():
195 if key in old_objects:
196 # remove trailing newline
197 old_objects[key] = old_objects[key]
198 # remove leading variable name
199 value = re.sub(r'^class [\w]*\(Container\):\n', '', value)
200 old_objects[key] = old_objects.get(key, '') + value
201 dependency_order_ssz_objects(old_objects)
202 return old_objects
203
204
205 # inserts are handeled the same way as functions
206 combine_inserts = combine_functions
207
208
209 def combine_spec_objects(spec0: SpecObject, spec1: SpecObject) -> SpecObject:
210 """
211 Takes in two spec variants (as tuples of their objects) and combines them using the appropriate combiner function.
212 """
213 functions0, constants0, ssz_objects0, inserts0 = spec0
214 functions1, constants1, ssz_objects1, inserts1 = spec1
215 functions = combine_functions(functions0, functions1)
216 constants = combine_constants(constants0, constants1)
217 ssz_objects = combine_ssz_objects(ssz_objects0, ssz_objects1)
218 inserts = combine_inserts(inserts0, inserts1)
219 return functions, constants, ssz_objects, inserts
220
221
222 def build_phase0_spec(sourcefile: str, outfile: str=None) -> Optional[str]:
223 functions, constants, ssz_objects, inserts = get_spec(sourcefile)
224 spec = objects_to_spec(functions, constants, ssz_objects, inserts, PHASE0_IMPORTS, NEW_TYPES, BYTE_TYPES)
225 if outfile is not None:
226 with open(outfile, 'w') as out:
227 out.write(spec)
228 return spec
229
230
231 def build_phase1_spec(phase0_sourcefile: str,
232 phase1_custody_sourcefile: str,
233 phase1_shard_sourcefile: str,
234 outfile: str=None) -> Optional[str]:
235 phase0_spec = get_spec(phase0_sourcefile)
236 phase1_custody = get_spec(phase1_custody_sourcefile)
237 phase1_shard_data = get_spec(phase1_shard_sourcefile)
238 spec_objects = phase0_spec
239 for value in [phase1_custody, phase1_shard_data]:
240 spec_objects = combine_spec_objects(spec_objects, value)
241 spec = objects_to_spec(*spec_objects, PHASE1_IMPORTS, NEW_TYPES, BYTE_TYPES)
242 if outfile is not None:
243 with open(outfile, 'w') as out:
244 out.write(spec)
245 return spec
246
247
248 if __name__ == '__main__':
249 description = '''
250 Build the specs from the md docs.
251 If building phase 0:
252 1st argument is input spec.md
253 2nd argument is output spec.py
254
255 If building phase 1:
256 1st argument is input spec_phase0.md
257 2nd argument is input spec_phase1_custody.md
258 3rd argument is input spec_phase1_shard_data.md
259 4th argument is output spec.py
260 '''
261 parser = ArgumentParser(description=description)
262 parser.add_argument("-p", "--phase", dest="phase", type=int, default=0, help="Build for phase #")
263 parser.add_argument(dest="files", help="Input and output files", nargs="+")
264
265 args = parser.parse_args()
266 if args.phase == 0:
267 if len(args.files) == 2:
268 build_phase0_spec(*args.files)
269 else:
270 print(" Phase 0 requires an output as well as an input file.")
271 elif args.phase == 1:
272 if len(args.files) == 4:
273 build_phase1_spec(*args.files)
274 else:
275 print(" Phase 1 requires an output as well as 3 input files (phase0.md and phase1.md, phase1.md)")
276 else:
277 print("Invalid phase: {0}".format(args.phase))
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/build_spec.py b/scripts/build_spec.py
--- a/scripts/build_spec.py
+++ b/scripts/build_spec.py
@@ -130,9 +130,10 @@
"""
Given all the objects that constitute a spec, combine them into a single pyfile.
"""
- new_type_definitions = \
- '\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])
- new_type_definitions += '\n' + '\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])
+ new_type_definitions = '\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])
+ new_type_definitions += '\n' + '\n'.join(['Hash = Bytes32', 'BLSPubkey = Bytes48', 'BLSSignature = Bytes96'])
+ new_type_definitions += \
+ '\n' + '\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])
functions_spec = '\n\n'.join(functions.values())
constants_spec = '\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))
ssz_objects_instantiation_spec = '\n\n'.join(ssz_objects.values())
@@ -177,7 +178,7 @@
items = list(objects.items())
for key, value in items:
dependencies = re.findall(r'(: [A-Z][\w[]*)', value)
- dependencies = map(lambda x: re.sub(r'\W|Vector|List|Container|uint\d+|Bytes\d+|bytes', '', x), dependencies)
+ dependencies = map(lambda x: re.sub(r'\W|Vector|List|Container|Hash|BLSPubkey|BLSSignature|uint\d+|Bytes\d+|bytes', '', x), dependencies)
for dep in dependencies:
if dep in NEW_TYPES or len(dep) == 0:
continue
| {"golden_diff": "diff --git a/scripts/build_spec.py b/scripts/build_spec.py\n--- a/scripts/build_spec.py\n+++ b/scripts/build_spec.py\n@@ -130,9 +130,10 @@\n \"\"\"\n Given all the objects that constitute a spec, combine them into a single pyfile.\n \"\"\"\n- new_type_definitions = \\\n- '\\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])\n- new_type_definitions += '\\n' + '\\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])\n+ new_type_definitions = '\\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])\n+ new_type_definitions += '\\n' + '\\n'.join(['Hash = Bytes32', 'BLSPubkey = Bytes48', 'BLSSignature = Bytes96'])\n+ new_type_definitions += \\\n+ '\\n' + '\\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])\n functions_spec = '\\n\\n'.join(functions.values())\n constants_spec = '\\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))\n ssz_objects_instantiation_spec = '\\n\\n'.join(ssz_objects.values())\n@@ -177,7 +178,7 @@\n items = list(objects.items())\n for key, value in items:\n dependencies = re.findall(r'(: [A-Z][\\w[]*)', value)\n- dependencies = map(lambda x: re.sub(r'\\W|Vector|List|Container|uint\\d+|Bytes\\d+|bytes', '', x), dependencies)\n+ dependencies = map(lambda x: re.sub(r'\\W|Vector|List|Container|Hash|BLSPubkey|BLSSignature|uint\\d+|Bytes\\d+|bytes', '', x), dependencies)\n for dep in dependencies:\n if dep in NEW_TYPES or len(dep) == 0:\n continue\n", "issue": "Use custom types for container definitions\nThis is the \"successor\" to #695 which was recently closed for staleness. I'm intending to address this alongside a rewrite of #917.\n", "before_files": [{"content": "import re\nfrom function_puller import (\n get_spec,\n SpecObject,\n)\nfrom argparse import ArgumentParser\nfrom typing import (\n Dict,\n List,\n Optional,\n)\n\n\nPHASE0_IMPORTS = '''from typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\n\nfrom eth2spec.utils.ssz.ssz_impl import (\n hash_tree_root,\n signing_root,\n)\nfrom eth2spec.utils.ssz.ssz_typing import (\n # unused: uint8, uint16, uint32, uint128, uint256,\n uint64, Container, Vector, BytesN\n)\nfrom eth2spec.utils.bls import (\n bls_aggregate_pubkeys,\n bls_verify,\n bls_verify_multiple,\n)\n# Note: 'int' type defaults to being interpreted as a uint64 by SSZ implementation.\n\nfrom eth2spec.utils.hash_function import hash\n'''\nPHASE1_IMPORTS = '''from typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\n\nfrom eth2spec.utils.ssz.ssz_impl import (\n hash_tree_root,\n signing_root,\n serialize,\n is_empty,\n)\nfrom eth2spec.utils.ssz.ssz_typing import (\n # unused: uint8, uint16, uint32, uint128, uint256,\n uint64, Container, Vector, BytesN\n)\nfrom eth2spec.utils.bls import (\n bls_aggregate_pubkeys,\n bls_verify,\n bls_verify_multiple,\n)\n\nfrom eth2spec.utils.hash_function import hash\n'''\nNEW_TYPES = {\n 'Slot': 'int',\n 'Epoch': 'int',\n 'Shard': 'int',\n 'ValidatorIndex': 'int',\n 'Gwei': 'int',\n}\nBYTE_TYPES = [4, 32, 48, 96]\nSUNDRY_FUNCTIONS = '''\ndef get_ssz_type_by_name(name: str) -> Container:\n return globals()[name]\n\n\n# Monkey patch validator compute committee code\n_compute_committee = compute_committee\ncommittee_cache = {}\n\n\ndef compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:\n param_hash = (hash_tree_root(indices), seed, index, count)\n\n if param_hash in committee_cache:\n return 
committee_cache[param_hash]\n else:\n ret = _compute_committee(indices, seed, index, count)\n committee_cache[param_hash] = ret\n return ret\n\n\n# Monkey patch hash cache\n_hash = hash\nhash_cache = {}\n\n\ndef hash(x):\n if x in hash_cache:\n return hash_cache[x]\n else:\n ret = _hash(x)\n hash_cache[x] = ret\n return ret\n\n\n# Access to overwrite spec constants based on configuration\ndef apply_constants_preset(preset: Dict[str, Any]):\n global_vars = globals()\n for k, v in preset.items():\n global_vars[k] = v\n\n # Deal with derived constants\n global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)\n\n # Initialize SSZ types again, to account for changed lengths\n init_SSZ_types()\n'''\n\n\ndef objects_to_spec(functions: Dict[str, str],\n constants: Dict[str, str],\n ssz_objects: Dict[str, str],\n inserts: Dict[str, str],\n imports: Dict[str, str],\n new_types: Dict[str, str],\n byte_types: List[int],\n ) -> str:\n \"\"\"\n Given all the objects that constitute a spec, combine them into a single pyfile.\n \"\"\"\n new_type_definitions = \\\n '\\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])\n new_type_definitions += '\\n' + '\\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])\n functions_spec = '\\n\\n'.join(functions.values())\n constants_spec = '\\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))\n ssz_objects_instantiation_spec = '\\n\\n'.join(ssz_objects.values())\n ssz_objects_reinitialization_spec = (\n 'def init_SSZ_types():\\n global_vars = globals()\\n\\n '\n + '\\n\\n '.join([re.sub(r'(?!\\n\\n)\\n', r'\\n ', value[:-1]) for value in ssz_objects.values()])\n + '\\n\\n'\n + '\\n'.join(map(lambda x: ' global_vars[\\'%s\\'] = %s' % (x, x), ssz_objects.keys()))\n )\n spec = (\n imports\n + '\\n' + new_type_definitions\n + '\\n\\n' + constants_spec\n + '\\n\\n\\n' + ssz_objects_instantiation_spec\n + '\\n\\n' + functions_spec\n + '\\n' + SUNDRY_FUNCTIONS\n + '\\n\\n' + ssz_objects_reinitialization_spec\n + '\\n'\n )\n # Handle @inserts\n for key, value in inserts.items():\n spec = re.sub('[ ]*# %s\\\\n' % key, value, spec)\n return spec\n\n\ndef combine_functions(old_functions: Dict[str, str], new_functions: Dict[str, str]) -> Dict[str, str]:\n for key, value in new_functions.items():\n old_functions[key] = value\n return old_functions\n\n\ndef combine_constants(old_constants: Dict[str, str], new_constants: Dict[str, str]) -> Dict[str, str]:\n for key, value in new_constants.items():\n old_constants[key] = value\n return old_constants\n\n\ndef dependency_order_ssz_objects(objects: Dict[str, str]) -> None:\n \"\"\"\n Determines which SSZ Object is depenedent on which other and orders them appropriately\n \"\"\"\n items = list(objects.items())\n for key, value in items:\n dependencies = re.findall(r'(: [A-Z][\\w[]*)', value)\n dependencies = map(lambda x: re.sub(r'\\W|Vector|List|Container|uint\\d+|Bytes\\d+|bytes', '', x), dependencies)\n for dep in dependencies:\n if dep in NEW_TYPES or len(dep) == 0:\n continue\n key_list = list(objects.keys())\n for item in [dep, key] + key_list[key_list.index(dep)+1:]:\n objects[item] = objects.pop(item)\n\n\ndef combine_ssz_objects(old_objects: Dict[str, str], new_objects: Dict[str, str]) -> Dict[str, str]:\n \"\"\"\n Takes in old spec and new spec ssz objects, combines them,\n and returns the newer versions of the objects in dependency order.\n \"\"\"\n for key, value in new_objects.items():\n if key in old_objects:\n # remove trailing newline\n 
old_objects[key] = old_objects[key]\n # remove leading variable name\n value = re.sub(r'^class [\\w]*\\(Container\\):\\n', '', value)\n old_objects[key] = old_objects.get(key, '') + value\n dependency_order_ssz_objects(old_objects)\n return old_objects\n\n\n# inserts are handeled the same way as functions\ncombine_inserts = combine_functions\n\n\ndef combine_spec_objects(spec0: SpecObject, spec1: SpecObject) -> SpecObject:\n \"\"\"\n Takes in two spec variants (as tuples of their objects) and combines them using the appropriate combiner function.\n \"\"\"\n functions0, constants0, ssz_objects0, inserts0 = spec0\n functions1, constants1, ssz_objects1, inserts1 = spec1\n functions = combine_functions(functions0, functions1)\n constants = combine_constants(constants0, constants1)\n ssz_objects = combine_ssz_objects(ssz_objects0, ssz_objects1)\n inserts = combine_inserts(inserts0, inserts1)\n return functions, constants, ssz_objects, inserts\n\n\ndef build_phase0_spec(sourcefile: str, outfile: str=None) -> Optional[str]:\n functions, constants, ssz_objects, inserts = get_spec(sourcefile)\n spec = objects_to_spec(functions, constants, ssz_objects, inserts, PHASE0_IMPORTS, NEW_TYPES, BYTE_TYPES)\n if outfile is not None:\n with open(outfile, 'w') as out:\n out.write(spec)\n return spec\n\n\ndef build_phase1_spec(phase0_sourcefile: str,\n phase1_custody_sourcefile: str,\n phase1_shard_sourcefile: str,\n outfile: str=None) -> Optional[str]:\n phase0_spec = get_spec(phase0_sourcefile)\n phase1_custody = get_spec(phase1_custody_sourcefile)\n phase1_shard_data = get_spec(phase1_shard_sourcefile)\n spec_objects = phase0_spec\n for value in [phase1_custody, phase1_shard_data]:\n spec_objects = combine_spec_objects(spec_objects, value)\n spec = objects_to_spec(*spec_objects, PHASE1_IMPORTS, NEW_TYPES, BYTE_TYPES)\n if outfile is not None:\n with open(outfile, 'w') as out:\n out.write(spec)\n return spec\n\n\nif __name__ == '__main__':\n description = '''\nBuild the specs from the md docs.\nIf building phase 0:\n 1st argument is input spec.md\n 2nd argument is output spec.py\n\nIf building phase 1:\n 1st argument is input spec_phase0.md\n 2nd argument is input spec_phase1_custody.md\n 3rd argument is input spec_phase1_shard_data.md\n 4th argument is output spec.py\n'''\n parser = ArgumentParser(description=description)\n parser.add_argument(\"-p\", \"--phase\", dest=\"phase\", type=int, default=0, help=\"Build for phase #\")\n parser.add_argument(dest=\"files\", help=\"Input and output files\", nargs=\"+\")\n\n args = parser.parse_args()\n if args.phase == 0:\n if len(args.files) == 2:\n build_phase0_spec(*args.files)\n else:\n print(\" Phase 0 requires an output as well as an input file.\")\n elif args.phase == 1:\n if len(args.files) == 4:\n build_phase1_spec(*args.files)\n else:\n print(\" Phase 1 requires an output as well as 3 input files (phase0.md and phase1.md, phase1.md)\")\n else:\n print(\"Invalid phase: {0}\".format(args.phase))\n", "path": "scripts/build_spec.py"}], "after_files": [{"content": "import re\nfrom function_puller import (\n get_spec,\n SpecObject,\n)\nfrom argparse import ArgumentParser\nfrom typing import (\n Dict,\n List,\n Optional,\n)\n\n\nPHASE0_IMPORTS = '''from typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\n\nfrom eth2spec.utils.ssz.ssz_impl import (\n hash_tree_root,\n signing_root,\n)\nfrom eth2spec.utils.ssz.ssz_typing import (\n # unused: uint8, uint16, uint32, uint128, uint256,\n uint64, Container, Vector, BytesN\n)\nfrom eth2spec.utils.bls import (\n 
bls_aggregate_pubkeys,\n bls_verify,\n bls_verify_multiple,\n)\n# Note: 'int' type defaults to being interpreted as a uint64 by SSZ implementation.\n\nfrom eth2spec.utils.hash_function import hash\n'''\nPHASE1_IMPORTS = '''from typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\n\nfrom eth2spec.utils.ssz.ssz_impl import (\n hash_tree_root,\n signing_root,\n serialize,\n is_empty,\n)\nfrom eth2spec.utils.ssz.ssz_typing import (\n # unused: uint8, uint16, uint32, uint128, uint256,\n uint64, Container, Vector, BytesN\n)\nfrom eth2spec.utils.bls import (\n bls_aggregate_pubkeys,\n bls_verify,\n bls_verify_multiple,\n)\n\nfrom eth2spec.utils.hash_function import hash\n'''\nNEW_TYPES = {\n 'Slot': 'int',\n 'Epoch': 'int',\n 'Shard': 'int',\n 'ValidatorIndex': 'int',\n 'Gwei': 'int',\n}\nBYTE_TYPES = [4, 32, 48, 96]\nSUNDRY_FUNCTIONS = '''\ndef get_ssz_type_by_name(name: str) -> Container:\n return globals()[name]\n\n\n# Monkey patch validator compute committee code\n_compute_committee = compute_committee\ncommittee_cache = {}\n\n\ndef compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:\n param_hash = (hash_tree_root(indices), seed, index, count)\n\n if param_hash in committee_cache:\n return committee_cache[param_hash]\n else:\n ret = _compute_committee(indices, seed, index, count)\n committee_cache[param_hash] = ret\n return ret\n\n\n# Monkey patch hash cache\n_hash = hash\nhash_cache = {}\n\n\ndef hash(x):\n if x in hash_cache:\n return hash_cache[x]\n else:\n ret = _hash(x)\n hash_cache[x] = ret\n return ret\n\n\n# Access to overwrite spec constants based on configuration\ndef apply_constants_preset(preset: Dict[str, Any]):\n global_vars = globals()\n for k, v in preset.items():\n global_vars[k] = v\n\n # Deal with derived constants\n global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)\n\n # Initialize SSZ types again, to account for changed lengths\n init_SSZ_types()\n'''\n\n\ndef objects_to_spec(functions: Dict[str, str],\n constants: Dict[str, str],\n ssz_objects: Dict[str, str],\n inserts: Dict[str, str],\n imports: Dict[str, str],\n new_types: Dict[str, str],\n byte_types: List[int],\n ) -> str:\n \"\"\"\n Given all the objects that constitute a spec, combine them into a single pyfile.\n \"\"\"\n new_type_definitions = '\\n'.join(['Bytes%s = BytesN[%s]' % (n, n) for n in byte_types])\n new_type_definitions += '\\n' + '\\n'.join(['Hash = Bytes32', 'BLSPubkey = Bytes48', 'BLSSignature = Bytes96'])\n new_type_definitions += \\\n '\\n' + '\\n'.join(['''%s = NewType('%s', %s)''' % (key, key, value) for key, value in new_types.items()])\n functions_spec = '\\n\\n'.join(functions.values())\n constants_spec = '\\n'.join(map(lambda x: '%s = %s' % (x, constants[x]), constants))\n ssz_objects_instantiation_spec = '\\n\\n'.join(ssz_objects.values())\n ssz_objects_reinitialization_spec = (\n 'def init_SSZ_types():\\n global_vars = globals()\\n\\n '\n + '\\n\\n '.join([re.sub(r'(?!\\n\\n)\\n', r'\\n ', value[:-1]) for value in ssz_objects.values()])\n + '\\n\\n'\n + '\\n'.join(map(lambda x: ' global_vars[\\'%s\\'] = %s' % (x, x), ssz_objects.keys()))\n )\n spec = (\n imports\n + '\\n' + new_type_definitions\n + '\\n\\n' + constants_spec\n + '\\n\\n\\n' + ssz_objects_instantiation_spec\n + '\\n\\n' + functions_spec\n + '\\n' + SUNDRY_FUNCTIONS\n + '\\n\\n' + ssz_objects_reinitialization_spec\n + '\\n'\n )\n # Handle @inserts\n for key, value in inserts.items():\n spec = re.sub('[ ]*# %s\\\\n' % key, value, spec)\n return 
spec\n\n\ndef combine_functions(old_functions: Dict[str, str], new_functions: Dict[str, str]) -> Dict[str, str]:\n for key, value in new_functions.items():\n old_functions[key] = value\n return old_functions\n\n\ndef combine_constants(old_constants: Dict[str, str], new_constants: Dict[str, str]) -> Dict[str, str]:\n for key, value in new_constants.items():\n old_constants[key] = value\n return old_constants\n\n\ndef dependency_order_ssz_objects(objects: Dict[str, str]) -> None:\n \"\"\"\n Determines which SSZ Object is depenedent on which other and orders them appropriately\n \"\"\"\n items = list(objects.items())\n for key, value in items:\n dependencies = re.findall(r'(: [A-Z][\\w[]*)', value)\n dependencies = map(lambda x: re.sub(r'\\W|Vector|List|Container|Hash|BLSPubkey|BLSSignature|uint\\d+|Bytes\\d+|bytes', '', x), dependencies)\n for dep in dependencies:\n if dep in NEW_TYPES or len(dep) == 0:\n continue\n key_list = list(objects.keys())\n for item in [dep, key] + key_list[key_list.index(dep)+1:]:\n objects[item] = objects.pop(item)\n\n\ndef combine_ssz_objects(old_objects: Dict[str, str], new_objects: Dict[str, str]) -> Dict[str, str]:\n \"\"\"\n Takes in old spec and new spec ssz objects, combines them,\n and returns the newer versions of the objects in dependency order.\n \"\"\"\n for key, value in new_objects.items():\n if key in old_objects:\n # remove trailing newline\n old_objects[key] = old_objects[key]\n # remove leading variable name\n value = re.sub(r'^class [\\w]*\\(Container\\):\\n', '', value)\n old_objects[key] = old_objects.get(key, '') + value\n dependency_order_ssz_objects(old_objects)\n return old_objects\n\n\n# inserts are handeled the same way as functions\ncombine_inserts = combine_functions\n\n\ndef combine_spec_objects(spec0: SpecObject, spec1: SpecObject) -> SpecObject:\n \"\"\"\n Takes in two spec variants (as tuples of their objects) and combines them using the appropriate combiner function.\n \"\"\"\n functions0, constants0, ssz_objects0, inserts0 = spec0\n functions1, constants1, ssz_objects1, inserts1 = spec1\n functions = combine_functions(functions0, functions1)\n constants = combine_constants(constants0, constants1)\n ssz_objects = combine_ssz_objects(ssz_objects0, ssz_objects1)\n inserts = combine_inserts(inserts0, inserts1)\n return functions, constants, ssz_objects, inserts\n\n\ndef build_phase0_spec(sourcefile: str, outfile: str=None) -> Optional[str]:\n functions, constants, ssz_objects, inserts = get_spec(sourcefile)\n spec = objects_to_spec(functions, constants, ssz_objects, inserts, PHASE0_IMPORTS, NEW_TYPES, BYTE_TYPES)\n if outfile is not None:\n with open(outfile, 'w') as out:\n out.write(spec)\n return spec\n\n\ndef build_phase1_spec(phase0_sourcefile: str,\n phase1_custody_sourcefile: str,\n phase1_shard_sourcefile: str,\n outfile: str=None) -> Optional[str]:\n phase0_spec = get_spec(phase0_sourcefile)\n phase1_custody = get_spec(phase1_custody_sourcefile)\n phase1_shard_data = get_spec(phase1_shard_sourcefile)\n spec_objects = phase0_spec\n for value in [phase1_custody, phase1_shard_data]:\n spec_objects = combine_spec_objects(spec_objects, value)\n spec = objects_to_spec(*spec_objects, PHASE1_IMPORTS, NEW_TYPES, BYTE_TYPES)\n if outfile is not None:\n with open(outfile, 'w') as out:\n out.write(spec)\n return spec\n\n\nif __name__ == '__main__':\n description = '''\nBuild the specs from the md docs.\nIf building phase 0:\n 1st argument is input spec.md\n 2nd argument is output spec.py\n\nIf building phase 1:\n 1st argument is input 
spec_phase0.md\n 2nd argument is input spec_phase1_custody.md\n 3rd argument is input spec_phase1_shard_data.md\n 4th argument is output spec.py\n'''\n parser = ArgumentParser(description=description)\n parser.add_argument(\"-p\", \"--phase\", dest=\"phase\", type=int, default=0, help=\"Build for phase #\")\n parser.add_argument(dest=\"files\", help=\"Input and output files\", nargs=\"+\")\n\n args = parser.parse_args()\n if args.phase == 0:\n if len(args.files) == 2:\n build_phase0_spec(*args.files)\n else:\n print(\" Phase 0 requires an output as well as an input file.\")\n elif args.phase == 1:\n if len(args.files) == 4:\n build_phase1_spec(*args.files)\n else:\n print(\" Phase 1 requires an output as well as 3 input files (phase0.md and phase1.md, phase1.md)\")\n else:\n print(\"Invalid phase: {0}\".format(args.phase))\n", "path": "scripts/build_spec.py"}]} | 3,348 | 476 |
gh_patches_debug_1435 | rasdani/github-patches | git_diff | keras-team__keras-1039 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
keras.utils.visualize_util
line 9: if type(model) == Sequential
Global name Sequential is not defined
line 25: elif type(model) == Graph:
Global name Graph is not defined
--- END ISSUE ---
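A minimal way to reproduce the failure described above, for illustration only (it assumes a keras checkout from before the fix, with `pydot` installed so that `visualize_util` imports cleanly; the model can stay empty because the name lookup fails before any layers are inspected):

```python
from keras.models import Sequential
from keras.utils.visualize_util import plot

# The comparison `type(model) == Sequential` inside visualize_util.py looks up a
# name that the module never imports, so any model triggers the error.
plot(Sequential(), to_file='model.png')
# -> NameError: global name 'Sequential' is not defined
```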
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/visualize_util.py`
Content:
```
1 import pydot
2 # old pydot will not work with python3, must use one
3 # that works with python3 such as pydot2 or pydot
4
5
6 def plot(model, to_file='model.png'):
7
8 graph = pydot.Dot(graph_type='digraph')
9 if type(model) == Sequential:
10 previous_node = None
11 written_nodes = []
12 n = 1
13 for node in model.get_config()['layers']:
14 # append number in case layers have same name to differentiate
15 if (node['name'] + str(n)) in written_nodes:
16 n += 1
17 current_node = pydot.Node(node['name'] + str(n))
18 written_nodes.append(node['name'] + str(n))
19 graph.add_node(current_node)
20 if previous_node:
21 graph.add_edge(pydot.Edge(previous_node, current_node))
22 previous_node = current_node
23 graph.write_png(to_file)
24
25 elif type(model) == Graph:
26 # don't need to append number for names since all nodes labeled
27 for input_node in model.input_config:
28 graph.add_node(pydot.Node(input_node['name']))
29
30 # intermediate and output nodes have input defined
31 for layer_config in [model.node_config, model.output_config]:
32 for node in layer_config:
33 graph.add_node(pydot.Node(node['name']))
34 # possible to have multiple 'inputs' vs 1 'input'
35 if node['inputs']:
36 for e in node['inputs']:
37 graph.add_edge(pydot.Edge(e, node['name']))
38 else:
39 graph.add_edge(pydot.Edge(node['input'], node['name']))
40
41 graph.write_png(to_file)
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras/utils/visualize_util.py b/keras/utils/visualize_util.py
--- a/keras/utils/visualize_util.py
+++ b/keras/utils/visualize_util.py
@@ -1,7 +1,7 @@
import pydot
# old pydot will not work with python3, must use one
# that works with python3 such as pydot2 or pydot
-
+from keras.models import Sequential, Graph
def plot(model, to_file='model.png'):
| {"golden_diff": "diff --git a/keras/utils/visualize_util.py b/keras/utils/visualize_util.py\n--- a/keras/utils/visualize_util.py\n+++ b/keras/utils/visualize_util.py\n@@ -1,7 +1,7 @@\n import pydot\n # old pydot will not work with python3, must use one\n # that works with python3 such as pydot2 or pydot\n-\n+from keras.models import Sequential, Graph\n \n def plot(model, to_file='model.png'):\n", "issue": "keras.utils.visualize_util\nline 9: if type(model) == Sequential\nGlobal name Sequential is not defined\nline 25 elif type(model) == Graph:\nGlobal name Graph is not defined\n\n", "before_files": [{"content": "import pydot\n# old pydot will not work with python3, must use one\n# that works with python3 such as pydot2 or pydot\n\n\ndef plot(model, to_file='model.png'):\n\n graph = pydot.Dot(graph_type='digraph')\n if type(model) == Sequential:\n previous_node = None\n written_nodes = []\n n = 1\n for node in model.get_config()['layers']:\n # append number in case layers have same name to differentiate\n if (node['name'] + str(n)) in written_nodes:\n n += 1\n current_node = pydot.Node(node['name'] + str(n))\n written_nodes.append(node['name'] + str(n))\n graph.add_node(current_node)\n if previous_node:\n graph.add_edge(pydot.Edge(previous_node, current_node))\n previous_node = current_node\n graph.write_png(to_file)\n\n elif type(model) == Graph:\n # don't need to append number for names since all nodes labeled\n for input_node in model.input_config:\n graph.add_node(pydot.Node(input_node['name']))\n\n # intermediate and output nodes have input defined\n for layer_config in [model.node_config, model.output_config]:\n for node in layer_config:\n graph.add_node(pydot.Node(node['name']))\n # possible to have multiple 'inputs' vs 1 'input'\n if node['inputs']:\n for e in node['inputs']:\n graph.add_edge(pydot.Edge(e, node['name']))\n else:\n graph.add_edge(pydot.Edge(node['input'], node['name']))\n\n graph.write_png(to_file)\n", "path": "keras/utils/visualize_util.py"}], "after_files": [{"content": "import pydot\n# old pydot will not work with python3, must use one\n# that works with python3 such as pydot2 or pydot\nfrom keras.models import Sequential, Graph\n\ndef plot(model, to_file='model.png'):\n\n graph = pydot.Dot(graph_type='digraph')\n if type(model) == Sequential:\n previous_node = None\n written_nodes = []\n n = 1\n for node in model.get_config()['layers']:\n # append number in case layers have same name to differentiate\n if (node['name'] + str(n)) in written_nodes:\n n += 1\n current_node = pydot.Node(node['name'] + str(n))\n written_nodes.append(node['name'] + str(n))\n graph.add_node(current_node)\n if previous_node:\n graph.add_edge(pydot.Edge(previous_node, current_node))\n previous_node = current_node\n graph.write_png(to_file)\n\n elif type(model) == Graph:\n # don't need to append number for names since all nodes labeled\n for input_node in model.input_config:\n graph.add_node(pydot.Node(input_node['name']))\n\n # intermediate and output nodes have input defined\n for layer_config in [model.node_config, model.output_config]:\n for node in layer_config:\n graph.add_node(pydot.Node(node['name']))\n # possible to have multiple 'inputs' vs 1 'input'\n if node['inputs']:\n for e in node['inputs']:\n graph.add_edge(pydot.Edge(e, node['name']))\n else:\n graph.add_edge(pydot.Edge(node['input'], node['name']))\n\n graph.write_png(to_file)\n", "path": "keras/utils/visualize_util.py"}]} | 733 | 112 |
gh_patches_debug_25466 | rasdani/github-patches | git_diff | sublimelsp__LSP-1865 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unwanted splitting on dot of the setting keys
**Describe the bug**
In LSP-yaml, the user can specify schema mapping like so:
```
yaml.schemas: {
"https://json.schemastore.org/composer": "/*"
}
```
This doesn't work in ST because LSP splits the URL key on every dot, creating an object that the server doesn't understand.
**Expected behavior**
Should send the value of the `yaml.schemas` object without altering it
**Additional context**
I'm not sure how VSCode manages to handle it, but I would guess it takes the schema itself into consideration somehow. We could try to check that first.
--- END ISSUE ---
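For illustration, a minimal standalone sketch (no Sublime Text dependency) of the splitting behaviour described above. The `dotted_set` helper below is not part of the plugin; it only mimics the path handling of `DottedDict.set` (shown in the file below), which `DottedDict.update` ends up calling with the URL folded into a dotted path:

```python
from typing import Any, Dict

def dotted_set(d: Dict[str, Any], path: str, value: Any) -> None:
    # Same splitting strategy as DottedDict.set: every '.' opens a new nesting level.
    keys = path.split('.')
    current = d
    for key in keys[:-1]:
        current = current.setdefault(key, {})
    current[keys[-1]] = value

settings: Dict[str, Any] = {}
# update({"yaml.schemas": {"https://json.schemastore.org/composer": "/*"}}) recurses
# into the nested dict and calls set() with the URL appended to the path:
dotted_set(settings, "yaml.schemas.https://json.schemastore.org/composer", "/*")
print(settings)
# {'yaml': {'schemas': {'https://json': {'schemastore': {'org/composer': '/*'}}}}}
```

The server expects the original URL string as a single key under `yaml.schemas`, so the fragmented structure above is not understood.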
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/collections.py`
Content:
```
1 """
2 Module with additional collections.
3 """
4 from .typing import Optional, Dict, Any, Generator
5 from copy import deepcopy
6 import sublime
7
8
9 class DottedDict:
10
11 __slots__ = ('_d',)
12
13 def __init__(self, d: Optional[Dict[str, Any]] = None) -> None:
14 """
15 Construct a DottedDict, optionally from an existing dictionary.
16
17 :param d: An existing dictionary.
18 """
19 self._d = {} # type: Dict[str, Any]
20 if d is not None:
21 self.update(d)
22
23 @classmethod
24 def from_base_and_override(cls, base: "DottedDict", override: Optional[Dict[str, Any]]) -> "DottedDict":
25 result = DottedDict(base.copy())
26 if override:
27 result.update(override)
28 return result
29
30 def get(self, path: Optional[str] = None) -> Any:
31 """
32 Get a value from the dictionary.
33
34 :param path: The path, e.g. foo.bar.baz, or None.
35
36 :returns: The value stored at the path, or None if it doesn't exist.
37 Note that this cannot distinguish between None values and
38 paths that don't exist. If the path is None, returns the
39 entire dictionary.
40 """
41 if path is None:
42 return self._d
43 current = self._d # type: Any
44 keys = path.split('.')
45 for key in keys:
46 if isinstance(current, dict):
47 current = current.get(key)
48 else:
49 return None
50 return current
51
52 def walk(self, path: str) -> Generator[Any, None, None]:
53 current = self._d # type: Any
54 keys = path.split('.')
55 for key in keys:
56 if isinstance(current, dict):
57 current = current.get(key)
58 yield current
59 else:
60 yield None
61 return
62
63 def set(self, path: str, value: Any) -> None:
64 """
65 Set a value in the dictionary.
66
67 :param path: The path, e.g. foo.bar.baz
68 :param value: The value
69 """
70 current = self._d
71 keys = path.split('.')
72 for i in range(0, len(keys) - 1):
73 key = keys[i]
74 next_current = current.get(key)
75 if not isinstance(next_current, dict):
76 next_current = {}
77 current[key] = next_current
78 current = next_current
79 current[keys[-1]] = value
80
81 def remove(self, path: str) -> None:
82 """
83 Remove a key from the dictionary.
84
85 :param path: The path, e.g. foo.bar.baz
86 """
87 current = self._d
88 keys = path.split('.')
89 for i in range(0, len(keys) - 1):
90 key = keys[i]
91 next_current = current.get(key)
92 if not isinstance(next_current, dict):
93 return
94 current = next_current
95 current.pop(keys[-1], None)
96
97 def copy(self, path: Optional[str] = None) -> Any:
98 """
99 Get a copy of the value from the dictionary or copy of whole dictionary.
100
101 :param path: The path, e.g. foo.bar.baz, or None.
102
103 :returns: A copy of the value stored at the path, or None if it doesn't exist.
104 Note that this cannot distinguish between None values and
105 paths that don't exist. If the path is None, returns a copy of the
106 entire dictionary.
107 """
108 return deepcopy(self.get(path))
109
110 def __bool__(self) -> bool:
111 """
112 If this collection has at least one key-value pair, return True, else return False.
113 """
114 return bool(self._d)
115
116 def __contains__(self, path: str) -> bool:
117 value = self.get(path)
118 return value is not None and value is not False
119
120 def clear(self) -> None:
121 """
122 Remove all key-value pairs.
123 """
124 self._d.clear()
125
126 def assign(self, d: Dict[str, Any]) -> None:
127 """
128 Overwrites the old stored dictionary with a fresh new dictionary.
129
130 :param d: The new dictionary to store
131 """
132 self._d = d
133
134 def update(self, d: Dict[str, Any]) -> None:
135 """
136 Overwrite and/or add new key-value pairs to the collection.
137
138 :param d: The overriding dictionary. Can contain nested dictionaries.
139 """
140 for key, value in d.items():
141 if isinstance(value, dict):
142 self._update_recursive(value, key)
143 else:
144 self.set(key, value)
145
146 def get_resolved(self, variables: Dict[str, str]) -> Dict[str, Any]:
147 """
148 Resolve a DottedDict that may potentially contain template variables like $folder.
149
150 :param variables: The variables
151
152 :returns: A copy of the underlying dictionary, but with the variables replaced
153 """
154 return sublime.expand_variables(self._d, variables)
155
156 def _update_recursive(self, current: Dict[str, Any], prefix: str) -> None:
157 if not current:
158 return self.set(prefix, current)
159 for key, value in current.items():
160 path = "{}.{}".format(prefix, key)
161 if isinstance(value, dict):
162 self._update_recursive(value, path)
163 else:
164 self.set(path, value)
165
166 def __repr__(self) -> str:
167 return "{}({})".format(self.__class__.__name__, repr(self._d))
168
169 def __eq__(self, other: Any) -> bool:
170 if not isinstance(other, DottedDict):
171 return False
172 return self._d == other._d
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/core/collections.py b/plugin/core/collections.py
--- a/plugin/core/collections.py
+++ b/plugin/core/collections.py
@@ -135,13 +135,10 @@
"""
Overwrite and/or add new key-value pairs to the collection.
- :param d: The overriding dictionary. Can contain nested dictionaries.
+ :param d: The overriding dictionary. Keys must be in the new-style dotted format.
"""
for key, value in d.items():
- if isinstance(value, dict):
- self._update_recursive(value, key)
- else:
- self.set(key, value)
+ self.set(key, value)
def get_resolved(self, variables: Dict[str, str]) -> Dict[str, Any]:
"""
@@ -153,16 +150,6 @@
"""
return sublime.expand_variables(self._d, variables)
- def _update_recursive(self, current: Dict[str, Any], prefix: str) -> None:
- if not current:
- return self.set(prefix, current)
- for key, value in current.items():
- path = "{}.{}".format(prefix, key)
- if isinstance(value, dict):
- self._update_recursive(value, path)
- else:
- self.set(path, value)
-
def __repr__(self) -> str:
return "{}({})".format(self.__class__.__name__, repr(self._d))
| {"golden_diff": "diff --git a/plugin/core/collections.py b/plugin/core/collections.py\n--- a/plugin/core/collections.py\n+++ b/plugin/core/collections.py\n@@ -135,13 +135,10 @@\n \"\"\"\n Overwrite and/or add new key-value pairs to the collection.\n \n- :param d: The overriding dictionary. Can contain nested dictionaries.\n+ :param d: The overriding dictionary. Keys must be in the new-style dotted format.\n \"\"\"\n for key, value in d.items():\n- if isinstance(value, dict):\n- self._update_recursive(value, key)\n- else:\n- self.set(key, value)\n+ self.set(key, value)\n \n def get_resolved(self, variables: Dict[str, str]) -> Dict[str, Any]:\n \"\"\"\n@@ -153,16 +150,6 @@\n \"\"\"\n return sublime.expand_variables(self._d, variables)\n \n- def _update_recursive(self, current: Dict[str, Any], prefix: str) -> None:\n- if not current:\n- return self.set(prefix, current)\n- for key, value in current.items():\n- path = \"{}.{}\".format(prefix, key)\n- if isinstance(value, dict):\n- self._update_recursive(value, path)\n- else:\n- self.set(path, value)\n-\n def __repr__(self) -> str:\n return \"{}({})\".format(self.__class__.__name__, repr(self._d))\n", "issue": "Unwanted splitting on dot of the setting keys\n**Describe the bug**\r\n\r\nIn LSP-yaml, the user can specify schema mapping like so:\r\n\r\n```\r\nyaml.schemas: {\r\n \"https://json.schemastore.org/composer\": \"/*\"\r\n}\r\n```\r\n\r\nThis doesn't work in ST because LSP splits the URL key on every dot, creating an object that the server doesn't understand.\r\n\r\n**Expected behavior**\r\n\r\nShould send the value of the `yaml.schemas` object without altering it\r\n\r\n**Additional context**\r\n\r\nI'm not sure how VSCode manages to handle it but I would guess it is somewhat taking into consideration the schema itself. We could try to check that first.\n", "before_files": [{"content": "\"\"\"\nModule with additional collections.\n\"\"\"\nfrom .typing import Optional, Dict, Any, Generator\nfrom copy import deepcopy\nimport sublime\n\n\nclass DottedDict:\n\n __slots__ = ('_d',)\n\n def __init__(self, d: Optional[Dict[str, Any]] = None) -> None:\n \"\"\"\n Construct a DottedDict, optionally from an existing dictionary.\n\n :param d: An existing dictionary.\n \"\"\"\n self._d = {} # type: Dict[str, Any]\n if d is not None:\n self.update(d)\n\n @classmethod\n def from_base_and_override(cls, base: \"DottedDict\", override: Optional[Dict[str, Any]]) -> \"DottedDict\":\n result = DottedDict(base.copy())\n if override:\n result.update(override)\n return result\n\n def get(self, path: Optional[str] = None) -> Any:\n \"\"\"\n Get a value from the dictionary.\n\n :param path: The path, e.g. foo.bar.baz, or None.\n\n :returns: The value stored at the path, or None if it doesn't exist.\n Note that this cannot distinguish between None values and\n paths that don't exist. If the path is None, returns the\n entire dictionary.\n \"\"\"\n if path is None:\n return self._d\n current = self._d # type: Any\n keys = path.split('.')\n for key in keys:\n if isinstance(current, dict):\n current = current.get(key)\n else:\n return None\n return current\n\n def walk(self, path: str) -> Generator[Any, None, None]:\n current = self._d # type: Any\n keys = path.split('.')\n for key in keys:\n if isinstance(current, dict):\n current = current.get(key)\n yield current\n else:\n yield None\n return\n\n def set(self, path: str, value: Any) -> None:\n \"\"\"\n Set a value in the dictionary.\n\n :param path: The path, e.g. 
foo.bar.baz\n :param value: The value\n \"\"\"\n current = self._d\n keys = path.split('.')\n for i in range(0, len(keys) - 1):\n key = keys[i]\n next_current = current.get(key)\n if not isinstance(next_current, dict):\n next_current = {}\n current[key] = next_current\n current = next_current\n current[keys[-1]] = value\n\n def remove(self, path: str) -> None:\n \"\"\"\n Remove a key from the dictionary.\n\n :param path: The path, e.g. foo.bar.baz\n \"\"\"\n current = self._d\n keys = path.split('.')\n for i in range(0, len(keys) - 1):\n key = keys[i]\n next_current = current.get(key)\n if not isinstance(next_current, dict):\n return\n current = next_current\n current.pop(keys[-1], None)\n\n def copy(self, path: Optional[str] = None) -> Any:\n \"\"\"\n Get a copy of the value from the dictionary or copy of whole dictionary.\n\n :param path: The path, e.g. foo.bar.baz, or None.\n\n :returns: A copy of the value stored at the path, or None if it doesn't exist.\n Note that this cannot distinguish between None values and\n paths that don't exist. If the path is None, returns a copy of the\n entire dictionary.\n \"\"\"\n return deepcopy(self.get(path))\n\n def __bool__(self) -> bool:\n \"\"\"\n If this collection has at least one key-value pair, return True, else return False.\n \"\"\"\n return bool(self._d)\n\n def __contains__(self, path: str) -> bool:\n value = self.get(path)\n return value is not None and value is not False\n\n def clear(self) -> None:\n \"\"\"\n Remove all key-value pairs.\n \"\"\"\n self._d.clear()\n\n def assign(self, d: Dict[str, Any]) -> None:\n \"\"\"\n Overwrites the old stored dictionary with a fresh new dictionary.\n\n :param d: The new dictionary to store\n \"\"\"\n self._d = d\n\n def update(self, d: Dict[str, Any]) -> None:\n \"\"\"\n Overwrite and/or add new key-value pairs to the collection.\n\n :param d: The overriding dictionary. 
Can contain nested dictionaries.\n \"\"\"\n for key, value in d.items():\n if isinstance(value, dict):\n self._update_recursive(value, key)\n else:\n self.set(key, value)\n\n def get_resolved(self, variables: Dict[str, str]) -> Dict[str, Any]:\n \"\"\"\n Resolve a DottedDict that may potentially contain template variables like $folder.\n\n :param variables: The variables\n\n :returns: A copy of the underlying dictionary, but with the variables replaced\n \"\"\"\n return sublime.expand_variables(self._d, variables)\n\n def _update_recursive(self, current: Dict[str, Any], prefix: str) -> None:\n if not current:\n return self.set(prefix, current)\n for key, value in current.items():\n path = \"{}.{}\".format(prefix, key)\n if isinstance(value, dict):\n self._update_recursive(value, path)\n else:\n self.set(path, value)\n\n def __repr__(self) -> str:\n return \"{}({})\".format(self.__class__.__name__, repr(self._d))\n\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, DottedDict):\n return False\n return self._d == other._d\n", "path": "plugin/core/collections.py"}], "after_files": [{"content": "\"\"\"\nModule with additional collections.\n\"\"\"\nfrom .typing import Optional, Dict, Any, Generator\nfrom copy import deepcopy\nimport sublime\n\n\nclass DottedDict:\n\n __slots__ = ('_d',)\n\n def __init__(self, d: Optional[Dict[str, Any]] = None) -> None:\n \"\"\"\n Construct a DottedDict, optionally from an existing dictionary.\n\n :param d: An existing dictionary.\n \"\"\"\n self._d = {} # type: Dict[str, Any]\n if d is not None:\n self.update(d)\n\n @classmethod\n def from_base_and_override(cls, base: \"DottedDict\", override: Optional[Dict[str, Any]]) -> \"DottedDict\":\n result = DottedDict(base.copy())\n if override:\n result.update(override)\n return result\n\n def get(self, path: Optional[str] = None) -> Any:\n \"\"\"\n Get a value from the dictionary.\n\n :param path: The path, e.g. foo.bar.baz, or None.\n\n :returns: The value stored at the path, or None if it doesn't exist.\n Note that this cannot distinguish between None values and\n paths that don't exist. If the path is None, returns the\n entire dictionary.\n \"\"\"\n if path is None:\n return self._d\n current = self._d # type: Any\n keys = path.split('.')\n for key in keys:\n if isinstance(current, dict):\n current = current.get(key)\n else:\n return None\n return current\n\n def walk(self, path: str) -> Generator[Any, None, None]:\n current = self._d # type: Any\n keys = path.split('.')\n for key in keys:\n if isinstance(current, dict):\n current = current.get(key)\n yield current\n else:\n yield None\n return\n\n def set(self, path: str, value: Any) -> None:\n \"\"\"\n Set a value in the dictionary.\n\n :param path: The path, e.g. foo.bar.baz\n :param value: The value\n \"\"\"\n current = self._d\n keys = path.split('.')\n for i in range(0, len(keys) - 1):\n key = keys[i]\n next_current = current.get(key)\n if not isinstance(next_current, dict):\n next_current = {}\n current[key] = next_current\n current = next_current\n current[keys[-1]] = value\n\n def remove(self, path: str) -> None:\n \"\"\"\n Remove a key from the dictionary.\n\n :param path: The path, e.g. 
foo.bar.baz\n \"\"\"\n current = self._d\n keys = path.split('.')\n for i in range(0, len(keys) - 1):\n key = keys[i]\n next_current = current.get(key)\n if not isinstance(next_current, dict):\n return\n current = next_current\n current.pop(keys[-1], None)\n\n def copy(self, path: Optional[str] = None) -> Any:\n \"\"\"\n Get a copy of the value from the dictionary or copy of whole dictionary.\n\n :param path: The path, e.g. foo.bar.baz, or None.\n\n :returns: A copy of the value stored at the path, or None if it doesn't exist.\n Note that this cannot distinguish between None values and\n paths that don't exist. If the path is None, returns a copy of the\n entire dictionary.\n \"\"\"\n return deepcopy(self.get(path))\n\n def __bool__(self) -> bool:\n \"\"\"\n If this collection has at least one key-value pair, return True, else return False.\n \"\"\"\n return bool(self._d)\n\n def __contains__(self, path: str) -> bool:\n value = self.get(path)\n return value is not None and value is not False\n\n def clear(self) -> None:\n \"\"\"\n Remove all key-value pairs.\n \"\"\"\n self._d.clear()\n\n def assign(self, d: Dict[str, Any]) -> None:\n \"\"\"\n Overwrites the old stored dictionary with a fresh new dictionary.\n\n :param d: The new dictionary to store\n \"\"\"\n self._d = d\n\n def update(self, d: Dict[str, Any]) -> None:\n \"\"\"\n Overwrite and/or add new key-value pairs to the collection.\n\n :param d: The overriding dictionary. Keys must be in the new-style dotted format.\n \"\"\"\n for key, value in d.items():\n self.set(key, value)\n\n def get_resolved(self, variables: Dict[str, str]) -> Dict[str, Any]:\n \"\"\"\n Resolve a DottedDict that may potentially contain template variables like $folder.\n\n :param variables: The variables\n\n :returns: A copy of the underlying dictionary, but with the variables replaced\n \"\"\"\n return sublime.expand_variables(self._d, variables)\n\n def __repr__(self) -> str:\n return \"{}({})\".format(self.__class__.__name__, repr(self._d))\n\n def __eq__(self, other: Any) -> bool:\n if not isinstance(other, DottedDict):\n return False\n return self._d == other._d\n", "path": "plugin/core/collections.py"}]} | 2,074 | 323 |
gh_patches_debug_6143 | rasdani/github-patches | git_diff | getsentry__sentry-python-2371 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError 'https,http' in async handling 1.31.0
### How do you use Sentry?
Self-hosted/on-premise
### Version
1.31.0
### Steps to Reproduce
I'm running sentry and uvicorn. We also have django-channels. On upgrading to 1.31.0 I'm getting the following:
```
Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/channels/routing.py", line 62, in __call__
return await application(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/django/asgi.py", line 40, in sentry_patched_asgi_handler
return await middleware(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py", line 146, in _run_asgi3
return await self._run_app(scope, receive, send, asgi_version=3)
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py", line 178, in _run_app
) = self._get_transaction_name_and_source(
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py", line 293, in _get_transaction_name_and_source
name = _get_url(asgi_scope, "http" if ty == "http" else "ws", host=None)
File "/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/_asgi_common.py", line 46, in _get_url
default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
KeyError: 'https,http'
```
I did not research further but it seems to be caused by the recent overhaul of the code (#2335)
### Expected Result
No exception
### Actual Result
.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/_asgi_common.py`
Content:
```
1 import urllib
2
3 from sentry_sdk.hub import _should_send_default_pii
4 from sentry_sdk.integrations._wsgi_common import _filter_headers
5 from sentry_sdk._types import TYPE_CHECKING
6
7 if TYPE_CHECKING:
8 from typing import Any
9 from typing import Dict
10 from typing import Optional
11 from typing_extensions import Literal
12
13
14 def _get_headers(asgi_scope):
15 # type: (Any) -> Dict[str, str]
16 """
17 Extract headers from the ASGI scope, in the format that the Sentry protocol expects.
18 """
19 headers = {} # type: Dict[str, str]
20 for raw_key, raw_value in asgi_scope["headers"]:
21 key = raw_key.decode("latin-1")
22 value = raw_value.decode("latin-1")
23 if key in headers:
24 headers[key] = headers[key] + ", " + value
25 else:
26 headers[key] = value
27
28 return headers
29
30
31 def _get_url(asgi_scope, default_scheme, host):
32 # type: (Dict[str, Any], Literal["ws", "http"], Optional[str]) -> str
33 """
34 Extract URL from the ASGI scope, without also including the querystring.
35 """
36 scheme = asgi_scope.get("scheme", default_scheme)
37
38 server = asgi_scope.get("server", None)
39 path = asgi_scope.get("root_path", "") + asgi_scope.get("path", "")
40
41 if host:
42 return "%s://%s%s" % (scheme, host, path)
43
44 if server is not None:
45 host, port = server
46 default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
47 if port != default_port:
48 return "%s://%s:%s%s" % (scheme, host, port, path)
49 return "%s://%s%s" % (scheme, host, path)
50 return path
51
52
53 def _get_query(asgi_scope):
54 # type: (Any) -> Any
55 """
56 Extract querystring from the ASGI scope, in the format that the Sentry protocol expects.
57 """
58 qs = asgi_scope.get("query_string")
59 if not qs:
60 return None
61 return urllib.parse.unquote(qs.decode("latin-1"))
62
63
64 def _get_ip(asgi_scope):
65 # type: (Any) -> str
66 """
67 Extract IP Address from the ASGI scope based on request headers with fallback to scope client.
68 """
69 headers = _get_headers(asgi_scope)
70 try:
71 return headers["x-forwarded-for"].split(",")[0].strip()
72 except (KeyError, IndexError):
73 pass
74
75 try:
76 return headers["x-real-ip"]
77 except KeyError:
78 pass
79
80 return asgi_scope.get("client")[0]
81
82
83 def _get_request_data(asgi_scope):
84 # type: (Any) -> Dict[str, Any]
85 """
86 Returns data related to the HTTP request from the ASGI scope.
87 """
88 request_data = {} # type: Dict[str, Any]
89 ty = asgi_scope["type"]
90 if ty in ("http", "websocket"):
91 request_data["method"] = asgi_scope.get("method")
92
93 request_data["headers"] = headers = _filter_headers(_get_headers(asgi_scope))
94 request_data["query_string"] = _get_query(asgi_scope)
95
96 request_data["url"] = _get_url(
97 asgi_scope, "http" if ty == "http" else "ws", headers.get("host")
98 )
99
100 client = asgi_scope.get("client")
101 if client and _should_send_default_pii():
102 request_data["env"] = {"REMOTE_ADDR": _get_ip(asgi_scope)}
103
104 return request_data
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/_asgi_common.py b/sentry_sdk/integrations/_asgi_common.py
--- a/sentry_sdk/integrations/_asgi_common.py
+++ b/sentry_sdk/integrations/_asgi_common.py
@@ -43,7 +43,7 @@
if server is not None:
host, port = server
- default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
+ default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}.get(scheme)
if port != default_port:
return "%s://%s:%s%s" % (scheme, host, port, path)
return "%s://%s%s" % (scheme, host, path)
| {"golden_diff": "diff --git a/sentry_sdk/integrations/_asgi_common.py b/sentry_sdk/integrations/_asgi_common.py\n--- a/sentry_sdk/integrations/_asgi_common.py\n+++ b/sentry_sdk/integrations/_asgi_common.py\n@@ -43,7 +43,7 @@\n \n if server is not None:\n host, port = server\n- default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n+ default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}.get(scheme)\n if port != default_port:\n return \"%s://%s:%s%s\" % (scheme, host, port, path)\n return \"%s://%s%s\" % (scheme, host, path)\n", "issue": "KeyError 'https,http' in async handling 1.31.0\n### How do you use Sentry?\n\nSelf-hosted/on-premise\n\n### Version\n\n1.31.0\n\n### Steps to Reproduce\n\nI'm running sentry and uvicorn. We also have django-channels. On upgrading to 1.31.0 I'm getting the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 426, in run_asgi\r\nresult = await app( # type: ignore[func-returns-value]\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\nreturn await self.app(scope, receive, send)\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/channels/routing.py\", line 62, in __call__\r\nreturn await application(scope, receive, send)\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/django/asgi.py\", line 40, in sentry_patched_asgi_handler\r\nreturn await middleware(scope, receive, send)\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py\", line 146, in _run_asgi3\r\nreturn await self._run_app(scope, receive, send, asgi_version=3)\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py\", line 178, in _run_app\r\n) = self._get_transaction_name_and_source(\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/asgi.py\", line 293, in _get_transaction_name_and_source\r\nname = _get_url(asgi_scope, \"http\" if ty == \"http\" else \"ws\", host=None)\r\nFile \"/home/vcap/deps/0/python/lib/python3.9/site-packages/sentry_sdk/integrations/_asgi_common.py\", line 46, in _get_url\r\ndefault_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\r\nKeyError: 'https,http'\r\n```\r\n\r\nI did not research further but it seems to be caused by the recent overhaul of the code (#2335)\n\n### Expected Result\n\nNo exception\n\n### Actual Result\n\n.\n", "before_files": [{"content": "import urllib\n\nfrom sentry_sdk.hub import _should_send_default_pii\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Dict\n from typing import Optional\n from typing_extensions import Literal\n\n\ndef _get_headers(asgi_scope):\n # type: (Any) -> Dict[str, str]\n \"\"\"\n Extract headers from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n headers = {} # type: Dict[str, str]\n for raw_key, raw_value in asgi_scope[\"headers\"]:\n key = raw_key.decode(\"latin-1\")\n value = raw_value.decode(\"latin-1\")\n if key in headers:\n headers[key] = headers[key] + \", \" + value\n else:\n headers[key] = value\n\n return headers\n\n\ndef _get_url(asgi_scope, default_scheme, host):\n # type: (Dict[str, Any], Literal[\"ws\", \"http\"], Optional[str]) -> str\n 
\"\"\"\n Extract URL from the ASGI scope, without also including the querystring.\n \"\"\"\n scheme = asgi_scope.get(\"scheme\", default_scheme)\n\n server = asgi_scope.get(\"server\", None)\n path = asgi_scope.get(\"root_path\", \"\") + asgi_scope.get(\"path\", \"\")\n\n if host:\n return \"%s://%s%s\" % (scheme, host, path)\n\n if server is not None:\n host, port = server\n default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n if port != default_port:\n return \"%s://%s:%s%s\" % (scheme, host, port, path)\n return \"%s://%s%s\" % (scheme, host, path)\n return path\n\n\ndef _get_query(asgi_scope):\n # type: (Any) -> Any\n \"\"\"\n Extract querystring from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n qs = asgi_scope.get(\"query_string\")\n if not qs:\n return None\n return urllib.parse.unquote(qs.decode(\"latin-1\"))\n\n\ndef _get_ip(asgi_scope):\n # type: (Any) -> str\n \"\"\"\n Extract IP Address from the ASGI scope based on request headers with fallback to scope client.\n \"\"\"\n headers = _get_headers(asgi_scope)\n try:\n return headers[\"x-forwarded-for\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n pass\n\n try:\n return headers[\"x-real-ip\"]\n except KeyError:\n pass\n\n return asgi_scope.get(\"client\")[0]\n\n\ndef _get_request_data(asgi_scope):\n # type: (Any) -> Dict[str, Any]\n \"\"\"\n Returns data related to the HTTP request from the ASGI scope.\n \"\"\"\n request_data = {} # type: Dict[str, Any]\n ty = asgi_scope[\"type\"]\n if ty in (\"http\", \"websocket\"):\n request_data[\"method\"] = asgi_scope.get(\"method\")\n\n request_data[\"headers\"] = headers = _filter_headers(_get_headers(asgi_scope))\n request_data[\"query_string\"] = _get_query(asgi_scope)\n\n request_data[\"url\"] = _get_url(\n asgi_scope, \"http\" if ty == \"http\" else \"ws\", headers.get(\"host\")\n )\n\n client = asgi_scope.get(\"client\")\n if client and _should_send_default_pii():\n request_data[\"env\"] = {\"REMOTE_ADDR\": _get_ip(asgi_scope)}\n\n return request_data\n", "path": "sentry_sdk/integrations/_asgi_common.py"}], "after_files": [{"content": "import urllib\n\nfrom sentry_sdk.hub import _should_send_default_pii\nfrom sentry_sdk.integrations._wsgi_common import _filter_headers\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from typing import Any\n from typing import Dict\n from typing import Optional\n from typing_extensions import Literal\n\n\ndef _get_headers(asgi_scope):\n # type: (Any) -> Dict[str, str]\n \"\"\"\n Extract headers from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n headers = {} # type: Dict[str, str]\n for raw_key, raw_value in asgi_scope[\"headers\"]:\n key = raw_key.decode(\"latin-1\")\n value = raw_value.decode(\"latin-1\")\n if key in headers:\n headers[key] = headers[key] + \", \" + value\n else:\n headers[key] = value\n\n return headers\n\n\ndef _get_url(asgi_scope, default_scheme, host):\n # type: (Dict[str, Any], Literal[\"ws\", \"http\"], Optional[str]) -> str\n \"\"\"\n Extract URL from the ASGI scope, without also including the querystring.\n \"\"\"\n scheme = asgi_scope.get(\"scheme\", default_scheme)\n\n server = asgi_scope.get(\"server\", None)\n path = asgi_scope.get(\"root_path\", \"\") + asgi_scope.get(\"path\", \"\")\n\n if host:\n return \"%s://%s%s\" % (scheme, host, path)\n\n if server is not None:\n host, port = server\n default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}.get(scheme)\n if port != default_port:\n 
return \"%s://%s:%s%s\" % (scheme, host, port, path)\n return \"%s://%s%s\" % (scheme, host, path)\n return path\n\n\ndef _get_query(asgi_scope):\n # type: (Any) -> Any\n \"\"\"\n Extract querystring from the ASGI scope, in the format that the Sentry protocol expects.\n \"\"\"\n qs = asgi_scope.get(\"query_string\")\n if not qs:\n return None\n return urllib.parse.unquote(qs.decode(\"latin-1\"))\n\n\ndef _get_ip(asgi_scope):\n # type: (Any) -> str\n \"\"\"\n Extract IP Address from the ASGI scope based on request headers with fallback to scope client.\n \"\"\"\n headers = _get_headers(asgi_scope)\n try:\n return headers[\"x-forwarded-for\"].split(\",\")[0].strip()\n except (KeyError, IndexError):\n pass\n\n try:\n return headers[\"x-real-ip\"]\n except KeyError:\n pass\n\n return asgi_scope.get(\"client\")[0]\n\n\ndef _get_request_data(asgi_scope):\n # type: (Any) -> Dict[str, Any]\n \"\"\"\n Returns data related to the HTTP request from the ASGI scope.\n \"\"\"\n request_data = {} # type: Dict[str, Any]\n ty = asgi_scope[\"type\"]\n if ty in (\"http\", \"websocket\"):\n request_data[\"method\"] = asgi_scope.get(\"method\")\n\n request_data[\"headers\"] = headers = _filter_headers(_get_headers(asgi_scope))\n request_data[\"query_string\"] = _get_query(asgi_scope)\n\n request_data[\"url\"] = _get_url(\n asgi_scope, \"http\" if ty == \"http\" else \"ws\", headers.get(\"host\")\n )\n\n client = asgi_scope.get(\"client\")\n if client and _should_send_default_pii():\n request_data[\"env\"] = {\"REMOTE_ADDR\": _get_ip(asgi_scope)}\n\n return request_data\n", "path": "sentry_sdk/integrations/_asgi_common.py"}]} | 1,878 | 204 |
gh_patches_debug_19457 | rasdani/github-patches | git_diff | yt-project__yt-3955 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: yt crashes on startup if the config file is invalid toml
@cphyc originally noted this problem in #3831
```bash
echo "! this is invalid" > yt.toml
python -c "import yt"
```
fails with the backtrace
```backtrace
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/XXX/Documents/prog/yt/yt/__init__.py", line 13, in <module>
import yt.utilities.physical_constants as physical_constants
File "/home/XXX/Documents/prog/yt/yt/utilities/physical_constants.py", line 3, in <module>
from yt.units.yt_array import YTQuantity
File "/home/XXX/Documents/prog/yt/yt/units/yt_array.py", line 3, in <module>
from yt.funcs import array_like_field # NOQA: F401
File "/home/XXX/Documents/prog/yt/yt/funcs.py", line 33, in <module>
from yt.utilities.logger import ytLogger as mylog
File "/home/XXX/Documents/prog/yt/yt/utilities/logger.py", line 4, in <module>
from yt.config import ytcfg
File "/home/XXX/Documents/prog/yt/yt/config.py", line 215, in <module>
ytcfg.read(_local_config_file)
File "/home/XXX/Documents/prog/yt/yt/config.py", line 154, in read
data = tomllib.load(fh)
File "/home/XXX/anaconda3/envs/py39/lib/python3.9/site-packages/tomli/_parser.py", line 66, in load
return loads(s, parse_float=parse_float)
File "/home/XXX/anaconda3/envs/py39/lib/python3.9/site-packages/tomli/_parser.py", line 116, in loads
raise suffixed_err(src, pos, "Invalid statement")
tomli.TOMLDecodeError: Invalid statement (at line 1, column 1)
```
_Originally posted by @cphyc in https://github.com/yt-project/yt/pull/3831#pullrequestreview-898674860_
We could embed this step in a `try/except` block and raise a warning instead of an error (still keeping the critical information from the original error).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt/utilities/configure.py`
Content:
```
1 import os
2 from typing import Callable, List
3
4 # TODO: import tomllib from the standard library instead in Python >= 3.11
5 import tomli as tomllib
6 import tomli_w
7 from more_itertools import always_iterable
8
9 from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode
10
11 configuration_callbacks: List[Callable[["YTConfig"], None]] = []
12
13
14 def config_dir():
15 config_root = os.environ.get(
16 "XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config")
17 )
18 conf_dir = os.path.join(config_root, "yt")
19 return conf_dir
20
21
22 class YTConfig:
23 def __init__(self, defaults=None):
24 if defaults is None:
25 defaults = {}
26 self.config_root = ConfigNode(None)
27
28 def get(self, section, *keys, callback=None):
29 node_or_leaf = self.config_root.get(section, *keys)
30 if isinstance(node_or_leaf, ConfigLeaf):
31 if callback is not None:
32 return callback(node_or_leaf)
33 return node_or_leaf.value
34 return node_or_leaf
35
36 def get_most_specific(self, section, *keys, **kwargs):
37 use_fallback = "fallback" in kwargs
38 fallback = kwargs.pop("fallback", None)
39 try:
40 return self.config_root.get_deepest_leaf(section, *keys)
41 except KeyError as err:
42 if use_fallback:
43 return fallback
44 else:
45 raise err
46
47 def update(self, new_values, metadata=None):
48 if metadata is None:
49 metadata = {}
50 self.config_root.update(new_values, metadata)
51
52 def has_section(self, section):
53 try:
54 self.config_root.get_child(section)
55 return True
56 except KeyError:
57 return False
58
59 def add_section(self, section):
60 self.config_root.add_child(section)
61
62 def remove_section(self, section):
63 if self.has_section(section):
64 self.config_root.remove_child(section)
65 return True
66 else:
67 return False
68
69 def set(self, *args, metadata=None):
70 section, *keys, value = args
71 if metadata is None:
72 metadata = {"source": "runtime"}
73 self.config_root.upsert_from_list(
74 [section] + list(keys), value, extra_data=metadata
75 )
76
77 def remove(self, *args):
78 self.config_root.pop_leaf(args)
79
80 def read(self, file_names):
81 file_names_read = []
82 for fname in always_iterable(file_names):
83 if not os.path.exists(fname):
84 continue
85 metadata = {"source": f"file: {fname}"}
86 with open(fname, "rb") as fh:
87 data = tomllib.load(fh)
88 self.update(data, metadata=metadata)
89 file_names_read.append(fname)
90
91 return file_names_read
92
93 def write(self, file_handler):
94 value = self.config_root.as_dict()
95 config_as_str = tomli_w.dumps(value)
96
97 try:
98 # Assuming file_handler has a write attribute
99 file_handler.write(config_as_str)
100 except AttributeError:
101 # Otherwise we expect a path to a file
102 with open(file_handler, mode="w") as fh:
103 fh.write(config_as_str)
104
105 @staticmethod
106 def get_global_config_file():
107 return os.path.join(config_dir(), "yt.toml")
108
109 @staticmethod
110 def get_local_config_file():
111 return os.path.join(os.path.abspath(os.curdir), "yt.toml")
112
113 def __setitem__(self, args, value):
114 section, *keys = always_iterable(args)
115 self.set(section, *keys, value, metadata=None)
116
117 def __getitem__(self, key):
118 section, *keys = always_iterable(key)
119 return self.get(section, *keys)
120
121 def __contains__(self, item):
122 return item in self.config_root
123
124 # Add support for IPython rich display
125 # see https://ipython.readthedocs.io/en/stable/config/integrating.html
126 def _repr_json_(self):
127 return self.config_root._repr_json_()
128
129
130 CONFIG = YTConfig()
131
132
133 def _cast_bool_helper(value):
134 if value == "True":
135 return True
136 elif value == "False":
137 return False
138 else:
139 raise ValueError("Cannot safely cast to bool")
140
141
142 def _expand_all(s):
143 return os.path.expandvars(os.path.expanduser(s))
144
145
146 def _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):
147 for t in types:
148 try:
149 retval = t(value)
150 return retval
151 except ValueError:
152 pass
153
154
155 def get_config(section, option):
156 *option_path, option_name = option.split(".")
157 return CONFIG.get(section, *option_path, option_name)
158
159
160 def set_config(section, option, value, config_file):
161 if not CONFIG.has_section(section):
162 CONFIG.add_section(section)
163
164 option_path = option.split(".")
165 CONFIG.set(section, *option_path, _cast_value_helper(value))
166 write_config(config_file)
167
168
169 def write_config(config_file):
170 CONFIG.write(config_file)
171
172
173 def rm_config(section, option, config_file):
174 option_path = option.split(".")
175 CONFIG.remove(section, *option_path)
176 write_config(config_file)
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/yt/utilities/configure.py b/yt/utilities/configure.py
--- a/yt/utilities/configure.py
+++ b/yt/utilities/configure.py
@@ -1,4 +1,5 @@
import os
+import warnings
from typing import Callable, List
# TODO: import tomllib from the standard library instead in Python >= 3.11
@@ -83,10 +84,16 @@
if not os.path.exists(fname):
continue
metadata = {"source": f"file: {fname}"}
- with open(fname, "rb") as fh:
- data = tomllib.load(fh)
- self.update(data, metadata=metadata)
- file_names_read.append(fname)
+ try:
+ with open(fname, "rb") as fh:
+ data = tomllib.load(fh)
+ except tomllib.TOMLDecodeError as exc:
+ warnings.warn(
+ f"Could not load configuration file {fname} (invalid TOML: {exc})"
+ )
+ else:
+ self.update(data, metadata=metadata)
+ file_names_read.append(fname)
return file_names_read
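For reference, a standalone sketch of the warn-and-skip pattern introduced above, reusing the issue's reproduction file. It assumes the `tomli` package is installed; on Python 3.11+ the stdlib `tomllib` exposes the same `TOMLDecodeError`.

```python
import warnings
import tomli as tomllib  # stdlib tomllib on Python 3.11+ behaves the same here

def read_config(fname):
    try:
        with open(fname, "rb") as fh:
            return tomllib.load(fh)
    except tomllib.TOMLDecodeError as exc:
        warnings.warn(
            "Could not load configuration file %s (invalid TOML: %s)" % (fname, exc)
        )
        return None

with open("yt.toml", "w") as fh:
    fh.write("! this is invalid\n")

print(read_config("yt.toml"))  # emits a UserWarning and prints None instead of raising
```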
| {"golden_diff": "diff --git a/yt/utilities/configure.py b/yt/utilities/configure.py\n--- a/yt/utilities/configure.py\n+++ b/yt/utilities/configure.py\n@@ -1,4 +1,5 @@\n import os\n+import warnings\n from typing import Callable, List\n \n # TODO: import tomllib from the standard library instead in Python >= 3.11\n@@ -83,10 +84,16 @@\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n- with open(fname, \"rb\") as fh:\n- data = tomllib.load(fh)\n- self.update(data, metadata=metadata)\n- file_names_read.append(fname)\n+ try:\n+ with open(fname, \"rb\") as fh:\n+ data = tomllib.load(fh)\n+ except tomllib.TOMLDecodeError as exc:\n+ warnings.warn(\n+ f\"Could not load configuration file {fname} (invalid TOML: {exc})\"\n+ )\n+ else:\n+ self.update(data, metadata=metadata)\n+ file_names_read.append(fname)\n \n return file_names_read\n", "issue": "BUG: yt crashes on startup if the config file is invalid toml\n@cphyc originally noted this problem in #3831\r\n```bash\r\necho \"! this is invalid\" > yt.toml \r\npython -c \"import yt\"\r\n```\r\nfails with the backtrace\r\n```backtrace\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/XXX/Documents/prog/yt/yt/__init__.py\", line 13, in <module>\r\n import yt.utilities.physical_constants as physical_constants\r\n File \"/home/XXX/Documents/prog/yt/yt/utilities/physical_constants.py\", line 3, in <module>\r\n from yt.units.yt_array import YTQuantity\r\n File \"/home/XXX/Documents/prog/yt/yt/units/yt_array.py\", line 3, in <module>\r\n from yt.funcs import array_like_field # NOQA: F401\r\n File \"/home/XXX/Documents/prog/yt/yt/funcs.py\", line 33, in <module>\r\n from yt.utilities.logger import ytLogger as mylog\r\n File \"/home/XXX/Documents/prog/yt/yt/utilities/logger.py\", line 4, in <module>\r\n from yt.config import ytcfg\r\n File \"/home/XXX/Documents/prog/yt/yt/config.py\", line 215, in <module>\r\n ytcfg.read(_local_config_file)\r\n File \"/home/XXX/Documents/prog/yt/yt/config.py\", line 154, in read\r\n data = tomllib.load(fh)\r\n File \"/home/XXX/anaconda3/envs/py39/lib/python3.9/site-packages/tomli/_parser.py\", line 66, in load\r\n return loads(s, parse_float=parse_float)\r\n File \"/home/XXX/anaconda3/envs/py39/lib/python3.9/site-packages/tomli/_parser.py\", line 116, in loads\r\n raise suffixed_err(src, pos, \"Invalid statement\")\r\ntomli.TOMLDecodeError: Invalid statement (at line 1, column 1)\r\n```\r\n\r\n_Originally posted by @cphyc in https://github.com/yt-project/yt/pull/3831#pullrequestreview-898674860_\r\n\r\nWe could embed this step in a `try/except` block and raise a warning instead of an error (still keeping the critical information from the original error).\n", "before_files": [{"content": "import os\nfrom typing import Callable, List\n\n# TODO: import tomllib from the standard library instead in Python >= 3.11\nimport tomli as tomllib\nimport tomli_w\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nconfiguration_callbacks: List[Callable[[\"YTConfig\"], None]] = []\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n return conf_dir\n\n\nclass YTConfig:\n def __init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = 
self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n with open(fname, \"rb\") as fh:\n data = tomllib.load(fh)\n self.update(data, metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = tomli_w.dumps(value)\n\n try:\n # Assuming file_handler has a write attribute\n file_handler.write(config_as_str)\n except AttributeError:\n # Otherwise we expect a path to a file\n with open(file_handler, mode=\"w\") as fh:\n fh.write(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def __setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, *keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\nCONFIG = YTConfig()\n\n\ndef _cast_bool_helper(value):\n if value == \"True\":\n return True\n elif value == \"False\":\n return False\n else:\n raise ValueError(\"Cannot safely cast to bool\")\n\n\ndef _expand_all(s):\n return os.path.expandvars(os.path.expanduser(s))\n\n\ndef _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):\n for t in types:\n try:\n retval = t(value)\n return retval\n except ValueError:\n pass\n\n\ndef get_config(section, option):\n *option_path, option_name = option.split(\".\")\n return CONFIG.get(section, *option_path, option_name)\n\n\ndef set_config(section, option, value, config_file):\n if not CONFIG.has_section(section):\n CONFIG.add_section(section)\n\n option_path = option.split(\".\")\n CONFIG.set(section, *option_path, _cast_value_helper(value))\n write_config(config_file)\n\n\ndef write_config(config_file):\n 
CONFIG.write(config_file)\n\n\ndef rm_config(section, option, config_file):\n option_path = option.split(\".\")\n CONFIG.remove(section, *option_path)\n write_config(config_file)\n", "path": "yt/utilities/configure.py"}], "after_files": [{"content": "import os\nimport warnings\nfrom typing import Callable, List\n\n# TODO: import tomllib from the standard library instead in Python >= 3.11\nimport tomli as tomllib\nimport tomli_w\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nconfiguration_callbacks: List[Callable[[\"YTConfig\"], None]] = []\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n return conf_dir\n\n\nclass YTConfig:\n def __init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n try:\n with open(fname, \"rb\") as fh:\n data = tomllib.load(fh)\n except tomllib.TOMLDecodeError as exc:\n warnings.warn(\n f\"Could not load configuration file {fname} (invalid TOML: {exc})\"\n )\n else:\n self.update(data, metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = tomli_w.dumps(value)\n\n try:\n # Assuming file_handler has a write attribute\n file_handler.write(config_as_str)\n except AttributeError:\n # Otherwise we expect a path to a file\n with open(file_handler, mode=\"w\") as fh:\n fh.write(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def __setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, 
*keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\nCONFIG = YTConfig()\n\n\ndef _cast_bool_helper(value):\n if value == \"True\":\n return True\n elif value == \"False\":\n return False\n else:\n raise ValueError(\"Cannot safely cast to bool\")\n\n\ndef _expand_all(s):\n return os.path.expandvars(os.path.expanduser(s))\n\n\ndef _cast_value_helper(value, types=(_cast_bool_helper, int, float, _expand_all)):\n for t in types:\n try:\n retval = t(value)\n return retval\n except ValueError:\n pass\n\n\ndef get_config(section, option):\n *option_path, option_name = option.split(\".\")\n return CONFIG.get(section, *option_path, option_name)\n\n\ndef set_config(section, option, value, config_file):\n if not CONFIG.has_section(section):\n CONFIG.add_section(section)\n\n option_path = option.split(\".\")\n CONFIG.set(section, *option_path, _cast_value_helper(value))\n write_config(config_file)\n\n\ndef write_config(config_file):\n CONFIG.write(config_file)\n\n\ndef rm_config(section, option, config_file):\n option_path = option.split(\".\")\n CONFIG.remove(section, *option_path)\n write_config(config_file)\n", "path": "yt/utilities/configure.py"}]} | 2,360 | 257 |
gh_patches_debug_13968 | rasdani/github-patches | git_diff | Kinto__kinto-367 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
kinto init should install postgresql dependencies
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/__main__.py`
Content:
```
1 from __future__ import print_function
2 import argparse
3 import os
4 import sys
5 from six.moves import input
6 from cliquet.scripts import cliquet
7 from pyramid.scripts import pserve
8 from pyramid.paster import bootstrap
9
10 from kinto.config import init
11
12 CONFIG_FILE = 'config/kinto.ini'
13
14
15 def main(args=None):
16 """The main routine."""
17 if args is None:
18 args = sys.argv[1:]
19
20 parser = argparse.ArgumentParser(description="Kinto commands")
21 parser.add_argument('--ini',
22 help='Application configuration file',
23 dest='ini_file',
24 required=False,
25 default=CONFIG_FILE)
26 parser.add_argument('--backend',
27 help='Specify backend',
28 dest='backend',
29 required=False,
30 default=None)
31
32 subparsers = parser.add_subparsers(title='subcommands',
33 description='valid subcommands',
34 help='init/start/migrate')
35
36 parser_init = subparsers.add_parser('init')
37 parser_init.set_defaults(which='init')
38
39 parser_migrate = subparsers.add_parser('migrate')
40 parser_migrate.set_defaults(which='migrate')
41
42 parser_start = subparsers.add_parser('start')
43 parser_start.set_defaults(which='start')
44
45 args = vars(parser.parse_args())
46 config_file = args['ini_file']
47
48 if args['which'] == 'init':
49 if os.path.exists(config_file):
50 print("%s already exist." % config_file, file=sys.stderr)
51 sys.exit(1)
52
53 backend = args['backend']
54 if not backend:
55 while True:
56 prompt = ("Which backend to use? "
57 "(1 - postgresql, 2 - redis, default - memory) ")
58 answer = input(prompt).strip()
59 try:
60 backends = {"1": "postgresql", "2": "redis", "": "memory"}
61 backend = backends[answer]
62 break
63 except KeyError:
64 pass
65
66 init(config_file, backend)
67
68 elif args['which'] == 'migrate':
69 env = bootstrap(config_file)
70 cliquet.init_schema(env)
71
72 elif args['which'] == 'start':
73 pserve_argv = ['pserve', config_file, '--reload']
74 pserve.main(pserve_argv)
75
76
77 if __name__ == "__main__":
78 main()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/__main__.py b/kinto/__main__.py
--- a/kinto/__main__.py
+++ b/kinto/__main__.py
@@ -2,6 +2,8 @@
import argparse
import os
import sys
+
+import pip
from six.moves import input
from cliquet.scripts import cliquet
from pyramid.scripts import pserve
@@ -65,6 +67,13 @@
init(config_file, backend)
+ # Install postgresql libraries if necessary
+ if backend == "postgresql":
+ try:
+ import psycopg2 # NOQA
+ except ImportError:
+ pip.main(['install', "cliquet[postgresql]"])
+
elif args['which'] == 'migrate':
env = bootstrap(config_file)
cliquet.init_schema(env)
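A hedged sketch of the same probe-then-install idea: the patch above calls `pip.main([...])` in-process, which only exists on pip releases before version 10, so the standalone version below shells out to `python -m pip`, the equivalent that still works on current pip.

```python
import subprocess
import sys

def ensure_postgresql_deps():
    # Same idea as the patch: only pull in the optional backend when it is missing.
    try:
        import psycopg2  # noqa: F401
    except ImportError:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "cliquet[postgresql]"]
        )

ensure_postgresql_deps()
```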
| {"golden_diff": "diff --git a/kinto/__main__.py b/kinto/__main__.py\n--- a/kinto/__main__.py\n+++ b/kinto/__main__.py\n@@ -2,6 +2,8 @@\n import argparse\n import os\n import sys\n+\n+import pip\n from six.moves import input\n from cliquet.scripts import cliquet\n from pyramid.scripts import pserve\n@@ -65,6 +67,13 @@\n \n init(config_file, backend)\n \n+ # Install postgresql libraries if necessary\n+ if backend == \"postgresql\":\n+ try:\n+ import psycopg2 # NOQA\n+ except ImportError:\n+ pip.main(['install', \"cliquet[postgresql]\"])\n+\n elif args['which'] == 'migrate':\n env = bootstrap(config_file)\n cliquet.init_schema(env)\n", "issue": "kinto init should install postgresql dependencies\n\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse\nimport os\nimport sys\nfrom six.moves import input\nfrom cliquet.scripts import cliquet\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\n\nfrom kinto.config import init\n\nCONFIG_FILE = 'config/kinto.ini'\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto commands\")\n parser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=CONFIG_FILE)\n parser.add_argument('--backend',\n help='Specify backend',\n dest='backend',\n required=False,\n default=None)\n\n subparsers = parser.add_subparsers(title='subcommands',\n description='valid subcommands',\n help='init/start/migrate')\n\n parser_init = subparsers.add_parser('init')\n parser_init.set_defaults(which='init')\n\n parser_migrate = subparsers.add_parser('migrate')\n parser_migrate.set_defaults(which='migrate')\n\n parser_start = subparsers.add_parser('start')\n parser_start.set_defaults(which='start')\n\n args = vars(parser.parse_args())\n config_file = args['ini_file']\n\n if args['which'] == 'init':\n if os.path.exists(config_file):\n print(\"%s already exist.\" % config_file, file=sys.stderr)\n sys.exit(1)\n\n backend = args['backend']\n if not backend:\n while True:\n prompt = (\"Which backend to use? 
\"\n \"(1 - postgresql, 2 - redis, default - memory) \")\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend)\n\n elif args['which'] == 'migrate':\n env = bootstrap(config_file)\n cliquet.init_schema(env)\n\n elif args['which'] == 'start':\n pserve_argv = ['pserve', config_file, '--reload']\n pserve.main(pserve_argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "kinto/__main__.py"}], "after_files": [{"content": "from __future__ import print_function\nimport argparse\nimport os\nimport sys\n\nimport pip\nfrom six.moves import input\nfrom cliquet.scripts import cliquet\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\n\nfrom kinto.config import init\n\nCONFIG_FILE = 'config/kinto.ini'\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto commands\")\n parser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=CONFIG_FILE)\n parser.add_argument('--backend',\n help='Specify backend',\n dest='backend',\n required=False,\n default=None)\n\n subparsers = parser.add_subparsers(title='subcommands',\n description='valid subcommands',\n help='init/start/migrate')\n\n parser_init = subparsers.add_parser('init')\n parser_init.set_defaults(which='init')\n\n parser_migrate = subparsers.add_parser('migrate')\n parser_migrate.set_defaults(which='migrate')\n\n parser_start = subparsers.add_parser('start')\n parser_start.set_defaults(which='start')\n\n args = vars(parser.parse_args())\n config_file = args['ini_file']\n\n if args['which'] == 'init':\n if os.path.exists(config_file):\n print(\"%s already exist.\" % config_file, file=sys.stderr)\n sys.exit(1)\n\n backend = args['backend']\n if not backend:\n while True:\n prompt = (\"Which backend to use? \"\n \"(1 - postgresql, 2 - redis, default - memory) \")\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend)\n\n # Install postgresql libraries if necessary\n if backend == \"postgresql\":\n try:\n import psycopg2 # NOQA\n except ImportError:\n pip.main(['install', \"cliquet[postgresql]\"])\n\n elif args['which'] == 'migrate':\n env = bootstrap(config_file)\n cliquet.init_schema(env)\n\n elif args['which'] == 'start':\n pserve_argv = ['pserve', config_file, '--reload']\n pserve.main(pserve_argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "kinto/__main__.py"}]} | 907 | 183 |
gh_patches_debug_35134 | rasdani/github-patches | git_diff | urllib3__urllib3-1036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Differentiate socks5h from socks5 and socks4a from socks4 when handling proxy string
In a proxy string, socks5h:// and socks4a:// mean that the hostname is
resolved by the SOCKS server. socks5:// and socks4:// mean that the
hostname is resolved locally. socks4a:// means to use SOCKS4a, which is
an extension of SOCKS4. Let's make urllib3 honor it.
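A rough sketch of the mapping this implies, reusing PySocks' `PROXY_TYPE_*` constants (the dictionary and its `rdns` flag below are only an illustration of the intended behaviour, not existing urllib3 code):

```python
import socks

# proxy scheme -> (SOCKS version, resolve hostname on the proxy?)
SCHEME_MAP = {
    "socks5":  (socks.PROXY_TYPE_SOCKS5, False),  # hostname resolved locally
    "socks5h": (socks.PROXY_TYPE_SOCKS5, True),   # hostname resolved by the SOCKS server
    "socks4":  (socks.PROXY_TYPE_SOCKS4, False),
    "socks4a": (socks.PROXY_TYPE_SOCKS4, True),
}

socks_version, rdns = SCHEME_MAP["socks5h"]
# rdns would then be forwarded as proxy_rdns=... to socks.create_connection()
```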
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dummyserver/testcase.py`
Content:
```
1 import unittest
2 import socket
3 import threading
4 from nose.plugins.skip import SkipTest
5 from tornado import ioloop, web
6
7 from dummyserver.server import (
8 SocketServerThread,
9 run_tornado_app,
10 run_loop_in_thread,
11 DEFAULT_CERTS,
12 )
13 from dummyserver.handlers import TestingApp
14 from dummyserver.proxy import ProxyHandler
15
16
17 def consume_socket(sock, chunks=65536):
18 while not sock.recv(chunks).endswith(b'\r\n\r\n'):
19 pass
20
21
22 class SocketDummyServerTestCase(unittest.TestCase):
23 """
24 A simple socket-based server is created for this class that is good for
25 exactly one request.
26 """
27 scheme = 'http'
28 host = 'localhost'
29
30 @classmethod
31 def _start_server(cls, socket_handler):
32 ready_event = threading.Event()
33 cls.server_thread = SocketServerThread(socket_handler=socket_handler,
34 ready_event=ready_event,
35 host=cls.host)
36 cls.server_thread.start()
37 ready_event.wait(5)
38 if not ready_event.is_set():
39 raise Exception("most likely failed to start server")
40 cls.port = cls.server_thread.port
41
42 @classmethod
43 def start_response_handler(cls, response, num=1, block_send=None):
44 ready_event = threading.Event()
45
46 def socket_handler(listener):
47 for _ in range(num):
48 ready_event.set()
49
50 sock = listener.accept()[0]
51 consume_socket(sock)
52 if block_send:
53 block_send.wait()
54 block_send.clear()
55 sock.send(response)
56 sock.close()
57
58 cls._start_server(socket_handler)
59 return ready_event
60
61 @classmethod
62 def start_basic_handler(cls, **kw):
63 return cls.start_response_handler(
64 b'HTTP/1.1 200 OK\r\n'
65 b'Content-Length: 0\r\n'
66 b'\r\n', **kw)
67
68 @classmethod
69 def tearDownClass(cls):
70 if hasattr(cls, 'server_thread'):
71 cls.server_thread.join(0.1)
72
73
74 class IPV4SocketDummyServerTestCase(SocketDummyServerTestCase):
75 @classmethod
76 def _start_server(cls, socket_handler):
77 ready_event = threading.Event()
78 cls.server_thread = SocketServerThread(socket_handler=socket_handler,
79 ready_event=ready_event,
80 host=cls.host)
81 cls.server_thread.USE_IPV6 = False
82 cls.server_thread.start()
83 ready_event.wait(5)
84 if not ready_event.is_set():
85 raise Exception("most likely failed to start server")
86 cls.port = cls.server_thread.port
87
88
89 class HTTPDummyServerTestCase(unittest.TestCase):
90 """ A simple HTTP server that runs when your test class runs
91
92 Have your unittest class inherit from this one, and then a simple server
93 will start when your tests run, and automatically shut down when they
94 complete. For examples of what test requests you can send to the server,
95 see the TestingApp in dummyserver/handlers.py.
96 """
97 scheme = 'http'
98 host = 'localhost'
99 host_alt = '127.0.0.1' # Some tests need two hosts
100 certs = DEFAULT_CERTS
101
102 @classmethod
103 def _start_server(cls):
104 cls.io_loop = ioloop.IOLoop()
105 app = web.Application([(r".*", TestingApp)])
106 cls.server, cls.port = run_tornado_app(app, cls.io_loop, cls.certs,
107 cls.scheme, cls.host)
108 cls.server_thread = run_loop_in_thread(cls.io_loop)
109
110 @classmethod
111 def _stop_server(cls):
112 cls.io_loop.add_callback(cls.server.stop)
113 cls.io_loop.add_callback(cls.io_loop.stop)
114 cls.server_thread.join()
115
116 @classmethod
117 def setUpClass(cls):
118 cls._start_server()
119
120 @classmethod
121 def tearDownClass(cls):
122 cls._stop_server()
123
124
125 class HTTPSDummyServerTestCase(HTTPDummyServerTestCase):
126 scheme = 'https'
127 host = 'localhost'
128 certs = DEFAULT_CERTS
129
130
131 class IPV6HTTPSDummyServerTestCase(HTTPSDummyServerTestCase):
132 host = '::1'
133
134 @classmethod
135 def setUpClass(cls):
136 if not socket.has_ipv6:
137 raise SkipTest('IPv6 not available')
138 else:
139 super(IPV6HTTPSDummyServerTestCase, cls).setUpClass()
140
141
142 class HTTPDummyProxyTestCase(unittest.TestCase):
143
144 http_host = 'localhost'
145 http_host_alt = '127.0.0.1'
146
147 https_host = 'localhost'
148 https_host_alt = '127.0.0.1'
149 https_certs = DEFAULT_CERTS
150
151 proxy_host = 'localhost'
152 proxy_host_alt = '127.0.0.1'
153
154 @classmethod
155 def setUpClass(cls):
156 cls.io_loop = ioloop.IOLoop()
157
158 app = web.Application([(r'.*', TestingApp)])
159 cls.http_server, cls.http_port = run_tornado_app(
160 app, cls.io_loop, None, 'http', cls.http_host)
161
162 app = web.Application([(r'.*', TestingApp)])
163 cls.https_server, cls.https_port = run_tornado_app(
164 app, cls.io_loop, cls.https_certs, 'https', cls.http_host)
165
166 app = web.Application([(r'.*', ProxyHandler)])
167 cls.proxy_server, cls.proxy_port = run_tornado_app(
168 app, cls.io_loop, None, 'http', cls.proxy_host)
169
170 cls.server_thread = run_loop_in_thread(cls.io_loop)
171
172 @classmethod
173 def tearDownClass(cls):
174 cls.io_loop.add_callback(cls.http_server.stop)
175 cls.io_loop.add_callback(cls.https_server.stop)
176 cls.io_loop.add_callback(cls.proxy_server.stop)
177 cls.io_loop.add_callback(cls.io_loop.stop)
178 cls.server_thread.join()
179
180
181 class IPv6HTTPDummyServerTestCase(HTTPDummyServerTestCase):
182 host = '::1'
183
184 @classmethod
185 def setUpClass(cls):
186 if not socket.has_ipv6:
187 raise SkipTest('IPv6 not available')
188 else:
189 super(IPv6HTTPDummyServerTestCase, cls).setUpClass()
190
191
192 class IPv6HTTPDummyProxyTestCase(HTTPDummyProxyTestCase):
193
194 http_host = 'localhost'
195 http_host_alt = '127.0.0.1'
196
197 https_host = 'localhost'
198 https_host_alt = '127.0.0.1'
199 https_certs = DEFAULT_CERTS
200
201 proxy_host = '::1'
202 proxy_host_alt = '127.0.0.1'
203
```
Path: `urllib3/contrib/socks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 This module contains provisional support for SOCKS proxies from within
4 urllib3. This module supports SOCKS4 (specifically the SOCKS4A variant) and
5 SOCKS5. To enable its functionality, either install PySocks or install this
6 module with the ``socks`` extra.
7
8 The SOCKS implementation supports the full range of urllib3 features. It also
9 supports the following SOCKS features:
10
11 - SOCKS4
12 - SOCKS4a
13 - SOCKS5
14 - Usernames and passwords for the SOCKS proxy
15
16 Known Limitations:
17
18 - Currently PySocks does not support contacting remote websites via literal
19 IPv6 addresses. Any such connection attempt will fail. You must use a domain
20 name.
21 - Currently PySocks does not support IPv6 connections to the SOCKS proxy. Any
22 such connection attempt will fail.
23 """
24 from __future__ import absolute_import
25
26 try:
27 import socks
28 except ImportError:
29 import warnings
30 from ..exceptions import DependencyWarning
31
32 warnings.warn((
33 'SOCKS support in urllib3 requires the installation of optional '
34 'dependencies: specifically, PySocks. For more information, see '
35 'https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies'
36 ),
37 DependencyWarning
38 )
39 raise
40
41 from socket import error as SocketError, timeout as SocketTimeout
42
43 from ..connection import (
44 HTTPConnection, HTTPSConnection
45 )
46 from ..connectionpool import (
47 HTTPConnectionPool, HTTPSConnectionPool
48 )
49 from ..exceptions import ConnectTimeoutError, NewConnectionError
50 from ..poolmanager import PoolManager
51 from ..util.url import parse_url
52
53 try:
54 import ssl
55 except ImportError:
56 ssl = None
57
58
59 class SOCKSConnection(HTTPConnection):
60 """
61 A plain-text HTTP connection that connects via a SOCKS proxy.
62 """
63 def __init__(self, *args, **kwargs):
64 self._socks_options = kwargs.pop('_socks_options')
65 super(SOCKSConnection, self).__init__(*args, **kwargs)
66
67 def _new_conn(self):
68 """
69 Establish a new connection via the SOCKS proxy.
70 """
71 extra_kw = {}
72 if self.source_address:
73 extra_kw['source_address'] = self.source_address
74
75 if self.socket_options:
76 extra_kw['socket_options'] = self.socket_options
77
78 try:
79 conn = socks.create_connection(
80 (self.host, self.port),
81 proxy_type=self._socks_options['socks_version'],
82 proxy_addr=self._socks_options['proxy_host'],
83 proxy_port=self._socks_options['proxy_port'],
84 proxy_username=self._socks_options['username'],
85 proxy_password=self._socks_options['password'],
86 timeout=self.timeout,
87 **extra_kw
88 )
89
90 except SocketTimeout as e:
91 raise ConnectTimeoutError(
92 self, "Connection to %s timed out. (connect timeout=%s)" %
93 (self.host, self.timeout))
94
95 except socks.ProxyError as e:
96 # This is fragile as hell, but it seems to be the only way to raise
97 # useful errors here.
98 if e.socket_err:
99 error = e.socket_err
100 if isinstance(error, SocketTimeout):
101 raise ConnectTimeoutError(
102 self,
103 "Connection to %s timed out. (connect timeout=%s)" %
104 (self.host, self.timeout)
105 )
106 else:
107 raise NewConnectionError(
108 self,
109 "Failed to establish a new connection: %s" % error
110 )
111 else:
112 raise NewConnectionError(
113 self,
114 "Failed to establish a new connection: %s" % e
115 )
116
117 except SocketError as e: # Defensive: PySocks should catch all these.
118 raise NewConnectionError(
119 self, "Failed to establish a new connection: %s" % e)
120
121 return conn
122
123
124 # We don't need to duplicate the Verified/Unverified distinction from
125 # urllib3/connection.py here because the HTTPSConnection will already have been
126 # correctly set to either the Verified or Unverified form by that module. This
127 # means the SOCKSHTTPSConnection will automatically be the correct type.
128 class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection):
129 pass
130
131
132 class SOCKSHTTPConnectionPool(HTTPConnectionPool):
133 ConnectionCls = SOCKSConnection
134
135
136 class SOCKSHTTPSConnectionPool(HTTPSConnectionPool):
137 ConnectionCls = SOCKSHTTPSConnection
138
139
140 class SOCKSProxyManager(PoolManager):
141 """
142 A version of the urllib3 ProxyManager that routes connections via the
143 defined SOCKS proxy.
144 """
145 pool_classes_by_scheme = {
146 'http': SOCKSHTTPConnectionPool,
147 'https': SOCKSHTTPSConnectionPool,
148 }
149
150 def __init__(self, proxy_url, username=None, password=None,
151 num_pools=10, headers=None, **connection_pool_kw):
152 parsed = parse_url(proxy_url)
153
154 if parsed.scheme == 'socks5':
155 socks_version = socks.PROXY_TYPE_SOCKS5
156 elif parsed.scheme == 'socks4':
157 socks_version = socks.PROXY_TYPE_SOCKS4
158 else:
159 raise ValueError(
160 "Unable to determine SOCKS version from %s" % proxy_url
161 )
162
163 self.proxy_url = proxy_url
164
165 socks_options = {
166 'socks_version': socks_version,
167 'proxy_host': parsed.host,
168 'proxy_port': parsed.port,
169 'username': username,
170 'password': password,
171 }
172 connection_pool_kw['_socks_options'] = socks_options
173
174 super(SOCKSProxyManager, self).__init__(
175 num_pools, headers, **connection_pool_kw
176 )
177
178 self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dummyserver/testcase.py b/dummyserver/testcase.py
--- a/dummyserver/testcase.py
+++ b/dummyserver/testcase.py
@@ -1,4 +1,4 @@
-import unittest
+import sys
import socket
import threading
from nose.plugins.skip import SkipTest
@@ -13,6 +13,11 @@
from dummyserver.handlers import TestingApp
from dummyserver.proxy import ProxyHandler
+if sys.version_info >= (2, 7):
+ import unittest
+else:
+ import unittest2 as unittest
+
def consume_socket(sock, chunks=65536):
while not sock.recv(chunks).endswith(b'\r\n\r\n'):
diff --git a/urllib3/contrib/socks.py b/urllib3/contrib/socks.py
--- a/urllib3/contrib/socks.py
+++ b/urllib3/contrib/socks.py
@@ -83,6 +83,7 @@
proxy_port=self._socks_options['proxy_port'],
proxy_username=self._socks_options['username'],
proxy_password=self._socks_options['password'],
+ proxy_rdns=self._socks_options['rdns'],
timeout=self.timeout,
**extra_kw
)
@@ -153,8 +154,16 @@
if parsed.scheme == 'socks5':
socks_version = socks.PROXY_TYPE_SOCKS5
+ rdns = False
+ elif parsed.scheme == 'socks5h':
+ socks_version = socks.PROXY_TYPE_SOCKS5
+ rdns = True
elif parsed.scheme == 'socks4':
socks_version = socks.PROXY_TYPE_SOCKS4
+ rdns = False
+ elif parsed.scheme == 'socks4a':
+ socks_version = socks.PROXY_TYPE_SOCKS4
+ rdns = True
else:
raise ValueError(
"Unable to determine SOCKS version from %s" % proxy_url
@@ -168,6 +177,7 @@
'proxy_port': parsed.port,
'username': username,
'password': password,
+ 'rdns': rdns
}
connection_pool_kw['_socks_options'] = socks_options
| {"golden_diff": "diff --git a/dummyserver/testcase.py b/dummyserver/testcase.py\n--- a/dummyserver/testcase.py\n+++ b/dummyserver/testcase.py\n@@ -1,4 +1,4 @@\n-import unittest\n+import sys\n import socket\n import threading\n from nose.plugins.skip import SkipTest\n@@ -13,6 +13,11 @@\n from dummyserver.handlers import TestingApp\n from dummyserver.proxy import ProxyHandler\n \n+if sys.version_info >= (2, 7):\n+ import unittest\n+else:\n+ import unittest2 as unittest\n+\n \n def consume_socket(sock, chunks=65536):\n while not sock.recv(chunks).endswith(b'\\r\\n\\r\\n'):\ndiff --git a/urllib3/contrib/socks.py b/urllib3/contrib/socks.py\n--- a/urllib3/contrib/socks.py\n+++ b/urllib3/contrib/socks.py\n@@ -83,6 +83,7 @@\n proxy_port=self._socks_options['proxy_port'],\n proxy_username=self._socks_options['username'],\n proxy_password=self._socks_options['password'],\n+ proxy_rdns=self._socks_options['rdns'],\n timeout=self.timeout,\n **extra_kw\n )\n@@ -153,8 +154,16 @@\n \n if parsed.scheme == 'socks5':\n socks_version = socks.PROXY_TYPE_SOCKS5\n+ rdns = False\n+ elif parsed.scheme == 'socks5h':\n+ socks_version = socks.PROXY_TYPE_SOCKS5\n+ rdns = True\n elif parsed.scheme == 'socks4':\n socks_version = socks.PROXY_TYPE_SOCKS4\n+ rdns = False\n+ elif parsed.scheme == 'socks4a':\n+ socks_version = socks.PROXY_TYPE_SOCKS4\n+ rdns = True\n else:\n raise ValueError(\n \"Unable to determine SOCKS version from %s\" % proxy_url\n@@ -168,6 +177,7 @@\n 'proxy_port': parsed.port,\n 'username': username,\n 'password': password,\n+ 'rdns': rdns\n }\n connection_pool_kw['_socks_options'] = socks_options\n", "issue": "Differentiate socks5h from socks5 and socks4a from socks4 when handling proxy string\nIn a proxy string, socks5h:// and socks4a:// mean that the hostname is\r\nresolved by the SOCKS server. socks5:// and socks4:// mean that the\r\nhostname is resolved locally. socks4a:// means to use SOCKS4a, which is\r\nan extension of SOCKS4. 
Let's make urllib3 honor it.\n", "before_files": [{"content": "import unittest\nimport socket\nimport threading\nfrom nose.plugins.skip import SkipTest\nfrom tornado import ioloop, web\n\nfrom dummyserver.server import (\n SocketServerThread,\n run_tornado_app,\n run_loop_in_thread,\n DEFAULT_CERTS,\n)\nfrom dummyserver.handlers import TestingApp\nfrom dummyserver.proxy import ProxyHandler\n\n\ndef consume_socket(sock, chunks=65536):\n while not sock.recv(chunks).endswith(b'\\r\\n\\r\\n'):\n pass\n\n\nclass SocketDummyServerTestCase(unittest.TestCase):\n \"\"\"\n A simple socket-based server is created for this class that is good for\n exactly one request.\n \"\"\"\n scheme = 'http'\n host = 'localhost'\n\n @classmethod\n def _start_server(cls, socket_handler):\n ready_event = threading.Event()\n cls.server_thread = SocketServerThread(socket_handler=socket_handler,\n ready_event=ready_event,\n host=cls.host)\n cls.server_thread.start()\n ready_event.wait(5)\n if not ready_event.is_set():\n raise Exception(\"most likely failed to start server\")\n cls.port = cls.server_thread.port\n\n @classmethod\n def start_response_handler(cls, response, num=1, block_send=None):\n ready_event = threading.Event()\n\n def socket_handler(listener):\n for _ in range(num):\n ready_event.set()\n\n sock = listener.accept()[0]\n consume_socket(sock)\n if block_send:\n block_send.wait()\n block_send.clear()\n sock.send(response)\n sock.close()\n\n cls._start_server(socket_handler)\n return ready_event\n\n @classmethod\n def start_basic_handler(cls, **kw):\n return cls.start_response_handler(\n b'HTTP/1.1 200 OK\\r\\n'\n b'Content-Length: 0\\r\\n'\n b'\\r\\n', **kw)\n\n @classmethod\n def tearDownClass(cls):\n if hasattr(cls, 'server_thread'):\n cls.server_thread.join(0.1)\n\n\nclass IPV4SocketDummyServerTestCase(SocketDummyServerTestCase):\n @classmethod\n def _start_server(cls, socket_handler):\n ready_event = threading.Event()\n cls.server_thread = SocketServerThread(socket_handler=socket_handler,\n ready_event=ready_event,\n host=cls.host)\n cls.server_thread.USE_IPV6 = False\n cls.server_thread.start()\n ready_event.wait(5)\n if not ready_event.is_set():\n raise Exception(\"most likely failed to start server\")\n cls.port = cls.server_thread.port\n\n\nclass HTTPDummyServerTestCase(unittest.TestCase):\n \"\"\" A simple HTTP server that runs when your test class runs\n\n Have your unittest class inherit from this one, and then a simple server\n will start when your tests run, and automatically shut down when they\n complete. 
For examples of what test requests you can send to the server,\n see the TestingApp in dummyserver/handlers.py.\n \"\"\"\n scheme = 'http'\n host = 'localhost'\n host_alt = '127.0.0.1' # Some tests need two hosts\n certs = DEFAULT_CERTS\n\n @classmethod\n def _start_server(cls):\n cls.io_loop = ioloop.IOLoop()\n app = web.Application([(r\".*\", TestingApp)])\n cls.server, cls.port = run_tornado_app(app, cls.io_loop, cls.certs,\n cls.scheme, cls.host)\n cls.server_thread = run_loop_in_thread(cls.io_loop)\n\n @classmethod\n def _stop_server(cls):\n cls.io_loop.add_callback(cls.server.stop)\n cls.io_loop.add_callback(cls.io_loop.stop)\n cls.server_thread.join()\n\n @classmethod\n def setUpClass(cls):\n cls._start_server()\n\n @classmethod\n def tearDownClass(cls):\n cls._stop_server()\n\n\nclass HTTPSDummyServerTestCase(HTTPDummyServerTestCase):\n scheme = 'https'\n host = 'localhost'\n certs = DEFAULT_CERTS\n\n\nclass IPV6HTTPSDummyServerTestCase(HTTPSDummyServerTestCase):\n host = '::1'\n\n @classmethod\n def setUpClass(cls):\n if not socket.has_ipv6:\n raise SkipTest('IPv6 not available')\n else:\n super(IPV6HTTPSDummyServerTestCase, cls).setUpClass()\n\n\nclass HTTPDummyProxyTestCase(unittest.TestCase):\n\n http_host = 'localhost'\n http_host_alt = '127.0.0.1'\n\n https_host = 'localhost'\n https_host_alt = '127.0.0.1'\n https_certs = DEFAULT_CERTS\n\n proxy_host = 'localhost'\n proxy_host_alt = '127.0.0.1'\n\n @classmethod\n def setUpClass(cls):\n cls.io_loop = ioloop.IOLoop()\n\n app = web.Application([(r'.*', TestingApp)])\n cls.http_server, cls.http_port = run_tornado_app(\n app, cls.io_loop, None, 'http', cls.http_host)\n\n app = web.Application([(r'.*', TestingApp)])\n cls.https_server, cls.https_port = run_tornado_app(\n app, cls.io_loop, cls.https_certs, 'https', cls.http_host)\n\n app = web.Application([(r'.*', ProxyHandler)])\n cls.proxy_server, cls.proxy_port = run_tornado_app(\n app, cls.io_loop, None, 'http', cls.proxy_host)\n\n cls.server_thread = run_loop_in_thread(cls.io_loop)\n\n @classmethod\n def tearDownClass(cls):\n cls.io_loop.add_callback(cls.http_server.stop)\n cls.io_loop.add_callback(cls.https_server.stop)\n cls.io_loop.add_callback(cls.proxy_server.stop)\n cls.io_loop.add_callback(cls.io_loop.stop)\n cls.server_thread.join()\n\n\nclass IPv6HTTPDummyServerTestCase(HTTPDummyServerTestCase):\n host = '::1'\n\n @classmethod\n def setUpClass(cls):\n if not socket.has_ipv6:\n raise SkipTest('IPv6 not available')\n else:\n super(IPv6HTTPDummyServerTestCase, cls).setUpClass()\n\n\nclass IPv6HTTPDummyProxyTestCase(HTTPDummyProxyTestCase):\n\n http_host = 'localhost'\n http_host_alt = '127.0.0.1'\n\n https_host = 'localhost'\n https_host_alt = '127.0.0.1'\n https_certs = DEFAULT_CERTS\n\n proxy_host = '::1'\n proxy_host_alt = '127.0.0.1'\n", "path": "dummyserver/testcase.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThis module contains provisional support for SOCKS proxies from within\nurllib3. This module supports SOCKS4 (specifically the SOCKS4A variant) and\nSOCKS5. To enable its functionality, either install PySocks or install this\nmodule with the ``socks`` extra.\n\nThe SOCKS implementation supports the full range of urllib3 features. It also\nsupports the following SOCKS features:\n\n- SOCKS4\n- SOCKS4a\n- SOCKS5\n- Usernames and passwords for the SOCKS proxy\n\nKnown Limitations:\n\n- Currently PySocks does not support contacting remote websites via literal\n IPv6 addresses. Any such connection attempt will fail. 
You must use a domain\n name.\n- Currently PySocks does not support IPv6 connections to the SOCKS proxy. Any\n such connection attempt will fail.\n\"\"\"\nfrom __future__ import absolute_import\n\ntry:\n import socks\nexcept ImportError:\n import warnings\n from ..exceptions import DependencyWarning\n\n warnings.warn((\n 'SOCKS support in urllib3 requires the installation of optional '\n 'dependencies: specifically, PySocks. For more information, see '\n 'https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies'\n ),\n DependencyWarning\n )\n raise\n\nfrom socket import error as SocketError, timeout as SocketTimeout\n\nfrom ..connection import (\n HTTPConnection, HTTPSConnection\n)\nfrom ..connectionpool import (\n HTTPConnectionPool, HTTPSConnectionPool\n)\nfrom ..exceptions import ConnectTimeoutError, NewConnectionError\nfrom ..poolmanager import PoolManager\nfrom ..util.url import parse_url\n\ntry:\n import ssl\nexcept ImportError:\n ssl = None\n\n\nclass SOCKSConnection(HTTPConnection):\n \"\"\"\n A plain-text HTTP connection that connects via a SOCKS proxy.\n \"\"\"\n def __init__(self, *args, **kwargs):\n self._socks_options = kwargs.pop('_socks_options')\n super(SOCKSConnection, self).__init__(*args, **kwargs)\n\n def _new_conn(self):\n \"\"\"\n Establish a new connection via the SOCKS proxy.\n \"\"\"\n extra_kw = {}\n if self.source_address:\n extra_kw['source_address'] = self.source_address\n\n if self.socket_options:\n extra_kw['socket_options'] = self.socket_options\n\n try:\n conn = socks.create_connection(\n (self.host, self.port),\n proxy_type=self._socks_options['socks_version'],\n proxy_addr=self._socks_options['proxy_host'],\n proxy_port=self._socks_options['proxy_port'],\n proxy_username=self._socks_options['username'],\n proxy_password=self._socks_options['password'],\n timeout=self.timeout,\n **extra_kw\n )\n\n except SocketTimeout as e:\n raise ConnectTimeoutError(\n self, \"Connection to %s timed out. (connect timeout=%s)\" %\n (self.host, self.timeout))\n\n except socks.ProxyError as e:\n # This is fragile as hell, but it seems to be the only way to raise\n # useful errors here.\n if e.socket_err:\n error = e.socket_err\n if isinstance(error, SocketTimeout):\n raise ConnectTimeoutError(\n self,\n \"Connection to %s timed out. (connect timeout=%s)\" %\n (self.host, self.timeout)\n )\n else:\n raise NewConnectionError(\n self,\n \"Failed to establish a new connection: %s\" % error\n )\n else:\n raise NewConnectionError(\n self,\n \"Failed to establish a new connection: %s\" % e\n )\n\n except SocketError as e: # Defensive: PySocks should catch all these.\n raise NewConnectionError(\n self, \"Failed to establish a new connection: %s\" % e)\n\n return conn\n\n\n# We don't need to duplicate the Verified/Unverified distinction from\n# urllib3/connection.py here because the HTTPSConnection will already have been\n# correctly set to either the Verified or Unverified form by that module. 
This\n# means the SOCKSHTTPSConnection will automatically be the correct type.\nclass SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection):\n pass\n\n\nclass SOCKSHTTPConnectionPool(HTTPConnectionPool):\n ConnectionCls = SOCKSConnection\n\n\nclass SOCKSHTTPSConnectionPool(HTTPSConnectionPool):\n ConnectionCls = SOCKSHTTPSConnection\n\n\nclass SOCKSProxyManager(PoolManager):\n \"\"\"\n A version of the urllib3 ProxyManager that routes connections via the\n defined SOCKS proxy.\n \"\"\"\n pool_classes_by_scheme = {\n 'http': SOCKSHTTPConnectionPool,\n 'https': SOCKSHTTPSConnectionPool,\n }\n\n def __init__(self, proxy_url, username=None, password=None,\n num_pools=10, headers=None, **connection_pool_kw):\n parsed = parse_url(proxy_url)\n\n if parsed.scheme == 'socks5':\n socks_version = socks.PROXY_TYPE_SOCKS5\n elif parsed.scheme == 'socks4':\n socks_version = socks.PROXY_TYPE_SOCKS4\n else:\n raise ValueError(\n \"Unable to determine SOCKS version from %s\" % proxy_url\n )\n\n self.proxy_url = proxy_url\n\n socks_options = {\n 'socks_version': socks_version,\n 'proxy_host': parsed.host,\n 'proxy_port': parsed.port,\n 'username': username,\n 'password': password,\n }\n connection_pool_kw['_socks_options'] = socks_options\n\n super(SOCKSProxyManager, self).__init__(\n num_pools, headers, **connection_pool_kw\n )\n\n self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme\n", "path": "urllib3/contrib/socks.py"}], "after_files": [{"content": "import sys\nimport socket\nimport threading\nfrom nose.plugins.skip import SkipTest\nfrom tornado import ioloop, web\n\nfrom dummyserver.server import (\n SocketServerThread,\n run_tornado_app,\n run_loop_in_thread,\n DEFAULT_CERTS,\n)\nfrom dummyserver.handlers import TestingApp\nfrom dummyserver.proxy import ProxyHandler\n\nif sys.version_info >= (2, 7):\n import unittest\nelse:\n import unittest2 as unittest\n\n\ndef consume_socket(sock, chunks=65536):\n while not sock.recv(chunks).endswith(b'\\r\\n\\r\\n'):\n pass\n\n\nclass SocketDummyServerTestCase(unittest.TestCase):\n \"\"\"\n A simple socket-based server is created for this class that is good for\n exactly one request.\n \"\"\"\n scheme = 'http'\n host = 'localhost'\n\n @classmethod\n def _start_server(cls, socket_handler):\n ready_event = threading.Event()\n cls.server_thread = SocketServerThread(socket_handler=socket_handler,\n ready_event=ready_event,\n host=cls.host)\n cls.server_thread.start()\n ready_event.wait(5)\n if not ready_event.is_set():\n raise Exception(\"most likely failed to start server\")\n cls.port = cls.server_thread.port\n\n @classmethod\n def start_response_handler(cls, response, num=1, block_send=None):\n ready_event = threading.Event()\n\n def socket_handler(listener):\n for _ in range(num):\n ready_event.set()\n\n sock = listener.accept()[0]\n consume_socket(sock)\n if block_send:\n block_send.wait()\n block_send.clear()\n sock.send(response)\n sock.close()\n\n cls._start_server(socket_handler)\n return ready_event\n\n @classmethod\n def start_basic_handler(cls, **kw):\n return cls.start_response_handler(\n b'HTTP/1.1 200 OK\\r\\n'\n b'Content-Length: 0\\r\\n'\n b'\\r\\n', **kw)\n\n @classmethod\n def tearDownClass(cls):\n if hasattr(cls, 'server_thread'):\n cls.server_thread.join(0.1)\n\n\nclass IPV4SocketDummyServerTestCase(SocketDummyServerTestCase):\n @classmethod\n def _start_server(cls, socket_handler):\n ready_event = threading.Event()\n cls.server_thread = SocketServerThread(socket_handler=socket_handler,\n ready_event=ready_event,\n 
host=cls.host)\n cls.server_thread.USE_IPV6 = False\n cls.server_thread.start()\n ready_event.wait(5)\n if not ready_event.is_set():\n raise Exception(\"most likely failed to start server\")\n cls.port = cls.server_thread.port\n\n\nclass HTTPDummyServerTestCase(unittest.TestCase):\n \"\"\" A simple HTTP server that runs when your test class runs\n\n Have your unittest class inherit from this one, and then a simple server\n will start when your tests run, and automatically shut down when they\n complete. For examples of what test requests you can send to the server,\n see the TestingApp in dummyserver/handlers.py.\n \"\"\"\n scheme = 'http'\n host = 'localhost'\n host_alt = '127.0.0.1' # Some tests need two hosts\n certs = DEFAULT_CERTS\n\n @classmethod\n def _start_server(cls):\n cls.io_loop = ioloop.IOLoop()\n app = web.Application([(r\".*\", TestingApp)])\n cls.server, cls.port = run_tornado_app(app, cls.io_loop, cls.certs,\n cls.scheme, cls.host)\n cls.server_thread = run_loop_in_thread(cls.io_loop)\n\n @classmethod\n def _stop_server(cls):\n cls.io_loop.add_callback(cls.server.stop)\n cls.io_loop.add_callback(cls.io_loop.stop)\n cls.server_thread.join()\n\n @classmethod\n def setUpClass(cls):\n cls._start_server()\n\n @classmethod\n def tearDownClass(cls):\n cls._stop_server()\n\n\nclass HTTPSDummyServerTestCase(HTTPDummyServerTestCase):\n scheme = 'https'\n host = 'localhost'\n certs = DEFAULT_CERTS\n\n\nclass IPV6HTTPSDummyServerTestCase(HTTPSDummyServerTestCase):\n host = '::1'\n\n @classmethod\n def setUpClass(cls):\n if not socket.has_ipv6:\n raise SkipTest('IPv6 not available')\n else:\n super(IPV6HTTPSDummyServerTestCase, cls).setUpClass()\n\n\nclass HTTPDummyProxyTestCase(unittest.TestCase):\n\n http_host = 'localhost'\n http_host_alt = '127.0.0.1'\n\n https_host = 'localhost'\n https_host_alt = '127.0.0.1'\n https_certs = DEFAULT_CERTS\n\n proxy_host = 'localhost'\n proxy_host_alt = '127.0.0.1'\n\n @classmethod\n def setUpClass(cls):\n cls.io_loop = ioloop.IOLoop()\n\n app = web.Application([(r'.*', TestingApp)])\n cls.http_server, cls.http_port = run_tornado_app(\n app, cls.io_loop, None, 'http', cls.http_host)\n\n app = web.Application([(r'.*', TestingApp)])\n cls.https_server, cls.https_port = run_tornado_app(\n app, cls.io_loop, cls.https_certs, 'https', cls.http_host)\n\n app = web.Application([(r'.*', ProxyHandler)])\n cls.proxy_server, cls.proxy_port = run_tornado_app(\n app, cls.io_loop, None, 'http', cls.proxy_host)\n\n cls.server_thread = run_loop_in_thread(cls.io_loop)\n\n @classmethod\n def tearDownClass(cls):\n cls.io_loop.add_callback(cls.http_server.stop)\n cls.io_loop.add_callback(cls.https_server.stop)\n cls.io_loop.add_callback(cls.proxy_server.stop)\n cls.io_loop.add_callback(cls.io_loop.stop)\n cls.server_thread.join()\n\n\nclass IPv6HTTPDummyServerTestCase(HTTPDummyServerTestCase):\n host = '::1'\n\n @classmethod\n def setUpClass(cls):\n if not socket.has_ipv6:\n raise SkipTest('IPv6 not available')\n else:\n super(IPv6HTTPDummyServerTestCase, cls).setUpClass()\n\n\nclass IPv6HTTPDummyProxyTestCase(HTTPDummyProxyTestCase):\n\n http_host = 'localhost'\n http_host_alt = '127.0.0.1'\n\n https_host = 'localhost'\n https_host_alt = '127.0.0.1'\n https_certs = DEFAULT_CERTS\n\n proxy_host = '::1'\n proxy_host_alt = '127.0.0.1'\n", "path": "dummyserver/testcase.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThis module contains provisional support for SOCKS proxies from within\nurllib3. 
This module supports SOCKS4 (specifically the SOCKS4A variant) and\nSOCKS5. To enable its functionality, either install PySocks or install this\nmodule with the ``socks`` extra.\n\nThe SOCKS implementation supports the full range of urllib3 features. It also\nsupports the following SOCKS features:\n\n- SOCKS4\n- SOCKS4a\n- SOCKS5\n- Usernames and passwords for the SOCKS proxy\n\nKnown Limitations:\n\n- Currently PySocks does not support contacting remote websites via literal\n IPv6 addresses. Any such connection attempt will fail. You must use a domain\n name.\n- Currently PySocks does not support IPv6 connections to the SOCKS proxy. Any\n such connection attempt will fail.\n\"\"\"\nfrom __future__ import absolute_import\n\ntry:\n import socks\nexcept ImportError:\n import warnings\n from ..exceptions import DependencyWarning\n\n warnings.warn((\n 'SOCKS support in urllib3 requires the installation of optional '\n 'dependencies: specifically, PySocks. For more information, see '\n 'https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies'\n ),\n DependencyWarning\n )\n raise\n\nfrom socket import error as SocketError, timeout as SocketTimeout\n\nfrom ..connection import (\n HTTPConnection, HTTPSConnection\n)\nfrom ..connectionpool import (\n HTTPConnectionPool, HTTPSConnectionPool\n)\nfrom ..exceptions import ConnectTimeoutError, NewConnectionError\nfrom ..poolmanager import PoolManager\nfrom ..util.url import parse_url\n\ntry:\n import ssl\nexcept ImportError:\n ssl = None\n\n\nclass SOCKSConnection(HTTPConnection):\n \"\"\"\n A plain-text HTTP connection that connects via a SOCKS proxy.\n \"\"\"\n def __init__(self, *args, **kwargs):\n self._socks_options = kwargs.pop('_socks_options')\n super(SOCKSConnection, self).__init__(*args, **kwargs)\n\n def _new_conn(self):\n \"\"\"\n Establish a new connection via the SOCKS proxy.\n \"\"\"\n extra_kw = {}\n if self.source_address:\n extra_kw['source_address'] = self.source_address\n\n if self.socket_options:\n extra_kw['socket_options'] = self.socket_options\n\n try:\n conn = socks.create_connection(\n (self.host, self.port),\n proxy_type=self._socks_options['socks_version'],\n proxy_addr=self._socks_options['proxy_host'],\n proxy_port=self._socks_options['proxy_port'],\n proxy_username=self._socks_options['username'],\n proxy_password=self._socks_options['password'],\n proxy_rdns=self._socks_options['rdns'],\n timeout=self.timeout,\n **extra_kw\n )\n\n except SocketTimeout as e:\n raise ConnectTimeoutError(\n self, \"Connection to %s timed out. (connect timeout=%s)\" %\n (self.host, self.timeout))\n\n except socks.ProxyError as e:\n # This is fragile as hell, but it seems to be the only way to raise\n # useful errors here.\n if e.socket_err:\n error = e.socket_err\n if isinstance(error, SocketTimeout):\n raise ConnectTimeoutError(\n self,\n \"Connection to %s timed out. (connect timeout=%s)\" %\n (self.host, self.timeout)\n )\n else:\n raise NewConnectionError(\n self,\n \"Failed to establish a new connection: %s\" % error\n )\n else:\n raise NewConnectionError(\n self,\n \"Failed to establish a new connection: %s\" % e\n )\n\n except SocketError as e: # Defensive: PySocks should catch all these.\n raise NewConnectionError(\n self, \"Failed to establish a new connection: %s\" % e)\n\n return conn\n\n\n# We don't need to duplicate the Verified/Unverified distinction from\n# urllib3/connection.py here because the HTTPSConnection will already have been\n# correctly set to either the Verified or Unverified form by that module. 
This\n# means the SOCKSHTTPSConnection will automatically be the correct type.\nclass SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection):\n pass\n\n\nclass SOCKSHTTPConnectionPool(HTTPConnectionPool):\n ConnectionCls = SOCKSConnection\n\n\nclass SOCKSHTTPSConnectionPool(HTTPSConnectionPool):\n ConnectionCls = SOCKSHTTPSConnection\n\n\nclass SOCKSProxyManager(PoolManager):\n \"\"\"\n A version of the urllib3 ProxyManager that routes connections via the\n defined SOCKS proxy.\n \"\"\"\n pool_classes_by_scheme = {\n 'http': SOCKSHTTPConnectionPool,\n 'https': SOCKSHTTPSConnectionPool,\n }\n\n def __init__(self, proxy_url, username=None, password=None,\n num_pools=10, headers=None, **connection_pool_kw):\n parsed = parse_url(proxy_url)\n\n if parsed.scheme == 'socks5':\n socks_version = socks.PROXY_TYPE_SOCKS5\n rdns = False\n elif parsed.scheme == 'socks5h':\n socks_version = socks.PROXY_TYPE_SOCKS5\n rdns = True\n elif parsed.scheme == 'socks4':\n socks_version = socks.PROXY_TYPE_SOCKS4\n rdns = False\n elif parsed.scheme == 'socks4a':\n socks_version = socks.PROXY_TYPE_SOCKS4\n rdns = True\n else:\n raise ValueError(\n \"Unable to determine SOCKS version from %s\" % proxy_url\n )\n\n self.proxy_url = proxy_url\n\n socks_options = {\n 'socks_version': socks_version,\n 'proxy_host': parsed.host,\n 'proxy_port': parsed.port,\n 'username': username,\n 'password': password,\n 'rdns': rdns\n }\n connection_pool_kw['_socks_options'] = socks_options\n\n super(SOCKSProxyManager, self).__init__(\n num_pools, headers, **connection_pool_kw\n )\n\n self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme\n", "path": "urllib3/contrib/socks.py"}]} | 3,966 | 503 |
gh_patches_debug_5332 | rasdani/github-patches | git_diff | feast-dev__feast-2111 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feast setup script returns "'charmap' codec can't decode byte" on Windows
## Expected Behavior
`pip install -e "sdk/python[ci]"` must succeed and install the feast package in editable mode.
## Current Behavior
Installation error:
```
Obtaining file:///C:/Users/<username>/Downloads/feast/sdk/python
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\<username>\Downloads\feast\venv\Scripts\python.exe' 'C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\<folder_name>\AppData\Local\Temp\tmpwd6unq88'
cwd: C:\Users\<username>\Downloads\feast\sdk\python
Complete output (20 lines):
Traceback (most recent call last):
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 280, in <module>
main()
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\<username>\Downloads\feast\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 114, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 267, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\<username>\AppData\Local\Temp\pip-build-env-2u3vtd1l\overlay\Lib\site-packages\setuptools\build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 137, in <module>
LONG_DESCRIPTION = f.read()
File "c:\users\<username>\appdata\local\programs\python\python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 1599: character maps to <undefined>
```
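
The failing frame is the `open()` call in `setup.py`: without an explicit encoding, Python on Windows falls back to the locale code page (cp1252 in this traceback), which cannot decode some of the UTF-8 emoji bytes in `README.md`. A minimal, self-contained illustration (the `demo.md` file and the snake emoji are stand-ins for the actual README content):

```python
# Write a UTF-8 file containing an emoji; "\U0001F40D" encodes to f0 9f 90 8d,
# and byte 0x90 has no mapping in cp1252.
with open("demo.md", "w", encoding="utf8") as f:
    f.write("Feast \U0001F40D")

try:
    with open("demo.md", "r", encoding="cp1252") as f:  # what the default decode amounts to here
        f.read()
except UnicodeDecodeError as err:
    print(err)  # 'charmap' codec can't decode byte 0x90 ...

with open("demo.md", "r", encoding="utf8") as f:  # explicit UTF-8 reads it fine
    print(f.read())
```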
## Steps to reproduce
Install feast for development:
```
pip install -e "sdk/python[ci]"
```
### Specifications
- Version: Master branch
- Platform: Windows 10
- Python version: 3.9.5
- pip version: 21.3.1
- virtualenv version: 20.10.0
## Possible Solution
`README.md` contains emojis, resulting in character encoding issues in the package long description. This PR sets the package long description encoding to UTF-8.
```python
# README file from Feast repo root directory
README_FILE = os.path.join(repo_root, "README.md")
with open(README_FILE, "r", encoding="utf8") as f:
LONG_DESCRIPTION = f.read()
```
Pull request #2111
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import glob
15 import os
16 import re
17 import shutil
18 import subprocess
19 import pathlib
20
21 from distutils.cmd import Command
22 from setuptools import find_packages
23
24 try:
25 from setuptools import setup
26 from setuptools.command.install import install
27 from setuptools.command.develop import develop
28 from setuptools.command.egg_info import egg_info
29 from setuptools.command.sdist import sdist
30 from setuptools.command.build_py import build_py
31 except ImportError:
32 from distutils.core import setup
33 from distutils.command.install import install
34 from distutils.command.build_py import build_py
35
36 NAME = "feast"
37 DESCRIPTION = "Python SDK for Feast"
38 URL = "https://github.com/feast-dev/feast"
39 AUTHOR = "Feast"
40 REQUIRES_PYTHON = ">=3.7.0"
41
42 REQUIRED = [
43 "Click==7.*",
44 "colorama>=0.3.9",
45 "dill==0.3.*",
46 "fastavro>=1.1.0",
47 "google-api-core>=1.23.0",
48 "googleapis-common-protos==1.52.*",
49 "grpcio>=1.34.0",
50 "grpcio-reflection>=1.34.0",
51 "Jinja2>=2.0.0",
52 "jsonschema",
53 "mmh3",
54 "pandas>=1.0.0",
55 "pandavro==1.5.*",
56 "protobuf>=3.10",
57 "proto-plus<1.19.7",
58 "pyarrow>=4.0.0",
59 "pydantic>=1.0.0",
60 "PyYAML>=5.4.*",
61 "tabulate==0.8.*",
62 "tenacity>=7.*",
63 "toml==0.10.*",
64 "tqdm==4.*",
65 "fastapi>=0.68.0",
66 "uvicorn[standard]>=0.14.0",
67 "proto-plus<1.19.7",
68 "tensorflow-metadata>=1.0.0,<2.0.0",
69 ]
70
71 GCP_REQUIRED = [
72 "google-cloud-bigquery>=2.28.1",
73 "google-cloud-bigquery-storage >= 2.0.0",
74 "google-cloud-datastore>=2.1.*",
75 "google-cloud-storage>=1.34.*,<1.41",
76 "google-cloud-core>=1.4.0,<2.0.0",
77 ]
78
79 REDIS_REQUIRED = [
80 "redis-py-cluster==2.1.2",
81 "hiredis>=2.0.0",
82 ]
83
84 AWS_REQUIRED = [
85 "boto3==1.17.*",
86 "docker>=5.0.2",
87 ]
88
89 CI_REQUIRED = [
90 "cryptography==3.3.2",
91 "flake8",
92 "black==19.10b0",
93 "isort>=5",
94 "grpcio-tools==1.34.0",
95 "grpcio-testing==1.34.0",
96 "minio==7.1.0",
97 "mock==2.0.0",
98 "moto",
99 "mypy==0.790",
100 "mypy-protobuf==1.24",
101 "avro==1.10.0",
102 "gcsfs",
103 "urllib3>=1.25.4",
104 "pytest==6.0.0",
105 "pytest-cov",
106 "pytest-xdist",
107 "pytest-benchmark>=3.4.1",
108 "pytest-lazy-fixture==0.6.3",
109 "pytest-timeout==1.4.2",
110 "pytest-ordering==0.6.*",
111 "pytest-mock==1.10.4",
112 "Sphinx!=4.0.0",
113 "sphinx-rtd-theme",
114 "testcontainers==3.4.2",
115 "adlfs==0.5.9",
116 "firebase-admin==4.5.2",
117 "pre-commit",
118 "assertpy==1.1",
119 "pip-tools",
120 ] + GCP_REQUIRED + REDIS_REQUIRED + AWS_REQUIRED
121
122 DEV_REQUIRED = ["mypy-protobuf==1.*", "grpcio-testing==1.*"] + CI_REQUIRED
123
124 # Get git repo root directory
125 repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
126
127 # README file from Feast repo root directory
128 README_FILE = os.path.join(repo_root, "README.md")
129 with open(README_FILE, "r") as f:
130 LONG_DESCRIPTION = f.read()
131
132 # Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
133 # Regex modified from default tag regex in:
134 # https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9
135 TAG_REGEX = re.compile(
136 r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
137 )
138
139 # Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)
140 if shutil.which("git"):
141 use_scm_version = {"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX}
142 else:
143 use_scm_version = None
144
145
146 class BuildProtoCommand(Command):
147 description = "Builds the proto files into python files."
148
149 def initialize_options(self):
150 self.protoc = ["python", "-m", "grpc_tools.protoc"] # find_executable("protoc")
151 self.proto_folder = os.path.join(repo_root, "protos")
152 self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')
153 self.sub_folders = ["core", "serving", "types", "storage"]
154
155 def finalize_options(self):
156 pass
157
158 def _generate_protos(self, path):
159 proto_files = glob.glob(os.path.join(self.proto_folder, path))
160
161 subprocess.check_call(self.protoc + [
162 '-I', self.proto_folder,
163 '--python_out', self.this_package,
164 '--grpc_python_out', self.this_package,
165 '--mypy_out', self.this_package] + proto_files)
166
167 def run(self):
168 for sub_folder in self.sub_folders:
169 self._generate_protos(f'feast/{sub_folder}/*.proto')
170
171 from pathlib import Path
172
173 for path in Path('feast/protos').rglob('*.py'):
174 for folder in self.sub_folders:
175 # Read in the file
176 with open(path, 'r') as file:
177 filedata = file.read()
178
179 # Replace the target string
180 filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')
181
182 # Write the file out again
183 with open(path, 'w') as file:
184 file.write(filedata)
185
186
187 class BuildCommand(build_py):
188 """Custom build command."""
189
190 def run(self):
191 self.run_command('build_proto')
192 build_py.run(self)
193
194
195 class DevelopCommand(develop):
196 """Custom develop command."""
197
198 def run(self):
199 self.run_command('build_proto')
200 develop.run(self)
201
202
203 setup(
204 name=NAME,
205 author=AUTHOR,
206 description=DESCRIPTION,
207 long_description=LONG_DESCRIPTION,
208 long_description_content_type="text/markdown",
209 python_requires=REQUIRES_PYTHON,
210 url=URL,
211 packages=find_packages(exclude=("tests",)),
212 install_requires=REQUIRED,
213 # https://stackoverflow.com/questions/28509965/setuptools-development-requirements
214 # Install dev requirements with: pip install -e .[dev]
215 extras_require={
216 "dev": DEV_REQUIRED,
217 "ci": CI_REQUIRED,
218 "gcp": GCP_REQUIRED,
219 "aws": AWS_REQUIRED,
220 "redis": REDIS_REQUIRED,
221 },
222 include_package_data=True,
223 license="Apache",
224 classifiers=[
225 # Trove classifiers
226 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
227 "License :: OSI Approved :: Apache Software License",
228 "Programming Language :: Python",
229 "Programming Language :: Python :: 3",
230 "Programming Language :: Python :: 3.7",
231 ],
232 entry_points={"console_scripts": ["feast=feast.cli:cli"]},
233 use_scm_version=use_scm_version,
234 setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
235 package_data={
236 "": [
237 "protos/feast/**/*.proto",
238 "protos/feast/third_party/grpc/health/v1/*.proto",
239 "feast/protos/feast/**/*.py",
240 ],
241 },
242 cmdclass={
243 "build_proto": BuildProtoCommand,
244 "build_py": BuildCommand,
245 "develop": DevelopCommand,
246 },
247 )
248
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -126,7 +126,7 @@
# README file from Feast repo root directory
README_FILE = os.path.join(repo_root, "README.md")
-with open(README_FILE, "r") as f:
+with open(README_FILE, "r", encoding="utf8") as f:
LONG_DESCRIPTION = f.read()
# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -126,7 +126,7 @@\n \n # README file from Feast repo root directory\n README_FILE = os.path.join(repo_root, \"README.md\")\n-with open(README_FILE, \"r\") as f:\n+with open(README_FILE, \"r\", encoding=\"utf8\") as f:\n LONG_DESCRIPTION = f.read()\n \n # Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n", "issue": "Feast setup script returns \"'charmap' codec can't decode byte\" on Windows\n## Expected Behavior\r\n\r\n`pip install -e \"sdk/python[ci]\"` must succeed and install feast package in editable mode.\r\n\r\n## Current Behavior\r\n\r\nInstallation error:\r\n\r\n```\r\nObtaining file:///C:/Users/<username>/Downloads/feast/sdk/python\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... error\r\n ERROR: Command errored out with exit status 1:\r\n command: 'C:\\Users\\<username>\\Downloads\\feast\\venv\\Scripts\\python.exe' 'C:\\Users\\<username>\\Downloads\\feast\\venv\\lib\\site-packages\\pip\\_vendor\\pep517\\in_process\\_in_process.py' get_requires_for_build_wheel 'C:\\Users\\<folder_name>\\AppData\\Local\\Temp\\tmpwd6unq88'\r\n cwd: C:\\Users\\<username>\\Downloads\\feast\\sdk\\python\r\n Complete output (20 lines):\r\n Traceback (most recent call last):\r\n File \"C:\\Users\\<username>\\Downloads\\feast\\venv\\lib\\site-packages\\pip\\_vendor\\pep517\\in_process\\_in_process.py\", line 280, in <module>\r\n main()\r\n File \"C:\\Users\\<username>\\Downloads\\feast\\venv\\lib\\site-packages\\pip\\_vendor\\pep517\\in_process\\_in_process.py\", line 263, in main\r\n json_out['return_val'] = hook(**hook_input['kwargs'])\r\n File \"C:\\Users\\<username>\\Downloads\\feast\\venv\\lib\\site-packages\\pip\\_vendor\\pep517\\in_process\\_in_process.py\", line 114, in get_requires_for_build_wheel\r\n return hook(config_settings)\r\n File \"C:\\Users\\<username>\\AppData\\Local\\Temp\\pip-build-env-2u3vtd1l\\overlay\\Lib\\site-packages\\setuptools\\build_meta.py\", line 162, in get_requires_for_build_wheel\r\n return self._get_build_requires(\r\n File \"C:\\Users\\<username>\\AppData\\Local\\Temp\\pip-build-env-2u3vtd1l\\overlay\\Lib\\site-packages\\setuptools\\build_meta.py\", line 143, in _get_build_requires\r\n self.run_setup()\r\n File \"C:\\Users\\<username>\\AppData\\Local\\Temp\\pip-build-env-2u3vtd1l\\overlay\\Lib\\site-packages\\setuptools\\build_meta.py\", line 267, in run_setup \r\n super(_BuildMetaLegacyBackend,\r\n File \"C:\\Users\\<username>\\AppData\\Local\\Temp\\pip-build-env-2u3vtd1l\\overlay\\Lib\\site-packages\\setuptools\\build_meta.py\", line 158, in run_setup \r\n exec(compile(code, __file__, 'exec'), locals())\r\n File \"setup.py\", line 137, in <module>\r\n LONG_DESCRIPTION = f.read()\r\n File \"c:\\users\\<username>\\appdata\\local\\programs\\python\\python39\\lib\\encodings\\cp1252.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 1599: character maps to <undefined>\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nInstall feast for development:\r\n\r\n```\r\npip install -e \"sdk/python[ci]\"\r\n```\r\n\r\n### Specifications\r\n\r\n- Version: Master branch\r\n- Platform: Windows 10\r\n- Python version: 3.9.5\r\n- pip version: 21.3.1\r\n- virtualenv version: 20.10.0\r\n\r\n## Possible Solution\r\n\r\n`README.md` contains emojis resulting character 
encoding issues in the package long description. This PR sets package long description encoding to UTF-8.\r\n\r\n```python\r\n# README file from Feast repo root directory\r\nREADME_FILE = os.path.join(repo_root, \"README.md\")\r\nwith open(README_FILE, \"r\", encoding=\"utf8\") as f:\r\n LONG_DESCRIPTION = f.read()\r\n```\r\n\r\nPull request #2111\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==7.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n \"proto-plus<1.19.7\",\n \"tensorflow-metadata>=1.0.0,<2.0.0\",\n]\n\nGCP_REQUIRED = [\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core>=1.4.0,<2.0.0\",\n]\n\nREDIS_REQUIRED = [\n \"redis-py-cluster==2.1.2\",\n \"hiredis>=2.0.0\",\n]\n\nAWS_REQUIRED = [\n \"boto3==1.17.*\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n \"moto\",\n \"mypy==0.790\",\n \"mypy-protobuf==1.24\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest==6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n \"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\",\n] + GCP_REQUIRED + REDIS_REQUIRED + AWS_REQUIRED\n\nDEV_REQUIRED = [\"mypy-protobuf==1.*\", \"grpcio-testing==1.*\"] 
+ CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install -e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n \"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n 
setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"feast/protos/feast/**/*.py\",\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==7.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n \"proto-plus<1.19.7\",\n \"tensorflow-metadata>=1.0.0,<2.0.0\",\n]\n\nGCP_REQUIRED = [\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core>=1.4.0,<2.0.0\",\n]\n\nREDIS_REQUIRED = [\n \"redis-py-cluster==2.1.2\",\n \"hiredis>=2.0.0\",\n]\n\nAWS_REQUIRED = [\n \"boto3==1.17.*\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n \"moto\",\n \"mypy==0.790\",\n \"mypy-protobuf==1.24\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest==6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n \"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\",\n] + GCP_REQUIRED + REDIS_REQUIRED + 
AWS_REQUIRED\n\nDEV_REQUIRED = [\"mypy-protobuf==1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\", encoding=\"utf8\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install -e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n \"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n 
entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"feast/protos/feast/**/*.py\",\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}]} | 3,975 | 126 |
gh_patches_debug_355 | rasdani/github-patches | git_diff | AnalogJ__lexicon-479 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GoDaddy provider should recognize domaincontrol.com as its nameserver
For the auto provider, it should recognize that domains managed by GoDaddy often have nameservers under the *.domaincontrol.com namespace. You can verify this is GoDaddy via whois, and I also tested this by adding 'domaincontrol.com' to the recognized nameservers with the following patch.
```
--- providers/godaddy.py.orig 2020-01-09 08:58:26.160360574 +0000
+++ providers/godaddy.py 2020-01-10 19:27:29.292030195 +0000
@@ -14,5 +14,5 @@
LOGGER = logging.getLogger(__name__)
-NAMESERVER_DOMAINS = ['godaddy.com']
+NAMESERVER_DOMAINS = ['godaddy.com','domaincontrol.com']
```
And the current whois excerpt:
```
$ whois domaincontrol.com
Domain Name: DOMAINCONTROL.COM
...
Updated Date: 2018-08-07T19:25:37Z
...
Registrant Organization: Go Daddy Operating Company, LLC
Registrant State/Province: Arizona
Registrant Country: US
```
--- END ISSUE ---
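To make the intent of the fix concrete, the matching step the auto provider relies on can be sketched in isolation: given the NS hostnames served for a domain, pick the provider whose known nameserver suffixes match. The snippet below is an illustrative, self-contained sketch only — the table layout and function name are hypothetical and are not lexicon's actual internals.

```python
# Hypothetical sketch of suffix-based provider detection; not lexicon's real code.
NAMESERVER_DOMAINS_BY_PROVIDER = {
    # With the proposed fix, either suffix identifies a GoDaddy-managed zone.
    "godaddy": ["godaddy.com", "domaincontrol.com"],
    # ... other providers' suffixes would live here ...
}


def guess_provider(nameservers):
    """Return the first provider whose known suffixes match any NS hostname."""
    for provider, suffixes in NAMESERVER_DOMAINS_BY_PROVIDER.items():
        for ns in nameservers:
            host = ns.rstrip(".").lower()
            if any(host == s or host.endswith("." + s) for s in suffixes):
                return provider
    return None


# ns*.domaincontrol.com hosts, as returned for GoDaddy-managed zones, now match.
assert guess_provider(["ns37.domaincontrol.com.", "ns38.domaincontrol.com."]) == "godaddy"
```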
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lexicon/providers/godaddy.py`
Content:
```
1 """Module provider for Godaddy"""
2 from __future__ import absolute_import
3 import hashlib
4 import json
5 import logging
6
7 import requests
8 from requests.adapters import HTTPAdapter
9 from urllib3.util.retry import Retry
10
11 from lexicon.providers.base import Provider as BaseProvider
12
13
14 LOGGER = logging.getLogger(__name__)
15
16 NAMESERVER_DOMAINS = ['godaddy.com']
17
18
19 def provider_parser(subparser):
20 """Generate a subparser for Godaddy"""
21 subparser.add_argument(
22 '--auth-key', help='specify the key to access the API')
23 subparser.add_argument(
24 '--auth-secret', help='specify the secret to access the API')
25
26
27 class Provider(BaseProvider):
28 """
29 Implements the DNS GoDaddy provider.
30 Some general remarks about this provider, because it uses a weirdly designed API.
31 Indeed, there is no direct way to insert, update or delete a specific record.
32 Furthermore, there is no unique identifier for a record.
33 Instead GoDaddy use a replace approach: for a given set of records one
34 can replace this set with a new set sent through API.
35 For the sake of simplicity and consistency across the provider edit methods,
36 the set will be always all records in the DNS zone.
37 With this approach:
38 - adding a record consists in appending a record to the obtained set and call
39 replace with the updated set,
40 - updating a record consists in modifying a record in the obtained set and call
41 replace with the updated set,
42 - deleting a record consists in removing a record in the obtained set and call
43 replace with the updated set.
44 In parallel, as said before, there is no unique identifier.
45 This provider then implement a pseudo-identifier, to allow an easy update or delete
46 using the '--identifier' lexicon parameter.
47 But you need to call the 'list' command just before executing and update/delete action,
48 because identifier value is tied to the content of the record, and will change anytime
49 something is changed in the record.
50 """
51
52 def __init__(self, config):
53 super(Provider, self).__init__(config)
54 self.domain_id = None
55 self.api_endpoint = 'https://api.godaddy.com/v1'
56
57 def _authenticate(self):
58 domain = self.domain
59
60 result = self._get('/domains/{0}'.format(domain))
61 self.domain_id = result['domainId']
62
63 def _list_records(self, rtype=None, name=None, content=None):
64 domain = self.domain
65
66 url = '/domains/{0}/records'.format(domain)
67 if rtype:
68 url += '/{0}'.format(rtype)
69 if name:
70 url += '/{0}'.format(self._relative_name(name))
71
72 raws = self._get(url)
73
74 records = []
75 for raw in raws:
76 records.append({
77 'id': Provider._identifier(raw),
78 'type': raw['type'],
79 'name': self._full_name(raw['name']),
80 'ttl': raw['ttl'],
81 'content': raw['data']
82 })
83
84 if content:
85 records = [
86 record for record in records if record['data'] == content]
87
88 LOGGER.debug('list_records: %s', records)
89
90 return records
91
92 def _create_record(self, rtype, name, content):
93 domain = self.domain
94 relative_name = self._relative_name(name)
95 ttl = self._get_lexicon_option('ttl')
96
97 # Retrieve existing data in DNS zone.
98 records = self._get('/domains/{0}/records'.format(domain))
99
100 # Check if a record already matches given parameters
101 for record in records:
102 if (record['type'] == rtype and self._relative_name(record['name']) == relative_name
103 and record['data'] == content):
104 LOGGER.debug(
105 'create_record (ignored, duplicate): %s %s %s', rtype, name, content)
106 return True
107
108 # Append a new entry corresponding to given parameters.
109 data = {'type': rtype, 'name': relative_name, 'data': content}
110 if ttl:
111 data['ttl'] = ttl
112
113 records.append(data)
114
115 # Synchronize data with inserted record into DNS zone.
116 self._put('/domains/{0}/records'.format(domain), records)
117
118 LOGGER.debug('create_record: %s %s %s', rtype, name, content)
119
120 return True
121
122 def _update_record(self, identifier, rtype=None, name=None, content=None):
123 # No identifier is used with GoDaddy.
124 # We can rely either:
125 # - only on rtype/name to get the relevant records, both of them are required
126 # or we will could update to much records ...,
127 # - or by the pseudo-identifier provided
128 # Furthermore for rtype/name approach, we cannot update all matching records, as it
129 # would lead o an error (two entries of same rtype + name cannot have the same content).
130 # So for rtype/name approach, we search first matching record for rtype/name on which
131 # content is different, and we update it before synchronizing the DNS zone.
132 if not identifier and not rtype:
133 raise Exception('ERROR: rtype is required')
134 if not identifier and not name:
135 raise Exception('ERROR: name is required')
136
137 domain = self.domain
138 relative_name = None
139 if name:
140 relative_name = self._relative_name(name)
141
142 # Retrieve existing data in DNS zone.
143 records = self._get('/domains/{0}/records'.format(domain))
144
145 # Get the record to update:
146 # - either explicitly by its identifier,
147 # - or the first matching by its rtype+name where content does not match
148 # (first match, see first method comment for explanation).
149 for record in records:
150 if ((identifier and Provider._identifier(record) == identifier) or # pylint: disable=too-many-boolean-expressions
151 (not identifier and record['type'] == rtype
152 and self._relative_name(record['name']) == relative_name
153 and record['data'] != content)):
154 record['data'] = content
155 break
156
157 # Synchronize data with updated records into DNS zone.
158 self._put('/domains/{0}/records'.format(domain), records)
159
160 LOGGER.debug('update_record: %s %s %s', rtype, name, content)
161
162 return True
163
164 def _delete_record(self, identifier=None, rtype=None, name=None, content=None):
165 # For the LOL. GoDaddy does not accept an empty array
166 # when updating a particular set of records.
167 # It means that you cannot request to remove all records
168 # matching a particular rtype and/or name.
169 # Instead, we get ALL records in the DNS zone, update the set,
170 # and replace EVERYTHING in the DNS zone.
171 # You will always have at minimal NS/SRV entries in the array,
172 # otherwise your DNS zone is broken, and updating the zone is the least of your problem ...
173 domain = self.domain
174
175 # Retrieve all records in the DNS zone
176 records = self._get('/domains/{0}/records'.format(domain))
177
178 relative_name = None
179 if name:
180 relative_name = self._relative_name(name)
181
182 # Filter out all records which matches the pattern (either identifier
183 # or some combination of rtype/name/content).
184 filtered_records = []
185 if identifier:
186 filtered_records = [
187 record for record in records if Provider._identifier(record) != identifier]
188 else:
189 for record in records:
190 if ((not rtype and not relative_name and not content) # pylint: disable=too-many-boolean-expressions
191 or (rtype and not relative_name and not content and record['type'] != rtype)
192 or (not rtype and relative_name and not content
193 and self._relative_name(record['name']) != relative_name)
194 or (not rtype and not relative_name and content
195 and record['data'] != content)
196 or (rtype and relative_name and not content
197 and (record['type'] != rtype
198 or self._relative_name(record['name']) != relative_name))
199 or (rtype and not relative_name and content
200 and (record['type'] != rtype or record['data'] != content))
201 or (not rtype and relative_name and content
202 and (self._relative_name(record['name']) != relative_name
203 or record['data'] != content))
204 or (rtype and relative_name and content
205 and (record['type'] != rtype
206 or self._relative_name(record['name']) != relative_name
207 or record['data'] != content))):
208 filtered_records.append(record)
209
210 # Synchronize data with expurged entries into DNS zone.
211 self._put('/domains/{0}/records'.format(domain), filtered_records)
212
213 LOGGER.debug('delete_records: %s %s %s', rtype, name, content)
214
215 return True
216
217 # GoDaddy provides no identifier for a record, which is a problem
218 # where identifiers can be used (delete and update).
219 # To circumvent this, we implement a pseudo-identifier,which is basically
220 # a hash of type+name+content of a given record.
221 # It is far from perfect, as the identifier will change each time
222 # we change something in the record ...
223 # But at least, one can use 'lexicon godaddy list ...' then
224 # 'lexicon godaddy update --identifier ...' to modify specific record.
225 # However, 'lexicon godaddy list ...' should be called each time DNS
226 # zone had been changed to calculate new identifiers.
227 @staticmethod
228 def _identifier(record):
229 sha256 = hashlib.sha256()
230 sha256.update(('type=' + record.get('type', '') + ',').encode('utf-8'))
231 sha256.update(('name=' + record.get('name', '') + ',').encode('utf-8'))
232 sha256.update(('data=' + record.get('data', '') + ',').encode('utf-8'))
233 return sha256.hexdigest()[0:7]
234
235 def _request(self, action='GET', url='/', data=None, query_params=None):
236 if not data:
237 data = {}
238 if not query_params:
239 query_params = {}
240
241 # When editing DNS zone, API is unavailable for few seconds
242 # (until modifications are propagated).
243 # In this case, call to API will return 409 HTTP error.
244 # We use the Retry extension to retry the requests until
245 # we get a processable reponse (402 HTTP status, or an HTTP error != 409)
246 retries = Retry(
247 total=10,
248 backoff_factor=0.5,
249 status_forcelist=[409],
250 method_whitelist=frozenset(
251 ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])
252 )
253
254 session = requests.Session()
255 session.mount('https://', HTTPAdapter(max_retries=retries))
256
257 result = session.request(action, self.api_endpoint + url,
258 params=query_params,
259 data=json.dumps(data),
260 headers={
261 'Content-Type': 'application/json',
262 'Accept': 'application/json',
263 # GoDaddy use a key/secret pair to authenticate
264 'Authorization': 'sso-key {0}:{1}'.format(
265 self._get_provider_option(
266 'auth_key'),
267 self._get_provider_option('auth_secret'))
268 })
269
270 result.raise_for_status()
271
272 try:
273 # Return the JSON body response if exists.
274 return result.json()
275 except ValueError:
276 # For some requests command (eg. PUT), GoDaddy will not
277 # return any JSON, just an HTTP status without body.
278 return None
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lexicon/providers/godaddy.py b/lexicon/providers/godaddy.py
--- a/lexicon/providers/godaddy.py
+++ b/lexicon/providers/godaddy.py
@@ -13,7 +13,7 @@
LOGGER = logging.getLogger(__name__)
-NAMESERVER_DOMAINS = ['godaddy.com']
+NAMESERVER_DOMAINS = ['godaddy.com', 'domaincontrol.com']
def provider_parser(subparser):
| {"golden_diff": "diff --git a/lexicon/providers/godaddy.py b/lexicon/providers/godaddy.py\n--- a/lexicon/providers/godaddy.py\n+++ b/lexicon/providers/godaddy.py\n@@ -13,7 +13,7 @@\n \n LOGGER = logging.getLogger(__name__)\n \n-NAMESERVER_DOMAINS = ['godaddy.com']\n+NAMESERVER_DOMAINS = ['godaddy.com', 'domaincontrol.com']\n \n \n def provider_parser(subparser):\n", "issue": "GoDaddy provider should recognize domaincontrol.com as its nameserver\nFor the auto provider, it should recognize that domains managed by GoDaddy often have nameservers under the *.domaincontrol.com namespace. You can verify this is GoDaddy via whois; and I also tested this by adding 'domaincontrol.com' to the recognized nameservers with the following patch.\r\n\r\n```\r\n--- providers/godaddy.py.orig\t2020-01-09 08:58:26.160360574 +0000\r\n+++ providers/godaddy.py\t2020-01-10 19:27:29.292030195 +0000\r\n@@ -14,5 +14,5 @@\r\n LOGGER = logging.getLogger(__name__)\r\n \r\n-NAMESERVER_DOMAINS = ['godaddy.com']\r\n+NAMESERVER_DOMAINS = ['godaddy.com','domaincontrol.com']\r\n```\r\n\r\nAnd the current whois excerpt:\r\n\r\n```\r\n$ whois domaincontrol.com\r\nDomain Name: DOMAINCONTROL.COM\r\n...\r\nUpdated Date: 2018-08-07T19:25:37Z\r\n...\r\nRegistrant Organization: Go Daddy Operating Company, LLC\r\nRegistrant State/Province: Arizona\r\nRegistrant Country: US\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"Module provider for Godaddy\"\"\"\nfrom __future__ import absolute_import\nimport hashlib\nimport json\nimport logging\n\nimport requests\nfrom requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\n\nfrom lexicon.providers.base import Provider as BaseProvider\n\n\nLOGGER = logging.getLogger(__name__)\n\nNAMESERVER_DOMAINS = ['godaddy.com']\n\n\ndef provider_parser(subparser):\n \"\"\"Generate a subparser for Godaddy\"\"\"\n subparser.add_argument(\n '--auth-key', help='specify the key to access the API')\n subparser.add_argument(\n '--auth-secret', help='specify the secret to access the API')\n\n\nclass Provider(BaseProvider):\n \"\"\"\n Implements the DNS GoDaddy provider.\n Some general remarks about this provider, because it uses a weirdly designed API.\n Indeed, there is no direct way to insert, update or delete a specific record.\n Furthermore, there is no unique identifier for a record.\n Instead GoDaddy use a replace approach: for a given set of records one\n can replace this set with a new set sent through API.\n For the sake of simplicity and consistency across the provider edit methods,\n the set will be always all records in the DNS zone.\n With this approach:\n - adding a record consists in appending a record to the obtained set and call\n replace with the updated set,\n - updating a record consists in modifying a record in the obtained set and call\n replace with the updated set,\n - deleting a record consists in removing a record in the obtained set and call\n replace with the updated set.\n In parallel, as said before, there is no unique identifier.\n This provider then implement a pseudo-identifier, to allow an easy update or delete\n using the '--identifier' lexicon parameter.\n But you need to call the 'list' command just before executing and update/delete action,\n because identifier value is tied to the content of the record, and will change anytime\n something is changed in the record.\n \"\"\"\n\n def __init__(self, config):\n super(Provider, self).__init__(config)\n self.domain_id = None\n self.api_endpoint = 'https://api.godaddy.com/v1'\n\n def 
_authenticate(self):\n domain = self.domain\n\n result = self._get('/domains/{0}'.format(domain))\n self.domain_id = result['domainId']\n\n def _list_records(self, rtype=None, name=None, content=None):\n domain = self.domain\n\n url = '/domains/{0}/records'.format(domain)\n if rtype:\n url += '/{0}'.format(rtype)\n if name:\n url += '/{0}'.format(self._relative_name(name))\n\n raws = self._get(url)\n\n records = []\n for raw in raws:\n records.append({\n 'id': Provider._identifier(raw),\n 'type': raw['type'],\n 'name': self._full_name(raw['name']),\n 'ttl': raw['ttl'],\n 'content': raw['data']\n })\n\n if content:\n records = [\n record for record in records if record['data'] == content]\n\n LOGGER.debug('list_records: %s', records)\n\n return records\n\n def _create_record(self, rtype, name, content):\n domain = self.domain\n relative_name = self._relative_name(name)\n ttl = self._get_lexicon_option('ttl')\n\n # Retrieve existing data in DNS zone.\n records = self._get('/domains/{0}/records'.format(domain))\n\n # Check if a record already matches given parameters\n for record in records:\n if (record['type'] == rtype and self._relative_name(record['name']) == relative_name\n and record['data'] == content):\n LOGGER.debug(\n 'create_record (ignored, duplicate): %s %s %s', rtype, name, content)\n return True\n\n # Append a new entry corresponding to given parameters.\n data = {'type': rtype, 'name': relative_name, 'data': content}\n if ttl:\n data['ttl'] = ttl\n\n records.append(data)\n\n # Synchronize data with inserted record into DNS zone.\n self._put('/domains/{0}/records'.format(domain), records)\n\n LOGGER.debug('create_record: %s %s %s', rtype, name, content)\n\n return True\n\n def _update_record(self, identifier, rtype=None, name=None, content=None):\n # No identifier is used with GoDaddy.\n # We can rely either:\n # - only on rtype/name to get the relevant records, both of them are required\n # or we will could update to much records ...,\n # - or by the pseudo-identifier provided\n # Furthermore for rtype/name approach, we cannot update all matching records, as it\n # would lead o an error (two entries of same rtype + name cannot have the same content).\n # So for rtype/name approach, we search first matching record for rtype/name on which\n # content is different, and we update it before synchronizing the DNS zone.\n if not identifier and not rtype:\n raise Exception('ERROR: rtype is required')\n if not identifier and not name:\n raise Exception('ERROR: name is required')\n\n domain = self.domain\n relative_name = None\n if name:\n relative_name = self._relative_name(name)\n\n # Retrieve existing data in DNS zone.\n records = self._get('/domains/{0}/records'.format(domain))\n\n # Get the record to update:\n # - either explicitly by its identifier,\n # - or the first matching by its rtype+name where content does not match\n # (first match, see first method comment for explanation).\n for record in records:\n if ((identifier and Provider._identifier(record) == identifier) or # pylint: disable=too-many-boolean-expressions\n (not identifier and record['type'] == rtype\n and self._relative_name(record['name']) == relative_name\n and record['data'] != content)):\n record['data'] = content\n break\n\n # Synchronize data with updated records into DNS zone.\n self._put('/domains/{0}/records'.format(domain), records)\n\n LOGGER.debug('update_record: %s %s %s', rtype, name, content)\n\n return True\n\n def _delete_record(self, identifier=None, rtype=None, name=None, content=None):\n # For 
the LOL. GoDaddy does not accept an empty array\n # when updating a particular set of records.\n # It means that you cannot request to remove all records\n # matching a particular rtype and/or name.\n # Instead, we get ALL records in the DNS zone, update the set,\n # and replace EVERYTHING in the DNS zone.\n # You will always have at minimal NS/SRV entries in the array,\n # otherwise your DNS zone is broken, and updating the zone is the least of your problem ...\n domain = self.domain\n\n # Retrieve all records in the DNS zone\n records = self._get('/domains/{0}/records'.format(domain))\n\n relative_name = None\n if name:\n relative_name = self._relative_name(name)\n\n # Filter out all records which matches the pattern (either identifier\n # or some combination of rtype/name/content).\n filtered_records = []\n if identifier:\n filtered_records = [\n record for record in records if Provider._identifier(record) != identifier]\n else:\n for record in records:\n if ((not rtype and not relative_name and not content) # pylint: disable=too-many-boolean-expressions\n or (rtype and not relative_name and not content and record['type'] != rtype)\n or (not rtype and relative_name and not content\n and self._relative_name(record['name']) != relative_name)\n or (not rtype and not relative_name and content\n and record['data'] != content)\n or (rtype and relative_name and not content\n and (record['type'] != rtype\n or self._relative_name(record['name']) != relative_name))\n or (rtype and not relative_name and content\n and (record['type'] != rtype or record['data'] != content))\n or (not rtype and relative_name and content\n and (self._relative_name(record['name']) != relative_name\n or record['data'] != content))\n or (rtype and relative_name and content\n and (record['type'] != rtype\n or self._relative_name(record['name']) != relative_name\n or record['data'] != content))):\n filtered_records.append(record)\n\n # Synchronize data with expurged entries into DNS zone.\n self._put('/domains/{0}/records'.format(domain), filtered_records)\n\n LOGGER.debug('delete_records: %s %s %s', rtype, name, content)\n\n return True\n\n # GoDaddy provides no identifier for a record, which is a problem\n # where identifiers can be used (delete and update).\n # To circumvent this, we implement a pseudo-identifier,which is basically\n # a hash of type+name+content of a given record.\n # It is far from perfect, as the identifier will change each time\n # we change something in the record ...\n # But at least, one can use 'lexicon godaddy list ...' then\n # 'lexicon godaddy update --identifier ...' to modify specific record.\n # However, 'lexicon godaddy list ...' 
should be called each time DNS\n # zone had been changed to calculate new identifiers.\n @staticmethod\n def _identifier(record):\n sha256 = hashlib.sha256()\n sha256.update(('type=' + record.get('type', '') + ',').encode('utf-8'))\n sha256.update(('name=' + record.get('name', '') + ',').encode('utf-8'))\n sha256.update(('data=' + record.get('data', '') + ',').encode('utf-8'))\n return sha256.hexdigest()[0:7]\n\n def _request(self, action='GET', url='/', data=None, query_params=None):\n if not data:\n data = {}\n if not query_params:\n query_params = {}\n\n # When editing DNS zone, API is unavailable for few seconds\n # (until modifications are propagated).\n # In this case, call to API will return 409 HTTP error.\n # We use the Retry extension to retry the requests until\n # we get a processable reponse (402 HTTP status, or an HTTP error != 409)\n retries = Retry(\n total=10,\n backoff_factor=0.5,\n status_forcelist=[409],\n method_whitelist=frozenset(\n ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])\n )\n\n session = requests.Session()\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n result = session.request(action, self.api_endpoint + url,\n params=query_params,\n data=json.dumps(data),\n headers={\n 'Content-Type': 'application/json',\n 'Accept': 'application/json',\n # GoDaddy use a key/secret pair to authenticate\n 'Authorization': 'sso-key {0}:{1}'.format(\n self._get_provider_option(\n 'auth_key'),\n self._get_provider_option('auth_secret'))\n })\n\n result.raise_for_status()\n\n try:\n # Return the JSON body response if exists.\n return result.json()\n except ValueError:\n # For some requests command (eg. PUT), GoDaddy will not\n # return any JSON, just an HTTP status without body.\n return None\n", "path": "lexicon/providers/godaddy.py"}], "after_files": [{"content": "\"\"\"Module provider for Godaddy\"\"\"\nfrom __future__ import absolute_import\nimport hashlib\nimport json\nimport logging\n\nimport requests\nfrom requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\n\nfrom lexicon.providers.base import Provider as BaseProvider\n\n\nLOGGER = logging.getLogger(__name__)\n\nNAMESERVER_DOMAINS = ['godaddy.com', 'domaincontrol.com']\n\n\ndef provider_parser(subparser):\n \"\"\"Generate a subparser for Godaddy\"\"\"\n subparser.add_argument(\n '--auth-key', help='specify the key to access the API')\n subparser.add_argument(\n '--auth-secret', help='specify the secret to access the API')\n\n\nclass Provider(BaseProvider):\n \"\"\"\n Implements the DNS GoDaddy provider.\n Some general remarks about this provider, because it uses a weirdly designed API.\n Indeed, there is no direct way to insert, update or delete a specific record.\n Furthermore, there is no unique identifier for a record.\n Instead GoDaddy use a replace approach: for a given set of records one\n can replace this set with a new set sent through API.\n For the sake of simplicity and consistency across the provider edit methods,\n the set will be always all records in the DNS zone.\n With this approach:\n - adding a record consists in appending a record to the obtained set and call\n replace with the updated set,\n - updating a record consists in modifying a record in the obtained set and call\n replace with the updated set,\n - deleting a record consists in removing a record in the obtained set and call\n replace with the updated set.\n In parallel, as said before, there is no unique identifier.\n This provider then implement a pseudo-identifier, to allow an easy update or delete\n 
using the '--identifier' lexicon parameter.\n But you need to call the 'list' command just before executing and update/delete action,\n because identifier value is tied to the content of the record, and will change anytime\n something is changed in the record.\n \"\"\"\n\n def __init__(self, config):\n super(Provider, self).__init__(config)\n self.domain_id = None\n self.api_endpoint = 'https://api.godaddy.com/v1'\n\n def _authenticate(self):\n domain = self.domain\n\n result = self._get('/domains/{0}'.format(domain))\n self.domain_id = result['domainId']\n\n def _list_records(self, rtype=None, name=None, content=None):\n domain = self.domain\n\n url = '/domains/{0}/records'.format(domain)\n if rtype:\n url += '/{0}'.format(rtype)\n if name:\n url += '/{0}'.format(self._relative_name(name))\n\n raws = self._get(url)\n\n records = []\n for raw in raws:\n records.append({\n 'id': Provider._identifier(raw),\n 'type': raw['type'],\n 'name': self._full_name(raw['name']),\n 'ttl': raw['ttl'],\n 'content': raw['data']\n })\n\n if content:\n records = [\n record for record in records if record['data'] == content]\n\n LOGGER.debug('list_records: %s', records)\n\n return records\n\n def _create_record(self, rtype, name, content):\n domain = self.domain\n relative_name = self._relative_name(name)\n ttl = self._get_lexicon_option('ttl')\n\n # Retrieve existing data in DNS zone.\n records = self._get('/domains/{0}/records'.format(domain))\n\n # Check if a record already matches given parameters\n for record in records:\n if (record['type'] == rtype and self._relative_name(record['name']) == relative_name\n and record['data'] == content):\n LOGGER.debug(\n 'create_record (ignored, duplicate): %s %s %s', rtype, name, content)\n return True\n\n # Append a new entry corresponding to given parameters.\n data = {'type': rtype, 'name': relative_name, 'data': content}\n if ttl:\n data['ttl'] = ttl\n\n records.append(data)\n\n # Synchronize data with inserted record into DNS zone.\n self._put('/domains/{0}/records'.format(domain), records)\n\n LOGGER.debug('create_record: %s %s %s', rtype, name, content)\n\n return True\n\n def _update_record(self, identifier, rtype=None, name=None, content=None):\n # No identifier is used with GoDaddy.\n # We can rely either:\n # - only on rtype/name to get the relevant records, both of them are required\n # or we will could update to much records ...,\n # - or by the pseudo-identifier provided\n # Furthermore for rtype/name approach, we cannot update all matching records, as it\n # would lead o an error (two entries of same rtype + name cannot have the same content).\n # So for rtype/name approach, we search first matching record for rtype/name on which\n # content is different, and we update it before synchronizing the DNS zone.\n if not identifier and not rtype:\n raise Exception('ERROR: rtype is required')\n if not identifier and not name:\n raise Exception('ERROR: name is required')\n\n domain = self.domain\n relative_name = None\n if name:\n relative_name = self._relative_name(name)\n\n # Retrieve existing data in DNS zone.\n records = self._get('/domains/{0}/records'.format(domain))\n\n # Get the record to update:\n # - either explicitly by its identifier,\n # - or the first matching by its rtype+name where content does not match\n # (first match, see first method comment for explanation).\n for record in records:\n if ((identifier and Provider._identifier(record) == identifier) or # pylint: disable=too-many-boolean-expressions\n (not identifier and record['type'] == 
rtype\n and self._relative_name(record['name']) == relative_name\n and record['data'] != content)):\n record['data'] = content\n break\n\n # Synchronize data with updated records into DNS zone.\n self._put('/domains/{0}/records'.format(domain), records)\n\n LOGGER.debug('update_record: %s %s %s', rtype, name, content)\n\n return True\n\n def _delete_record(self, identifier=None, rtype=None, name=None, content=None):\n # For the LOL. GoDaddy does not accept an empty array\n # when updating a particular set of records.\n # It means that you cannot request to remove all records\n # matching a particular rtype and/or name.\n # Instead, we get ALL records in the DNS zone, update the set,\n # and replace EVERYTHING in the DNS zone.\n # You will always have at minimal NS/SRV entries in the array,\n # otherwise your DNS zone is broken, and updating the zone is the least of your problem ...\n domain = self.domain\n\n # Retrieve all records in the DNS zone\n records = self._get('/domains/{0}/records'.format(domain))\n\n relative_name = None\n if name:\n relative_name = self._relative_name(name)\n\n # Filter out all records which matches the pattern (either identifier\n # or some combination of rtype/name/content).\n filtered_records = []\n if identifier:\n filtered_records = [\n record for record in records if Provider._identifier(record) != identifier]\n else:\n for record in records:\n if ((not rtype and not relative_name and not content) # pylint: disable=too-many-boolean-expressions\n or (rtype and not relative_name and not content and record['type'] != rtype)\n or (not rtype and relative_name and not content\n and self._relative_name(record['name']) != relative_name)\n or (not rtype and not relative_name and content\n and record['data'] != content)\n or (rtype and relative_name and not content\n and (record['type'] != rtype\n or self._relative_name(record['name']) != relative_name))\n or (rtype and not relative_name and content\n and (record['type'] != rtype or record['data'] != content))\n or (not rtype and relative_name and content\n and (self._relative_name(record['name']) != relative_name\n or record['data'] != content))\n or (rtype and relative_name and content\n and (record['type'] != rtype\n or self._relative_name(record['name']) != relative_name\n or record['data'] != content))):\n filtered_records.append(record)\n\n # Synchronize data with expurged entries into DNS zone.\n self._put('/domains/{0}/records'.format(domain), filtered_records)\n\n LOGGER.debug('delete_records: %s %s %s', rtype, name, content)\n\n return True\n\n # GoDaddy provides no identifier for a record, which is a problem\n # where identifiers can be used (delete and update).\n # To circumvent this, we implement a pseudo-identifier,which is basically\n # a hash of type+name+content of a given record.\n # It is far from perfect, as the identifier will change each time\n # we change something in the record ...\n # But at least, one can use 'lexicon godaddy list ...' then\n # 'lexicon godaddy update --identifier ...' to modify specific record.\n # However, 'lexicon godaddy list ...' 
should be called each time DNS\n # zone had been changed to calculate new identifiers.\n @staticmethod\n def _identifier(record):\n sha256 = hashlib.sha256()\n sha256.update(('type=' + record.get('type', '') + ',').encode('utf-8'))\n sha256.update(('name=' + record.get('name', '') + ',').encode('utf-8'))\n sha256.update(('data=' + record.get('data', '') + ',').encode('utf-8'))\n return sha256.hexdigest()[0:7]\n\n def _request(self, action='GET', url='/', data=None, query_params=None):\n if not data:\n data = {}\n if not query_params:\n query_params = {}\n\n # When editing DNS zone, API is unavailable for few seconds\n # (until modifications are propagated).\n # In this case, call to API will return 409 HTTP error.\n # We use the Retry extension to retry the requests until\n # we get a processable reponse (402 HTTP status, or an HTTP error != 409)\n retries = Retry(\n total=10,\n backoff_factor=0.5,\n status_forcelist=[409],\n method_whitelist=frozenset(\n ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])\n )\n\n session = requests.Session()\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n result = session.request(action, self.api_endpoint + url,\n params=query_params,\n data=json.dumps(data),\n headers={\n 'Content-Type': 'application/json',\n 'Accept': 'application/json',\n # GoDaddy use a key/secret pair to authenticate\n 'Authorization': 'sso-key {0}:{1}'.format(\n self._get_provider_option(\n 'auth_key'),\n self._get_provider_option('auth_secret'))\n })\n\n result.raise_for_status()\n\n try:\n # Return the JSON body response if exists.\n return result.json()\n except ValueError:\n # For some requests command (eg. PUT), GoDaddy will not\n # return any JSON, just an HTTP status without body.\n return None\n", "path": "lexicon/providers/godaddy.py"}]} | 3,860 | 99 |
gh_patches_debug_7672 | rasdani/github-patches | git_diff | googleapis__python-bigquery-948 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand the list of dependencies to run pre-release tests against
Context: https://github.com/googleapis/python-bigquery/issues/885
BigQuery Python client implicitly depended on `pytz` - this dependency was installed via the `google-api-core` dependency. However, when `google-api-core` dropped `pytz` in the 2.0 release, users who upgraded that dependency suddenly started seeing import errors.
This should have been caught by pre-release tests, but at the moment we only run them against the `pyarrow`, `pandas`, and `grpcio` dependencies. As discussed offline, this list should be extended with at least the `google-api-core` and `google-cloud-core` dependencies.
--- END ISSUE ---
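For context on what the requested change amounts to in practice, a pre-release nox session that also exercises the core Google packages might look roughly like the sketch below. This is an illustrative sketch only — the session name and the exact package list are assumptions, not the repository's actual final change.

```python
# Hypothetical noxfile.py session; the name and package list are illustrative only.
import nox


@nox.session(python="3.8")
def prerelease_core_deps(session):
    # Install pre-release builds of the core dependencies as well, so that
    # breakages like the google-api-core 2.0 pytz removal surface early.
    session.install(
        "--pre",
        "google-api-core",
        "google-cloud-core",
        "grpcio",
        "pandas",
    )
    session.install("-e", ".[all]")
    # Cheap smoke test: importing the client fails fast if a transitive
    # dependency quietly disappeared from the pre-release stack.
    session.run("python", "-c", "import google.cloud.bigquery")
```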
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import pathlib
18 import os
19 import shutil
20
21 import nox
22
23
24 PYTYPE_VERSION = "pytype==2021.4.9"
25 BLACK_VERSION = "black==19.10b0"
26 BLACK_PATHS = ("docs", "google", "samples", "tests", "noxfile.py", "setup.py")
27
28 DEFAULT_PYTHON_VERSION = "3.8"
29 SYSTEM_TEST_PYTHON_VERSIONS = ["3.8"]
30 UNIT_TEST_PYTHON_VERSIONS = ["3.6", "3.7", "3.8", "3.9"]
31 CURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()
32
33 # 'docfx' is excluded since it only needs to run in 'docs-presubmit'
34 nox.options.sessions = [
35 "unit_noextras",
36 "unit",
37 "system",
38 "snippets",
39 "cover",
40 "lint",
41 "lint_setup_py",
42 "blacken",
43 "pytype",
44 "docs",
45 ]
46
47
48 def default(session, install_extras=True):
49 """Default unit test session.
50
51 This is intended to be run **without** an interpreter set, so
52 that the current ``python`` (on the ``PATH``) or the version of
53 Python corresponding to the ``nox`` binary the ``PATH`` can
54 run the tests.
55 """
56 constraints_path = str(
57 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
58 )
59
60 # Install all test dependencies, then install local packages in-place.
61 session.install(
62 "mock",
63 "pytest",
64 "google-cloud-testutils",
65 "pytest-cov",
66 "freezegun",
67 "-c",
68 constraints_path,
69 )
70
71 install_target = ".[all]" if install_extras else "."
72 session.install("-e", install_target, "-c", constraints_path)
73
74 session.install("ipython", "-c", constraints_path)
75
76 # Run py.test against the unit tests.
77 session.run(
78 "py.test",
79 "--quiet",
80 "--cov=google/cloud/bigquery",
81 "--cov=tests/unit",
82 "--cov-append",
83 "--cov-config=.coveragerc",
84 "--cov-report=",
85 "--cov-fail-under=0",
86 os.path.join("tests", "unit"),
87 *session.posargs,
88 )
89
90
91 @nox.session(python=UNIT_TEST_PYTHON_VERSIONS)
92 def unit(session):
93 """Run the unit test suite."""
94 default(session)
95
96
97 @nox.session(python=[UNIT_TEST_PYTHON_VERSIONS[0], UNIT_TEST_PYTHON_VERSIONS[-1]])
98 def unit_noextras(session):
99 """Run the unit test suite."""
100
101 # Install optional dependencies that are out-of-date.
102 # https://github.com/googleapis/python-bigquery/issues/933
103 # There is no pyarrow 1.0.0 package for Python 3.9.
104 if session.python == UNIT_TEST_PYTHON_VERSIONS[0]:
105 session.install("pyarrow==1.0.0")
106
107 default(session, install_extras=False)
108
109
110 @nox.session(python=DEFAULT_PYTHON_VERSION)
111 def pytype(session):
112 """Run type checks."""
113 # An indirect dependecy attrs==21.1.0 breaks the check, and installing a less
114 # recent version avoids the error until a possibly better fix is found.
115 # https://github.com/googleapis/python-bigquery/issues/655
116 session.install("attrs==20.3.0")
117 session.install("-e", ".[all]")
118 session.install("ipython")
119 session.install(PYTYPE_VERSION)
120 session.run("pytype")
121
122
123 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
124 def system(session):
125 """Run the system test suite."""
126
127 constraints_path = str(
128 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
129 )
130
131 # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.
132 if os.environ.get("RUN_SYSTEM_TESTS", "true") == "false":
133 session.skip("RUN_SYSTEM_TESTS is set to false, skipping")
134
135 # Sanity check: Only run system tests if the environment variable is set.
136 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
137 session.skip("Credentials must be set via environment variable.")
138
139 # Use pre-release gRPC for system tests.
140 session.install("--pre", "grpcio", "-c", constraints_path)
141
142 # Install all test dependencies, then install local packages in place.
143 session.install(
144 "mock", "pytest", "psutil", "google-cloud-testutils", "-c", constraints_path
145 )
146 if os.environ.get("GOOGLE_API_USE_CLIENT_CERTIFICATE", "") == "true":
147 # mTLS test requires pyopenssl and latest google-cloud-storage
148 session.install("google-cloud-storage", "pyopenssl")
149 else:
150 session.install("google-cloud-storage", "-c", constraints_path)
151
152 # Data Catalog needed for the column ACL test with a real Policy Tag.
153 session.install("google-cloud-datacatalog", "-c", constraints_path)
154
155 session.install("-e", ".[all]", "-c", constraints_path)
156 session.install("ipython", "-c", constraints_path)
157
158 # Run py.test against the system tests.
159 session.run("py.test", "--quiet", os.path.join("tests", "system"), *session.posargs)
160
161
162 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
163 def snippets(session):
164 """Run the snippets test suite."""
165
166 # Check the value of `RUN_SNIPPETS_TESTS` env var. It defaults to true.
167 if os.environ.get("RUN_SNIPPETS_TESTS", "true") == "false":
168 session.skip("RUN_SNIPPETS_TESTS is set to false, skipping")
169
170 constraints_path = str(
171 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
172 )
173
174 # Install all test dependencies, then install local packages in place.
175 session.install("mock", "pytest", "google-cloud-testutils", "-c", constraints_path)
176 session.install("google-cloud-storage", "-c", constraints_path)
177 session.install("grpcio", "-c", constraints_path)
178
179 session.install("-e", ".[all]", "-c", constraints_path)
180
181 # Run py.test against the snippets tests.
182 # Skip tests in samples/snippets, as those are run in a different session
183 # using the nox config from that directory.
184 session.run("py.test", os.path.join("docs", "snippets.py"), *session.posargs)
185 session.run(
186 "py.test",
187 "samples",
188 "--ignore=samples/snippets",
189 "--ignore=samples/geography",
190 *session.posargs,
191 )
192
193
194 @nox.session(python=DEFAULT_PYTHON_VERSION)
195 def cover(session):
196 """Run the final coverage report.
197
198 This outputs the coverage report aggregating coverage from the unit
199 test runs (not system test runs), and then erases coverage data.
200 """
201 session.install("coverage", "pytest-cov")
202 session.run("coverage", "report", "--show-missing", "--fail-under=100")
203 session.run("coverage", "erase")
204
205
206 @nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS)
207 def prerelease_deps(session):
208 """Run all tests with prerelease versions of dependencies installed.
209
210 https://github.com/googleapis/python-bigquery/issues/95
211 """
212 # PyArrow prerelease packages are published to an alternative PyPI host.
213 # https://arrow.apache.org/docs/python/install.html#installing-nightly-packages
214 session.install(
215 "--extra-index-url", "https://pypi.fury.io/arrow-nightlies/", "--pre", "pyarrow"
216 )
217 session.install("--pre", "grpcio", "pandas")
218 session.install(
219 "freezegun",
220 "google-cloud-datacatalog",
221 "google-cloud-storage",
222 "google-cloud-testutils",
223 "IPython",
224 "mock",
225 "psutil",
226 "pytest",
227 "pytest-cov",
228 )
229 session.install("-e", ".[all]")
230
231 # Print out prerelease package versions.
232 session.run("python", "-c", "import grpc; print(grpc.__version__)")
233 session.run("python", "-c", "import pandas; print(pandas.__version__)")
234 session.run("python", "-c", "import pyarrow; print(pyarrow.__version__)")
235
236 # Run all tests, except a few samples tests which require extra dependencies.
237 session.run("py.test", "tests/unit")
238 session.run("py.test", "tests/system")
239 session.run("py.test", "samples/tests")
240
241
242 @nox.session(python=DEFAULT_PYTHON_VERSION)
243 def lint(session):
244 """Run linters.
245
246 Returns a failure if the linters find linting errors or sufficiently
247 serious code quality issues.
248 """
249
250 session.install("flake8", BLACK_VERSION)
251 session.install("-e", ".")
252 session.run("flake8", os.path.join("google", "cloud", "bigquery"))
253 session.run("flake8", "tests")
254 session.run("flake8", os.path.join("docs", "samples"))
255 session.run("flake8", os.path.join("docs", "snippets.py"))
256 session.run("black", "--check", *BLACK_PATHS)
257
258
259 @nox.session(python=DEFAULT_PYTHON_VERSION)
260 def lint_setup_py(session):
261 """Verify that setup.py is valid (including RST check)."""
262
263 session.install("docutils", "Pygments")
264 session.run("python", "setup.py", "check", "--restructuredtext", "--strict")
265
266
267 @nox.session(python=DEFAULT_PYTHON_VERSION)
268 def blacken(session):
269 """Run black.
270 Format code to uniform standard.
271 """
272
273 session.install(BLACK_VERSION)
274 session.run("black", *BLACK_PATHS)
275
276
277 @nox.session(python=DEFAULT_PYTHON_VERSION)
278 def docs(session):
279 """Build the docs."""
280
281 session.install("ipython", "recommonmark", "sphinx==4.0.1", "sphinx_rtd_theme")
282 session.install("google-cloud-storage")
283 session.install("-e", ".[all]")
284
285 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
286 session.run(
287 "sphinx-build",
288 "-W", # warnings as errors
289 "-T", # show full traceback on exception
290 "-N", # no colors
291 "-b",
292 "html",
293 "-d",
294 os.path.join("docs", "_build", "doctrees", ""),
295 os.path.join("docs", ""),
296 os.path.join("docs", "_build", "html", ""),
297 )
298
299
300 @nox.session(python=DEFAULT_PYTHON_VERSION)
301 def docfx(session):
302 """Build the docfx yaml files for this library."""
303
304 session.install("-e", ".")
305 session.install(
306 "sphinx==4.0.1", "alabaster", "recommonmark", "gcp-sphinx-docfx-yaml"
307 )
308
309 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
310 session.run(
311 "sphinx-build",
312 "-T", # show full traceback on exception
313 "-N", # no colors
314 "-D",
315 (
316 "extensions=sphinx.ext.autodoc,"
317 "sphinx.ext.autosummary,"
318 "docfx_yaml.extension,"
319 "sphinx.ext.intersphinx,"
320 "sphinx.ext.coverage,"
321 "sphinx.ext.napoleon,"
322 "sphinx.ext.todo,"
323 "sphinx.ext.viewcode,"
324 "recommonmark"
325 ),
326 "-b",
327 "html",
328 "-d",
329 os.path.join("docs", "_build", "doctrees", ""),
330 os.path.join("docs", ""),
331 os.path.join("docs", "_build", "html", ""),
332 )
333
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -214,7 +214,15 @@
session.install(
"--extra-index-url", "https://pypi.fury.io/arrow-nightlies/", "--pre", "pyarrow"
)
- session.install("--pre", "grpcio", "pandas")
+ session.install(
+ "--pre",
+ "google-api-core",
+ "google-cloud-bigquery-storage",
+ "google-cloud-core",
+ "google-resumable-media",
+ "grpcio",
+ "pandas",
+ )
session.install(
"freezegun",
"google-cloud-datacatalog",
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -214,7 +214,15 @@\n session.install(\n \"--extra-index-url\", \"https://pypi.fury.io/arrow-nightlies/\", \"--pre\", \"pyarrow\"\n )\n- session.install(\"--pre\", \"grpcio\", \"pandas\")\n+ session.install(\n+ \"--pre\",\n+ \"google-api-core\",\n+ \"google-cloud-bigquery-storage\",\n+ \"google-cloud-core\",\n+ \"google-resumable-media\",\n+ \"grpcio\",\n+ \"pandas\",\n+ )\n session.install(\n \"freezegun\",\n \"google-cloud-datacatalog\",\n", "issue": "Expand the list of dependencies to run pre-release tests against\nContext: https://github.com/googleapis/python-bigquery/issues/885\r\n\r\nBigQuery Python client implicitly depended on `pytz` - this dependency was installed via `google-api-core` dependency. However, when `google-api-core` dropped `pytz` in the 2.0 release, users that upgraded that dependency suddenly started seeing import errors.\r\n\r\nThis should have been caught by pre-release tests, but at the moment we only do so for `pyarrow`, `pandas`, and `grpcio` dependencies. As discussed offline, this list should be extended with at least `google-api-core` and `google-clod-core` dependencies.\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport pathlib\nimport os\nimport shutil\n\nimport nox\n\n\nPYTYPE_VERSION = \"pytype==2021.4.9\"\nBLACK_VERSION = \"black==19.10b0\"\nBLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n\nDEFAULT_PYTHON_VERSION = \"3.8\"\nSYSTEM_TEST_PYTHON_VERSIONS = [\"3.8\"]\nUNIT_TEST_PYTHON_VERSIONS = [\"3.6\", \"3.7\", \"3.8\", \"3.9\"]\nCURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()\n\n# 'docfx' is excluded since it only needs to run in 'docs-presubmit'\nnox.options.sessions = [\n \"unit_noextras\",\n \"unit\",\n \"system\",\n \"snippets\",\n \"cover\",\n \"lint\",\n \"lint_setup_py\",\n \"blacken\",\n \"pytype\",\n \"docs\",\n]\n\n\ndef default(session, install_extras=True):\n \"\"\"Default unit test session.\n\n This is intended to be run **without** an interpreter set, so\n that the current ``python`` (on the ``PATH``) or the version of\n Python corresponding to the ``nox`` binary the ``PATH`` can\n run the tests.\n \"\"\"\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in-place.\n session.install(\n \"mock\",\n \"pytest\",\n \"google-cloud-testutils\",\n \"pytest-cov\",\n \"freezegun\",\n \"-c\",\n constraints_path,\n )\n\n install_target = \".[all]\" if install_extras else \".\"\n session.install(\"-e\", install_target, \"-c\", constraints_path)\n\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=google/cloud/bigquery\",\n \"--cov=tests/unit\",\n \"--cov-append\",\n 
\"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=0\",\n os.path.join(\"tests\", \"unit\"),\n *session.posargs,\n )\n\n\[email protected](python=UNIT_TEST_PYTHON_VERSIONS)\ndef unit(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session)\n\n\[email protected](python=[UNIT_TEST_PYTHON_VERSIONS[0], UNIT_TEST_PYTHON_VERSIONS[-1]])\ndef unit_noextras(session):\n \"\"\"Run the unit test suite.\"\"\"\n\n # Install optional dependencies that are out-of-date.\n # https://github.com/googleapis/python-bigquery/issues/933\n # There is no pyarrow 1.0.0 package for Python 3.9.\n if session.python == UNIT_TEST_PYTHON_VERSIONS[0]:\n session.install(\"pyarrow==1.0.0\")\n\n default(session, install_extras=False)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef pytype(session):\n \"\"\"Run type checks.\"\"\"\n # An indirect dependecy attrs==21.1.0 breaks the check, and installing a less\n # recent version avoids the error until a possibly better fix is found.\n # https://github.com/googleapis/python-bigquery/issues/655\n session.install(\"attrs==20.3.0\")\n session.install(\"-e\", \".[all]\")\n session.install(\"ipython\")\n session.install(PYTYPE_VERSION)\n session.run(\"pytype\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef system(session):\n \"\"\"Run the system test suite.\"\"\"\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.\n if os.environ.get(\"RUN_SYSTEM_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SYSTEM_TESTS is set to false, skipping\")\n\n # Sanity check: Only run system tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Use pre-release gRPC for system tests.\n session.install(\"--pre\", \"grpcio\", \"-c\", constraints_path)\n\n # Install all test dependencies, then install local packages in place.\n session.install(\n \"mock\", \"pytest\", \"psutil\", \"google-cloud-testutils\", \"-c\", constraints_path\n )\n if os.environ.get(\"GOOGLE_API_USE_CLIENT_CERTIFICATE\", \"\") == \"true\":\n # mTLS test requires pyopenssl and latest google-cloud-storage\n session.install(\"google-cloud-storage\", \"pyopenssl\")\n else:\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n\n # Data Catalog needed for the column ACL test with a real Policy Tag.\n session.install(\"google-cloud-datacatalog\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the system tests.\n session.run(\"py.test\", \"--quiet\", os.path.join(\"tests\", \"system\"), *session.posargs)\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n\n # Check the value of `RUN_SNIPPETS_TESTS` env var. 
It defaults to true.\n if os.environ.get(\"RUN_SNIPPETS_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SNIPPETS_TESTS is set to false, skipping\")\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"google-cloud-testutils\", \"-c\", constraints_path)\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n session.install(\"grpcio\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n\n # Run py.test against the snippets tests.\n # Skip tests in samples/snippets, as those are run in a different session\n # using the nox config from that directory.\n session.run(\"py.test\", os.path.join(\"docs\", \"snippets.py\"), *session.posargs)\n session.run(\n \"py.test\",\n \"samples\",\n \"--ignore=samples/snippets\",\n \"--ignore=samples/geography\",\n *session.posargs,\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef cover(session):\n \"\"\"Run the final coverage report.\n\n This outputs the coverage report aggregating coverage from the unit\n test runs (not system test runs), and then erases coverage data.\n \"\"\"\n session.install(\"coverage\", \"pytest-cov\")\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n session.run(\"coverage\", \"erase\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef prerelease_deps(session):\n \"\"\"Run all tests with prerelease versions of dependencies installed.\n\n https://github.com/googleapis/python-bigquery/issues/95\n \"\"\"\n # PyArrow prerelease packages are published to an alternative PyPI host.\n # https://arrow.apache.org/docs/python/install.html#installing-nightly-packages\n session.install(\n \"--extra-index-url\", \"https://pypi.fury.io/arrow-nightlies/\", \"--pre\", \"pyarrow\"\n )\n session.install(\"--pre\", \"grpcio\", \"pandas\")\n session.install(\n \"freezegun\",\n \"google-cloud-datacatalog\",\n \"google-cloud-storage\",\n \"google-cloud-testutils\",\n \"IPython\",\n \"mock\",\n \"psutil\",\n \"pytest\",\n \"pytest-cov\",\n )\n session.install(\"-e\", \".[all]\")\n\n # Print out prerelease package versions.\n session.run(\"python\", \"-c\", \"import grpc; print(grpc.__version__)\")\n session.run(\"python\", \"-c\", \"import pandas; print(pandas.__version__)\")\n session.run(\"python\", \"-c\", \"import pyarrow; print(pyarrow.__version__)\")\n\n # Run all tests, except a few samples tests which require extra dependencies.\n session.run(\"py.test\", \"tests/unit\")\n session.run(\"py.test\", \"tests/system\")\n session.run(\"py.test\", \"samples/tests\")\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint(session):\n \"\"\"Run linters.\n\n Returns a failure if the linters find linting errors or sufficiently\n serious code quality issues.\n \"\"\"\n\n session.install(\"flake8\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n session.run(\"flake8\", os.path.join(\"docs\", \"samples\"))\n session.run(\"flake8\", os.path.join(\"docs\", \"snippets.py\"))\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n\n session.install(\"docutils\", \"Pygments\")\n session.run(\"python\", 
\"setup.py\", \"check\", \"--restructuredtext\", \"--strict\")\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n \"\"\"\n\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docs(session):\n \"\"\"Build the docs.\"\"\"\n\n session.install(\"ipython\", \"recommonmark\", \"sphinx==4.0.1\", \"sphinx_rtd_theme\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-W\", # warnings as errors\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docfx(session):\n \"\"\"Build the docfx yaml files for this library.\"\"\"\n\n session.install(\"-e\", \".\")\n session.install(\n \"sphinx==4.0.1\", \"alabaster\", \"recommonmark\", \"gcp-sphinx-docfx-yaml\"\n )\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-D\",\n (\n \"extensions=sphinx.ext.autodoc,\"\n \"sphinx.ext.autosummary,\"\n \"docfx_yaml.extension,\"\n \"sphinx.ext.intersphinx,\"\n \"sphinx.ext.coverage,\"\n \"sphinx.ext.napoleon,\"\n \"sphinx.ext.todo,\"\n \"sphinx.ext.viewcode,\"\n \"recommonmark\"\n ),\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n", "path": "noxfile.py"}], "after_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport pathlib\nimport os\nimport shutil\n\nimport nox\n\n\nPYTYPE_VERSION = \"pytype==2021.4.9\"\nBLACK_VERSION = \"black==19.10b0\"\nBLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n\nDEFAULT_PYTHON_VERSION = \"3.8\"\nSYSTEM_TEST_PYTHON_VERSIONS = [\"3.8\"]\nUNIT_TEST_PYTHON_VERSIONS = [\"3.6\", \"3.7\", \"3.8\", \"3.9\"]\nCURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()\n\n# 'docfx' is excluded since it only needs to run in 'docs-presubmit'\nnox.options.sessions = [\n \"unit_noextras\",\n \"unit\",\n \"system\",\n \"snippets\",\n \"cover\",\n \"lint\",\n \"lint_setup_py\",\n \"blacken\",\n \"pytype\",\n \"docs\",\n]\n\n\ndef default(session, install_extras=True):\n \"\"\"Default unit test session.\n\n This is intended to be run **without** an interpreter set, so\n that the current ``python`` (on the ``PATH``) or the version of\n Python corresponding to the ``nox`` binary the ``PATH`` can\n run the tests.\n \"\"\"\n constraints_path = str(\n CURRENT_DIRECTORY / 
\"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in-place.\n session.install(\n \"mock\",\n \"pytest\",\n \"google-cloud-testutils\",\n \"pytest-cov\",\n \"freezegun\",\n \"-c\",\n constraints_path,\n )\n\n install_target = \".[all]\" if install_extras else \".\"\n session.install(\"-e\", install_target, \"-c\", constraints_path)\n\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=google/cloud/bigquery\",\n \"--cov=tests/unit\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=0\",\n os.path.join(\"tests\", \"unit\"),\n *session.posargs,\n )\n\n\[email protected](python=UNIT_TEST_PYTHON_VERSIONS)\ndef unit(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session)\n\n\[email protected](python=[UNIT_TEST_PYTHON_VERSIONS[0], UNIT_TEST_PYTHON_VERSIONS[-1]])\ndef unit_noextras(session):\n \"\"\"Run the unit test suite.\"\"\"\n\n # Install optional dependencies that are out-of-date.\n # https://github.com/googleapis/python-bigquery/issues/933\n # There is no pyarrow 1.0.0 package for Python 3.9.\n if session.python == UNIT_TEST_PYTHON_VERSIONS[0]:\n session.install(\"pyarrow==1.0.0\")\n\n default(session, install_extras=False)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef pytype(session):\n \"\"\"Run type checks.\"\"\"\n # An indirect dependecy attrs==21.1.0 breaks the check, and installing a less\n # recent version avoids the error until a possibly better fix is found.\n # https://github.com/googleapis/python-bigquery/issues/655\n session.install(\"attrs==20.3.0\")\n session.install(\"-e\", \".[all]\")\n session.install(\"ipython\")\n session.install(PYTYPE_VERSION)\n session.run(\"pytype\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef system(session):\n \"\"\"Run the system test suite.\"\"\"\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Check the value of `RUN_SYSTEM_TESTS` env var. 
It defaults to true.\n if os.environ.get(\"RUN_SYSTEM_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SYSTEM_TESTS is set to false, skipping\")\n\n # Sanity check: Only run system tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Use pre-release gRPC for system tests.\n session.install(\"--pre\", \"grpcio\", \"-c\", constraints_path)\n\n # Install all test dependencies, then install local packages in place.\n session.install(\n \"mock\", \"pytest\", \"psutil\", \"google-cloud-testutils\", \"-c\", constraints_path\n )\n if os.environ.get(\"GOOGLE_API_USE_CLIENT_CERTIFICATE\", \"\") == \"true\":\n # mTLS test requires pyopenssl and latest google-cloud-storage\n session.install(\"google-cloud-storage\", \"pyopenssl\")\n else:\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n\n # Data Catalog needed for the column ACL test with a real Policy Tag.\n session.install(\"google-cloud-datacatalog\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n session.install(\"ipython\", \"-c\", constraints_path)\n\n # Run py.test against the system tests.\n session.run(\"py.test\", \"--quiet\", os.path.join(\"tests\", \"system\"), *session.posargs)\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n\n # Check the value of `RUN_SNIPPETS_TESTS` env var. It defaults to true.\n if os.environ.get(\"RUN_SNIPPETS_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SNIPPETS_TESTS is set to false, skipping\")\n\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"google-cloud-testutils\", \"-c\", constraints_path)\n session.install(\"google-cloud-storage\", \"-c\", constraints_path)\n session.install(\"grpcio\", \"-c\", constraints_path)\n\n session.install(\"-e\", \".[all]\", \"-c\", constraints_path)\n\n # Run py.test against the snippets tests.\n # Skip tests in samples/snippets, as those are run in a different session\n # using the nox config from that directory.\n session.run(\"py.test\", os.path.join(\"docs\", \"snippets.py\"), *session.posargs)\n session.run(\n \"py.test\",\n \"samples\",\n \"--ignore=samples/snippets\",\n \"--ignore=samples/geography\",\n *session.posargs,\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef cover(session):\n \"\"\"Run the final coverage report.\n\n This outputs the coverage report aggregating coverage from the unit\n test runs (not system test runs), and then erases coverage data.\n \"\"\"\n session.install(\"coverage\", \"pytest-cov\")\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n session.run(\"coverage\", \"erase\")\n\n\[email protected](python=SYSTEM_TEST_PYTHON_VERSIONS)\ndef prerelease_deps(session):\n \"\"\"Run all tests with prerelease versions of dependencies installed.\n\n https://github.com/googleapis/python-bigquery/issues/95\n \"\"\"\n # PyArrow prerelease packages are published to an alternative PyPI host.\n # https://arrow.apache.org/docs/python/install.html#installing-nightly-packages\n session.install(\n \"--extra-index-url\", \"https://pypi.fury.io/arrow-nightlies/\", \"--pre\", \"pyarrow\"\n )\n session.install(\n \"--pre\",\n \"google-api-core\",\n 
\"google-cloud-bigquery-storage\",\n \"google-cloud-core\",\n \"google-resumable-media\",\n \"grpcio\",\n \"pandas\",\n )\n session.install(\n \"freezegun\",\n \"google-cloud-datacatalog\",\n \"google-cloud-storage\",\n \"google-cloud-testutils\",\n \"IPython\",\n \"mock\",\n \"psutil\",\n \"pytest\",\n \"pytest-cov\",\n )\n session.install(\"-e\", \".[all]\")\n\n # Print out prerelease package versions.\n session.run(\"python\", \"-c\", \"import grpc; print(grpc.__version__)\")\n session.run(\"python\", \"-c\", \"import pandas; print(pandas.__version__)\")\n session.run(\"python\", \"-c\", \"import pyarrow; print(pyarrow.__version__)\")\n\n # Run all tests, except a few samples tests which require extra dependencies.\n session.run(\"py.test\", \"tests/unit\")\n session.run(\"py.test\", \"tests/system\")\n session.run(\"py.test\", \"samples/tests\")\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint(session):\n \"\"\"Run linters.\n\n Returns a failure if the linters find linting errors or sufficiently\n serious code quality issues.\n \"\"\"\n\n session.install(\"flake8\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n session.run(\"flake8\", os.path.join(\"docs\", \"samples\"))\n session.run(\"flake8\", os.path.join(\"docs\", \"snippets.py\"))\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n\n session.install(\"docutils\", \"Pygments\")\n session.run(\"python\", \"setup.py\", \"check\", \"--restructuredtext\", \"--strict\")\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n \"\"\"\n\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docs(session):\n \"\"\"Build the docs.\"\"\"\n\n session.install(\"ipython\", \"recommonmark\", \"sphinx==4.0.1\", \"sphinx_rtd_theme\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-W\", # warnings as errors\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=DEFAULT_PYTHON_VERSION)\ndef docfx(session):\n \"\"\"Build the docfx yaml files for this library.\"\"\"\n\n session.install(\"-e\", \".\")\n session.install(\n \"sphinx==4.0.1\", \"alabaster\", \"recommonmark\", \"gcp-sphinx-docfx-yaml\"\n )\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-D\",\n (\n \"extensions=sphinx.ext.autodoc,\"\n \"sphinx.ext.autosummary,\"\n \"docfx_yaml.extension,\"\n \"sphinx.ext.intersphinx,\"\n \"sphinx.ext.coverage,\"\n \"sphinx.ext.napoleon,\"\n \"sphinx.ext.todo,\"\n \"sphinx.ext.viewcode,\"\n \"recommonmark\"\n ),\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n", "path": "noxfile.py"}]} | 4,005 | 165 |
gh_patches_debug_33610 | rasdani/github-patches | git_diff | litestar-org__litestar-183 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`from starlite import *` broken if `testing` extra not installed
This is only an issue on main, not in any release.
When I want to try a library out, I'll install it into a fresh env, run the Python REPL, and do `from lib import *` to have a play around. If just doing that raised an error, it would freak me out a little about the lib.
Possible solution:
- remove `.testing` imports from `starlite.__all__`
- add deprecation warning for top-level `.testing` imports
- remove `if TYPE_CHECKING` too? May as well if we are doing the above, I think?
Refs: #174 #130
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/__init__.py`
Content:
```
1 from typing import TYPE_CHECKING, Any
2
3 from starlite.datastructures import File, Redirect, State, Stream, Template
4
5 from .app import Starlite
6 from .config import (
7 CacheConfig,
8 CORSConfig,
9 OpenAPIConfig,
10 StaticFilesConfig,
11 TemplateConfig,
12 )
13 from .connection import Request, WebSocket
14 from .controller import Controller
15 from .dto import DTOFactory
16 from .enums import (
17 HttpMethod,
18 MediaType,
19 OpenAPIMediaType,
20 RequestEncodingType,
21 ScopeType,
22 )
23 from .exceptions import (
24 HTTPException,
25 ImproperlyConfiguredException,
26 InternalServerException,
27 MissingDependencyException,
28 NotAuthorizedException,
29 NotFoundException,
30 PermissionDeniedException,
31 ServiceUnavailableException,
32 StarLiteException,
33 ValidationException,
34 )
35 from .handlers import (
36 ASGIRouteHandler,
37 BaseRouteHandler,
38 HTTPRouteHandler,
39 WebsocketRouteHandler,
40 asgi,
41 delete,
42 get,
43 patch,
44 post,
45 put,
46 route,
47 websocket,
48 )
49 from .logging import LoggingConfig, QueueListenerHandler
50 from .middleware import AbstractAuthenticationMiddleware, AuthenticationResult
51 from .openapi.controller import OpenAPIController
52 from .params import Body, Dependency, Parameter
53 from .plugins import PluginProtocol
54 from .provide import Provide
55 from .response import Response
56 from .router import Router
57 from .routes import BaseRoute, HTTPRoute, WebSocketRoute
58 from .types import MiddlewareProtocol, Partial, ResponseHeader
59
60 if TYPE_CHECKING:
61 from .testing import TestClient, create_test_client, create_test_request
62
63
64 __all__ = [
65 "ASGIRouteHandler",
66 "AbstractAuthenticationMiddleware",
67 "AuthenticationResult",
68 "BaseRoute",
69 "BaseRouteHandler",
70 "Body",
71 "CORSConfig",
72 "CacheConfig",
73 "Controller",
74 "Dependency",
75 "DTOFactory",
76 "File",
77 "HTTPException",
78 "HTTPRoute",
79 "HTTPRouteHandler",
80 "HttpMethod",
81 "ImproperlyConfiguredException",
82 "InternalServerException",
83 "LoggingConfig",
84 "MediaType",
85 "MiddlewareProtocol",
86 "MissingDependencyException",
87 "NotAuthorizedException",
88 "NotFoundException",
89 "OpenAPIConfig",
90 "OpenAPIController",
91 "OpenAPIMediaType",
92 "Parameter",
93 "Partial",
94 "PermissionDeniedException",
95 "PluginProtocol",
96 "Provide",
97 "QueueListenerHandler",
98 "Redirect",
99 "Request",
100 "RequestEncodingType",
101 "Response",
102 "ResponseHeader",
103 "Router",
104 "ScopeType",
105 "ServiceUnavailableException",
106 "StarLiteException",
107 "Starlite",
108 "State",
109 "StaticFilesConfig",
110 "Stream",
111 "Template",
112 "TemplateConfig",
113 "TestClient",
114 "ValidationException",
115 "WebSocket",
116 "WebSocketRoute",
117 "WebsocketRouteHandler",
118 "asgi",
119 "create_test_client",
120 "create_test_request",
121 "delete",
122 "get",
123 "patch",
124 "post",
125 "put",
126 "route",
127 "websocket",
128 ]
129
130 _dynamic_imports = {"TestClient", "create_test_client", "create_test_request"}
131
132
133 # pylint: disable=import-outside-toplevel
134 def __getattr__(name: str) -> Any:
135 """Provide lazy importing as per https://peps.python.org/pep-0562/"""
136 if name not in _dynamic_imports:
137 raise AttributeError(f"Module {__package__} has no attribute {name}")
138
139 from . import testing
140
141 attr = globals()[name] = getattr(testing, name)
142 return attr
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/__init__.py b/starlite/__init__.py
--- a/starlite/__init__.py
+++ b/starlite/__init__.py
@@ -1,4 +1,4 @@
-from typing import TYPE_CHECKING, Any
+from typing import Any
from starlite.datastructures import File, Redirect, State, Stream, Template
@@ -57,10 +57,6 @@
from .routes import BaseRoute, HTTPRoute, WebSocketRoute
from .types import MiddlewareProtocol, Partial, ResponseHeader
-if TYPE_CHECKING:
- from .testing import TestClient, create_test_client, create_test_request
-
-
__all__ = [
"ASGIRouteHandler",
"AbstractAuthenticationMiddleware",
@@ -110,14 +106,11 @@
"Stream",
"Template",
"TemplateConfig",
- "TestClient",
"ValidationException",
"WebSocket",
"WebSocketRoute",
"WebsocketRouteHandler",
"asgi",
- "create_test_client",
- "create_test_request",
"delete",
"get",
"patch",
@@ -127,15 +120,24 @@
"websocket",
]
-_dynamic_imports = {"TestClient", "create_test_client", "create_test_request"}
+
+_deprecated_imports = {"TestClient", "create_test_client", "create_test_request"}
# pylint: disable=import-outside-toplevel
def __getattr__(name: str) -> Any:
"""Provide lazy importing as per https://peps.python.org/pep-0562/"""
- if name not in _dynamic_imports:
+ if name not in _deprecated_imports:
raise AttributeError(f"Module {__package__} has no attribute {name}")
+ import warnings
+
+ warnings.warn(
+ f"Importing {name} from {__package__} is deprecated, use `from startlite.testing import {name}` instead",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
from . import testing
attr = globals()[name] = getattr(testing, name)
| {"golden_diff": "diff --git a/starlite/__init__.py b/starlite/__init__.py\n--- a/starlite/__init__.py\n+++ b/starlite/__init__.py\n@@ -1,4 +1,4 @@\n-from typing import TYPE_CHECKING, Any\n+from typing import Any\n \n from starlite.datastructures import File, Redirect, State, Stream, Template\n \n@@ -57,10 +57,6 @@\n from .routes import BaseRoute, HTTPRoute, WebSocketRoute\n from .types import MiddlewareProtocol, Partial, ResponseHeader\n \n-if TYPE_CHECKING:\n- from .testing import TestClient, create_test_client, create_test_request\n-\n-\n __all__ = [\n \"ASGIRouteHandler\",\n \"AbstractAuthenticationMiddleware\",\n@@ -110,14 +106,11 @@\n \"Stream\",\n \"Template\",\n \"TemplateConfig\",\n- \"TestClient\",\n \"ValidationException\",\n \"WebSocket\",\n \"WebSocketRoute\",\n \"WebsocketRouteHandler\",\n \"asgi\",\n- \"create_test_client\",\n- \"create_test_request\",\n \"delete\",\n \"get\",\n \"patch\",\n@@ -127,15 +120,24 @@\n \"websocket\",\n ]\n \n-_dynamic_imports = {\"TestClient\", \"create_test_client\", \"create_test_request\"}\n+\n+_deprecated_imports = {\"TestClient\", \"create_test_client\", \"create_test_request\"}\n \n \n # pylint: disable=import-outside-toplevel\n def __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n- if name not in _dynamic_imports:\n+ if name not in _deprecated_imports:\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n \n+ import warnings\n+\n+ warnings.warn(\n+ f\"Importing {name} from {__package__} is deprecated, use `from startlite.testing import {name}` instead\",\n+ DeprecationWarning,\n+ stacklevel=2,\n+ )\n+\n from . import testing\n \n attr = globals()[name] = getattr(testing, name)\n", "issue": "`from starlite import *` broken if `testing` extra not installed\nThis is only an issue on main, not in any release.\r\n\r\nWhen I want to try a library out, I'll install it into a fresh env, run python repl and do `from lib import *` and have a play around. If just doing that raised an error it would freak me out a little about the lib.\r\n\r\nPossible solution:\r\n- remove `.testing` imports from `starlite.__all__`\r\n- add deprecation warning for top-level `.testing` imports\r\n- remove `if TYPE_CHECKING` too? 
May as well if we are doing the above, I think?\r\n\r\nRefs: #174 #130 \n", "before_files": [{"content": "from typing import TYPE_CHECKING, Any\n\nfrom starlite.datastructures import File, Redirect, State, Stream, Template\n\nfrom .app import Starlite\nfrom .config import (\n CacheConfig,\n CORSConfig,\n OpenAPIConfig,\n StaticFilesConfig,\n TemplateConfig,\n)\nfrom .connection import Request, WebSocket\nfrom .controller import Controller\nfrom .dto import DTOFactory\nfrom .enums import (\n HttpMethod,\n MediaType,\n OpenAPIMediaType,\n RequestEncodingType,\n ScopeType,\n)\nfrom .exceptions import (\n HTTPException,\n ImproperlyConfiguredException,\n InternalServerException,\n MissingDependencyException,\n NotAuthorizedException,\n NotFoundException,\n PermissionDeniedException,\n ServiceUnavailableException,\n StarLiteException,\n ValidationException,\n)\nfrom .handlers import (\n ASGIRouteHandler,\n BaseRouteHandler,\n HTTPRouteHandler,\n WebsocketRouteHandler,\n asgi,\n delete,\n get,\n patch,\n post,\n put,\n route,\n websocket,\n)\nfrom .logging import LoggingConfig, QueueListenerHandler\nfrom .middleware import AbstractAuthenticationMiddleware, AuthenticationResult\nfrom .openapi.controller import OpenAPIController\nfrom .params import Body, Dependency, Parameter\nfrom .plugins import PluginProtocol\nfrom .provide import Provide\nfrom .response import Response\nfrom .router import Router\nfrom .routes import BaseRoute, HTTPRoute, WebSocketRoute\nfrom .types import MiddlewareProtocol, Partial, ResponseHeader\n\nif TYPE_CHECKING:\n from .testing import TestClient, create_test_client, create_test_request\n\n\n__all__ = [\n \"ASGIRouteHandler\",\n \"AbstractAuthenticationMiddleware\",\n \"AuthenticationResult\",\n \"BaseRoute\",\n \"BaseRouteHandler\",\n \"Body\",\n \"CORSConfig\",\n \"CacheConfig\",\n \"Controller\",\n \"Dependency\",\n \"DTOFactory\",\n \"File\",\n \"HTTPException\",\n \"HTTPRoute\",\n \"HTTPRouteHandler\",\n \"HttpMethod\",\n \"ImproperlyConfiguredException\",\n \"InternalServerException\",\n \"LoggingConfig\",\n \"MediaType\",\n \"MiddlewareProtocol\",\n \"MissingDependencyException\",\n \"NotAuthorizedException\",\n \"NotFoundException\",\n \"OpenAPIConfig\",\n \"OpenAPIController\",\n \"OpenAPIMediaType\",\n \"Parameter\",\n \"Partial\",\n \"PermissionDeniedException\",\n \"PluginProtocol\",\n \"Provide\",\n \"QueueListenerHandler\",\n \"Redirect\",\n \"Request\",\n \"RequestEncodingType\",\n \"Response\",\n \"ResponseHeader\",\n \"Router\",\n \"ScopeType\",\n \"ServiceUnavailableException\",\n \"StarLiteException\",\n \"Starlite\",\n \"State\",\n \"StaticFilesConfig\",\n \"Stream\",\n \"Template\",\n \"TemplateConfig\",\n \"TestClient\",\n \"ValidationException\",\n \"WebSocket\",\n \"WebSocketRoute\",\n \"WebsocketRouteHandler\",\n \"asgi\",\n \"create_test_client\",\n \"create_test_request\",\n \"delete\",\n \"get\",\n \"patch\",\n \"post\",\n \"put\",\n \"route\",\n \"websocket\",\n]\n\n_dynamic_imports = {\"TestClient\", \"create_test_client\", \"create_test_request\"}\n\n\n# pylint: disable=import-outside-toplevel\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n if name not in _dynamic_imports:\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n from . 
import testing\n\n attr = globals()[name] = getattr(testing, name)\n return attr\n", "path": "starlite/__init__.py"}], "after_files": [{"content": "from typing import Any\n\nfrom starlite.datastructures import File, Redirect, State, Stream, Template\n\nfrom .app import Starlite\nfrom .config import (\n CacheConfig,\n CORSConfig,\n OpenAPIConfig,\n StaticFilesConfig,\n TemplateConfig,\n)\nfrom .connection import Request, WebSocket\nfrom .controller import Controller\nfrom .dto import DTOFactory\nfrom .enums import (\n HttpMethod,\n MediaType,\n OpenAPIMediaType,\n RequestEncodingType,\n ScopeType,\n)\nfrom .exceptions import (\n HTTPException,\n ImproperlyConfiguredException,\n InternalServerException,\n MissingDependencyException,\n NotAuthorizedException,\n NotFoundException,\n PermissionDeniedException,\n ServiceUnavailableException,\n StarLiteException,\n ValidationException,\n)\nfrom .handlers import (\n ASGIRouteHandler,\n BaseRouteHandler,\n HTTPRouteHandler,\n WebsocketRouteHandler,\n asgi,\n delete,\n get,\n patch,\n post,\n put,\n route,\n websocket,\n)\nfrom .logging import LoggingConfig, QueueListenerHandler\nfrom .middleware import AbstractAuthenticationMiddleware, AuthenticationResult\nfrom .openapi.controller import OpenAPIController\nfrom .params import Body, Dependency, Parameter\nfrom .plugins import PluginProtocol\nfrom .provide import Provide\nfrom .response import Response\nfrom .router import Router\nfrom .routes import BaseRoute, HTTPRoute, WebSocketRoute\nfrom .types import MiddlewareProtocol, Partial, ResponseHeader\n\n__all__ = [\n \"ASGIRouteHandler\",\n \"AbstractAuthenticationMiddleware\",\n \"AuthenticationResult\",\n \"BaseRoute\",\n \"BaseRouteHandler\",\n \"Body\",\n \"CORSConfig\",\n \"CacheConfig\",\n \"Controller\",\n \"Dependency\",\n \"DTOFactory\",\n \"File\",\n \"HTTPException\",\n \"HTTPRoute\",\n \"HTTPRouteHandler\",\n \"HttpMethod\",\n \"ImproperlyConfiguredException\",\n \"InternalServerException\",\n \"LoggingConfig\",\n \"MediaType\",\n \"MiddlewareProtocol\",\n \"MissingDependencyException\",\n \"NotAuthorizedException\",\n \"NotFoundException\",\n \"OpenAPIConfig\",\n \"OpenAPIController\",\n \"OpenAPIMediaType\",\n \"Parameter\",\n \"Partial\",\n \"PermissionDeniedException\",\n \"PluginProtocol\",\n \"Provide\",\n \"QueueListenerHandler\",\n \"Redirect\",\n \"Request\",\n \"RequestEncodingType\",\n \"Response\",\n \"ResponseHeader\",\n \"Router\",\n \"ScopeType\",\n \"ServiceUnavailableException\",\n \"StarLiteException\",\n \"Starlite\",\n \"State\",\n \"StaticFilesConfig\",\n \"Stream\",\n \"Template\",\n \"TemplateConfig\",\n \"ValidationException\",\n \"WebSocket\",\n \"WebSocketRoute\",\n \"WebsocketRouteHandler\",\n \"asgi\",\n \"delete\",\n \"get\",\n \"patch\",\n \"post\",\n \"put\",\n \"route\",\n \"websocket\",\n]\n\n\n_deprecated_imports = {\"TestClient\", \"create_test_client\", \"create_test_request\"}\n\n\n# pylint: disable=import-outside-toplevel\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n if name not in _deprecated_imports:\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n import warnings\n\n warnings.warn(\n f\"Importing {name} from {__package__} is deprecated, use `from startlite.testing import {name}` instead\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n from . import testing\n\n attr = globals()[name] = getattr(testing, name)\n return attr\n", "path": "starlite/__init__.py"}]} | 1,510 | 471 |
gh_patches_debug_29079 | rasdani/github-patches | git_diff | pulp__pulpcore-133 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cleanup old publications - general cleanup
**Is your feature request related to a problem? Please describe.**
We create a lot of new repos and publications - and also delete them after a certain time.
Now I see that the publications remain.
That means that, in normal operation, we end up with hundreds of publications.
But we don't know what they belong to, because there is no name tag.
**Describe the solution you'd like**
Maybe there is a possibility to remove old publications that have no link to a repository or distribution.
**Describe alternatives you've considered**
No idea, I'm just thinking about a cleanup job.
--- END ISSUE ---
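The job the reporter describes could look roughly like the sketch below. This is a hedged illustration only: the reverse relation name `publicationdistribution` is an assumption made for the example and is not taken from the code segments that follow.

```python
# Hypothetical cleanup sketch -- not part of the files shown below.
# Assumption: Publication gains the default reverse accessor
# "publicationdistribution" from the distribution model that points at it.
from pulpcore.app import models


def delete_orphan_publications():
    """Delete completed publications that no distribution currently serves."""
    orphans = models.Publication.objects.filter(
        complete=True,                         # `complete` is used by the serializers below
        publicationdistribution__isnull=True,  # assumed reverse relation name
    )
    return orphans.delete()
```

A real implementation would likely run as a periodic task and might keep the newest publication per repository version; that policy is exactly what the issue leaves open.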
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/serializers/publication.py`
Content:
```
1 from gettext import gettext as _
2
3 from django.core import validators
4 from django.db.models import Q
5
6 from rest_framework import serializers
7 from rest_framework.validators import UniqueValidator
8
9 from pulpcore.app import models
10 from pulpcore.app.serializers import (
11 BaseURLField,
12 DetailIdentityField,
13 DetailRelatedField,
14 NestedRelatedField,
15 RelatedField,
16 MasterModelSerializer,
17 ModelSerializer,
18 validate_unknown_fields,
19 )
20
21
22 class PublicationSerializer(MasterModelSerializer):
23 _href = DetailIdentityField()
24 repository_version = NestedRelatedField(
25 view_name='versions-detail',
26 lookup_field='number',
27 parent_lookup_kwargs={'repository_pk': 'repository__pk'},
28 queryset=models.RepositoryVersion.objects.all(),
29 required=False,
30 )
31 repository = serializers.HyperlinkedRelatedField(
32 help_text=_('A URI of the repository to be published.'),
33 required=False,
34 label=_('Repository'),
35 queryset=models.Repository.objects.all(),
36 view_name='repositories-detail',
37 )
38
39 def validate(self, data):
40 if hasattr(self, 'initial_data'):
41 validate_unknown_fields(self.initial_data, self.fields)
42
43 repository = data.pop('repository', None) # not an actual field on publication
44 repository_version = data.get('repository_version')
45 if not repository and not repository_version:
46 raise serializers.ValidationError(
47 _("Either the 'repository' or 'repository_version' need to be specified"))
48 elif not repository and repository_version:
49 return data
50 elif repository and not repository_version:
51 version = models.RepositoryVersion.latest(repository)
52 if version:
53 new_data = {'repository_version': version}
54 new_data.update(data)
55 return new_data
56 else:
57 raise serializers.ValidationError(
58 detail=_('Repository has no version available to create Publication from'))
59 raise serializers.ValidationError(
60 _("Either the 'repository' or 'repository_version' need to be specified "
61 "but not both.")
62 )
63
64 class Meta:
65 abstract = True
66 model = models.Publication
67 fields = MasterModelSerializer.Meta.fields + (
68 'publisher',
69 'repository_version',
70 'repository'
71 )
72
73
74 class ContentGuardSerializer(MasterModelSerializer):
75 _href = DetailIdentityField()
76
77 name = serializers.CharField(
78 help_text=_('The unique name.')
79 )
80 description = serializers.CharField(
81 help_text=_('An optional description.'),
82 allow_blank=True,
83 required=False
84 )
85
86 class Meta:
87 model = models.ContentGuard
88 fields = MasterModelSerializer.Meta.fields + (
89 'name',
90 'description'
91 )
92
93
94 class BaseDistributionSerializer(MasterModelSerializer):
95 _href = DetailIdentityField()
96 base_path = serializers.CharField(
97 help_text=_('The base (relative) path component of the published url. Avoid paths that \
98 overlap with other distribution base paths (e.g. "foo" and "foo/bar")'),
99 validators=[validators.MaxLengthValidator(
100 models.BaseDistribution._meta.get_field('base_path').max_length,
101 message=_('`base_path` length must be less than {} characters').format(
102 models.BaseDistribution._meta.get_field('base_path').max_length
103 )),
104 UniqueValidator(queryset=models.BaseDistribution.objects.all()),
105 ]
106 )
107 base_url = BaseURLField(
108 source='base_path', read_only=True,
109 help_text=_('The URL for accessing the publication as defined by this distribution.')
110 )
111 content_guard = DetailRelatedField(
112 required=False,
113 help_text=_('An optional content-guard.'),
114 queryset=models.ContentGuard.objects.all(),
115 allow_null=True
116 )
117 name = serializers.CharField(
118 help_text=_('A unique name. Ex, `rawhide` and `stable`.'),
119 validators=[validators.MaxLengthValidator(
120 models.BaseDistribution._meta.get_field('name').max_length,
121 message=_('`name` length must be less than {} characters').format(
122 models.BaseDistribution._meta.get_field('name').max_length
123 )),
124 UniqueValidator(queryset=models.BaseDistribution.objects.all())]
125 )
126 remote = DetailRelatedField(
127 required=False,
128 help_text=_('Remote that can be used to fetch content when using pull-through caching.'),
129 queryset=models.Remote.objects.all(),
130 allow_null=True
131 )
132
133 class Meta:
134 abstract = True
135 model = models.BaseDistribution
136 fields = ModelSerializer.Meta.fields + (
137 'base_path',
138 'base_url',
139 'content_guard',
140 'name',
141 'remote',
142 )
143
144 def _validate_path_overlap(self, path):
145 # look for any base paths nested in path
146 search = path.split("/")[0]
147 q = Q(base_path=search)
148 for subdir in path.split("/")[1:]:
149 search = "/".join((search, subdir))
150 q |= Q(base_path=search)
151
152 # look for any base paths that nest path
153 q |= Q(base_path__startswith='{}/'.format(path))
154 qs = models.BaseDistribution.objects.filter(q)
155
156 if self.instance is not None:
157 qs = qs.exclude(pk=self.instance.pk)
158
159 match = qs.first()
160 if match:
161 raise serializers.ValidationError(detail=_("Overlaps with existing distribution '"
162 "{}'").format(match.name))
163
164 return path
165
166 def validate_base_path(self, path):
167 self._validate_relative_path(path)
168 return self._validate_path_overlap(path)
169
170
171 class PublicationDistributionSerializer(BaseDistributionSerializer):
172 publication = DetailRelatedField(
173 required=False,
174 help_text=_('Publication to be served'),
175 queryset=models.Publication.objects.exclude(complete=False),
176 allow_null=True
177 )
178
179 class Meta:
180 abstract = True
181 fields = BaseDistributionSerializer.Meta.fields + (
182 'publication',
183 )
184
185
186 class RepositoryVersionDistributionSerializer(BaseDistributionSerializer):
187 repository = RelatedField(
188 required=False,
189 help_text=_('The latest RepositoryVersion for this Repository will be served.'),
190 queryset=models.Repository.objects.all(),
191 view_name='repositories-detail',
192 allow_null=True
193 )
194 repository_version = NestedRelatedField(
195 required=False,
196 help_text=_('RepositoryVersion to be served'),
197 queryset=models.RepositoryVersion.objects.exclude(complete=False),
198 view_name='versions-detail',
199 allow_null=True,
200 lookup_field='number',
201 parent_lookup_kwargs={'repository_pk': 'repository__pk'},
202 )
203
204 class Meta:
205 abstract = True
206 fields = BaseDistributionSerializer.Meta.fields + (
207 'repository',
208 'repository_version',
209 )
210
211 def validate(self, data):
212 super().validate(data)
213
214 if 'repository' in data and 'repository_version' in data:
215 msg = _("The attributes 'repository' and 'repository_version' must be used"
216 "exclusively.")
217 raise serializers.ValidationError(msg)
218
219 return data
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/serializers/publication.py b/pulpcore/app/serializers/publication.py
--- a/pulpcore/app/serializers/publication.py
+++ b/pulpcore/app/serializers/publication.py
@@ -92,6 +92,22 @@
class BaseDistributionSerializer(MasterModelSerializer):
+ """
+ The Serializer for the BaseDistribution model.
+
+ The serializer deliberately omits the "remote" field, which is used for
+ pull-through caching only. Plugins implementing pull-through caching will
+ have to add the field in their derived serializer class like this::
+
+ remote = DetailRelatedField(
+ required=False,
+ help_text=_('Remote that can be used to fetch content when using pull-through caching.'),
+ queryset=models.Remote.objects.all(),
+ allow_null=True
+ )
+
+ """
+
_href = DetailIdentityField()
base_path = serializers.CharField(
help_text=_('The base (relative) path component of the published url. Avoid paths that \
@@ -123,12 +139,6 @@
)),
UniqueValidator(queryset=models.BaseDistribution.objects.all())]
)
- remote = DetailRelatedField(
- required=False,
- help_text=_('Remote that can be used to fetch content when using pull-through caching.'),
- queryset=models.Remote.objects.all(),
- allow_null=True
- )
class Meta:
abstract = True
@@ -138,7 +148,6 @@
'base_url',
'content_guard',
'name',
- 'remote',
)
def _validate_path_overlap(self, path):
| {"golden_diff": "diff --git a/pulpcore/app/serializers/publication.py b/pulpcore/app/serializers/publication.py\n--- a/pulpcore/app/serializers/publication.py\n+++ b/pulpcore/app/serializers/publication.py\n@@ -92,6 +92,22 @@\n \n \n class BaseDistributionSerializer(MasterModelSerializer):\n+ \"\"\"\n+ The Serializer for the BaseDistribution model.\n+\n+ The serializer deliberately omits the \"remote\" field, which is used for\n+ pull-through caching only. Plugins implementing pull-through caching will\n+ have to add the field in their derived serializer class like this::\n+\n+ remote = DetailRelatedField(\n+ required=False,\n+ help_text=_('Remote that can be used to fetch content when using pull-through caching.'),\n+ queryset=models.Remote.objects.all(),\n+ allow_null=True\n+ )\n+\n+ \"\"\"\n+\n _href = DetailIdentityField()\n base_path = serializers.CharField(\n help_text=_('The base (relative) path component of the published url. Avoid paths that \\\n@@ -123,12 +139,6 @@\n )),\n UniqueValidator(queryset=models.BaseDistribution.objects.all())]\n )\n- remote = DetailRelatedField(\n- required=False,\n- help_text=_('Remote that can be used to fetch content when using pull-through caching.'),\n- queryset=models.Remote.objects.all(),\n- allow_null=True\n- )\n \n class Meta:\n abstract = True\n@@ -138,7 +148,6 @@\n 'base_url',\n 'content_guard',\n 'name',\n- 'remote',\n )\n \n def _validate_path_overlap(self, path):\n", "issue": "Cleanup old publications - general cleanup\n**Is your feature request related to a problem? Please describe.**\r\nWe do a lot of different new repos and publications - and also delete them after certain time.\r\nNow i see, the publications resist.\r\nMeans, in normal handling we have hundreds of publications.\r\nBut, we don't know where they do belong to, becuase there is no name tag. \r\n\r\n**Describe the solution you'd like**\r\nMay be there is a possibility to remove old publications with no belonging or link to a repository or distribution. 
\r\n\r\n**Describe alternatives you've considered**\r\nNo idea, i just think about an clenaup job.\r\n\r\n\n", "before_files": [{"content": "from gettext import gettext as _\n\nfrom django.core import validators\nfrom django.db.models import Q\n\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom pulpcore.app.serializers import (\n BaseURLField,\n DetailIdentityField,\n DetailRelatedField,\n NestedRelatedField,\n RelatedField,\n MasterModelSerializer,\n ModelSerializer,\n validate_unknown_fields,\n)\n\n\nclass PublicationSerializer(MasterModelSerializer):\n _href = DetailIdentityField()\n repository_version = NestedRelatedField(\n view_name='versions-detail',\n lookup_field='number',\n parent_lookup_kwargs={'repository_pk': 'repository__pk'},\n queryset=models.RepositoryVersion.objects.all(),\n required=False,\n )\n repository = serializers.HyperlinkedRelatedField(\n help_text=_('A URI of the repository to be published.'),\n required=False,\n label=_('Repository'),\n queryset=models.Repository.objects.all(),\n view_name='repositories-detail',\n )\n\n def validate(self, data):\n if hasattr(self, 'initial_data'):\n validate_unknown_fields(self.initial_data, self.fields)\n\n repository = data.pop('repository', None) # not an actual field on publication\n repository_version = data.get('repository_version')\n if not repository and not repository_version:\n raise serializers.ValidationError(\n _(\"Either the 'repository' or 'repository_version' need to be specified\"))\n elif not repository and repository_version:\n return data\n elif repository and not repository_version:\n version = models.RepositoryVersion.latest(repository)\n if version:\n new_data = {'repository_version': version}\n new_data.update(data)\n return new_data\n else:\n raise serializers.ValidationError(\n detail=_('Repository has no version available to create Publication from'))\n raise serializers.ValidationError(\n _(\"Either the 'repository' or 'repository_version' need to be specified \"\n \"but not both.\")\n )\n\n class Meta:\n abstract = True\n model = models.Publication\n fields = MasterModelSerializer.Meta.fields + (\n 'publisher',\n 'repository_version',\n 'repository'\n )\n\n\nclass ContentGuardSerializer(MasterModelSerializer):\n _href = DetailIdentityField()\n\n name = serializers.CharField(\n help_text=_('The unique name.')\n )\n description = serializers.CharField(\n help_text=_('An optional description.'),\n allow_blank=True,\n required=False\n )\n\n class Meta:\n model = models.ContentGuard\n fields = MasterModelSerializer.Meta.fields + (\n 'name',\n 'description'\n )\n\n\nclass BaseDistributionSerializer(MasterModelSerializer):\n _href = DetailIdentityField()\n base_path = serializers.CharField(\n help_text=_('The base (relative) path component of the published url. Avoid paths that \\\n overlap with other distribution base paths (e.g. 
\"foo\" and \"foo/bar\")'),\n validators=[validators.MaxLengthValidator(\n models.BaseDistribution._meta.get_field('base_path').max_length,\n message=_('`base_path` length must be less than {} characters').format(\n models.BaseDistribution._meta.get_field('base_path').max_length\n )),\n UniqueValidator(queryset=models.BaseDistribution.objects.all()),\n ]\n )\n base_url = BaseURLField(\n source='base_path', read_only=True,\n help_text=_('The URL for accessing the publication as defined by this distribution.')\n )\n content_guard = DetailRelatedField(\n required=False,\n help_text=_('An optional content-guard.'),\n queryset=models.ContentGuard.objects.all(),\n allow_null=True\n )\n name = serializers.CharField(\n help_text=_('A unique name. Ex, `rawhide` and `stable`.'),\n validators=[validators.MaxLengthValidator(\n models.BaseDistribution._meta.get_field('name').max_length,\n message=_('`name` length must be less than {} characters').format(\n models.BaseDistribution._meta.get_field('name').max_length\n )),\n UniqueValidator(queryset=models.BaseDistribution.objects.all())]\n )\n remote = DetailRelatedField(\n required=False,\n help_text=_('Remote that can be used to fetch content when using pull-through caching.'),\n queryset=models.Remote.objects.all(),\n allow_null=True\n )\n\n class Meta:\n abstract = True\n model = models.BaseDistribution\n fields = ModelSerializer.Meta.fields + (\n 'base_path',\n 'base_url',\n 'content_guard',\n 'name',\n 'remote',\n )\n\n def _validate_path_overlap(self, path):\n # look for any base paths nested in path\n search = path.split(\"/\")[0]\n q = Q(base_path=search)\n for subdir in path.split(\"/\")[1:]:\n search = \"/\".join((search, subdir))\n q |= Q(base_path=search)\n\n # look for any base paths that nest path\n q |= Q(base_path__startswith='{}/'.format(path))\n qs = models.BaseDistribution.objects.filter(q)\n\n if self.instance is not None:\n qs = qs.exclude(pk=self.instance.pk)\n\n match = qs.first()\n if match:\n raise serializers.ValidationError(detail=_(\"Overlaps with existing distribution '\"\n \"{}'\").format(match.name))\n\n return path\n\n def validate_base_path(self, path):\n self._validate_relative_path(path)\n return self._validate_path_overlap(path)\n\n\nclass PublicationDistributionSerializer(BaseDistributionSerializer):\n publication = DetailRelatedField(\n required=False,\n help_text=_('Publication to be served'),\n queryset=models.Publication.objects.exclude(complete=False),\n allow_null=True\n )\n\n class Meta:\n abstract = True\n fields = BaseDistributionSerializer.Meta.fields + (\n 'publication',\n )\n\n\nclass RepositoryVersionDistributionSerializer(BaseDistributionSerializer):\n repository = RelatedField(\n required=False,\n help_text=_('The latest RepositoryVersion for this Repository will be served.'),\n queryset=models.Repository.objects.all(),\n view_name='repositories-detail',\n allow_null=True\n )\n repository_version = NestedRelatedField(\n required=False,\n help_text=_('RepositoryVersion to be served'),\n queryset=models.RepositoryVersion.objects.exclude(complete=False),\n view_name='versions-detail',\n allow_null=True,\n lookup_field='number',\n parent_lookup_kwargs={'repository_pk': 'repository__pk'},\n )\n\n class Meta:\n abstract = True\n fields = BaseDistributionSerializer.Meta.fields + (\n 'repository',\n 'repository_version',\n )\n\n def validate(self, data):\n super().validate(data)\n\n if 'repository' in data and 'repository_version' in data:\n msg = _(\"The attributes 'repository' and 'repository_version' must 
be used\"\n \"exclusively.\")\n raise serializers.ValidationError(msg)\n\n return data\n", "path": "pulpcore/app/serializers/publication.py"}], "after_files": [{"content": "from gettext import gettext as _\n\nfrom django.core import validators\nfrom django.db.models import Q\n\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom pulpcore.app.serializers import (\n BaseURLField,\n DetailIdentityField,\n DetailRelatedField,\n NestedRelatedField,\n RelatedField,\n MasterModelSerializer,\n ModelSerializer,\n validate_unknown_fields,\n)\n\n\nclass PublicationSerializer(MasterModelSerializer):\n _href = DetailIdentityField()\n repository_version = NestedRelatedField(\n view_name='versions-detail',\n lookup_field='number',\n parent_lookup_kwargs={'repository_pk': 'repository__pk'},\n queryset=models.RepositoryVersion.objects.all(),\n required=False,\n )\n repository = serializers.HyperlinkedRelatedField(\n help_text=_('A URI of the repository to be published.'),\n required=False,\n label=_('Repository'),\n queryset=models.Repository.objects.all(),\n view_name='repositories-detail',\n )\n\n def validate(self, data):\n if hasattr(self, 'initial_data'):\n validate_unknown_fields(self.initial_data, self.fields)\n\n repository = data.pop('repository', None) # not an actual field on publication\n repository_version = data.get('repository_version')\n if not repository and not repository_version:\n raise serializers.ValidationError(\n _(\"Either the 'repository' or 'repository_version' need to be specified\"))\n elif not repository and repository_version:\n return data\n elif repository and not repository_version:\n version = models.RepositoryVersion.latest(repository)\n if version:\n new_data = {'repository_version': version}\n new_data.update(data)\n return new_data\n else:\n raise serializers.ValidationError(\n detail=_('Repository has no version available to create Publication from'))\n raise serializers.ValidationError(\n _(\"Either the 'repository' or 'repository_version' need to be specified \"\n \"but not both.\")\n )\n\n class Meta:\n abstract = True\n model = models.Publication\n fields = MasterModelSerializer.Meta.fields + (\n 'publisher',\n 'repository_version',\n 'repository'\n )\n\n\nclass ContentGuardSerializer(MasterModelSerializer):\n _href = DetailIdentityField()\n\n name = serializers.CharField(\n help_text=_('The unique name.')\n )\n description = serializers.CharField(\n help_text=_('An optional description.'),\n allow_blank=True,\n required=False\n )\n\n class Meta:\n model = models.ContentGuard\n fields = MasterModelSerializer.Meta.fields + (\n 'name',\n 'description'\n )\n\n\nclass BaseDistributionSerializer(MasterModelSerializer):\n \"\"\"\n The Serializer for the BaseDistribution model.\n\n The serializer deliberately omits the \"remote\" field, which is used for\n pull-through caching only. Plugins implementing pull-through caching will\n have to add the field in their derived serializer class like this::\n\n remote = DetailRelatedField(\n required=False,\n help_text=_('Remote that can be used to fetch content when using pull-through caching.'),\n queryset=models.Remote.objects.all(),\n allow_null=True\n )\n\n \"\"\"\n\n _href = DetailIdentityField()\n base_path = serializers.CharField(\n help_text=_('The base (relative) path component of the published url. Avoid paths that \\\n overlap with other distribution base paths (e.g. 
\"foo\" and \"foo/bar\")'),\n validators=[validators.MaxLengthValidator(\n models.BaseDistribution._meta.get_field('base_path').max_length,\n message=_('`base_path` length must be less than {} characters').format(\n models.BaseDistribution._meta.get_field('base_path').max_length\n )),\n UniqueValidator(queryset=models.BaseDistribution.objects.all()),\n ]\n )\n base_url = BaseURLField(\n source='base_path', read_only=True,\n help_text=_('The URL for accessing the publication as defined by this distribution.')\n )\n content_guard = DetailRelatedField(\n required=False,\n help_text=_('An optional content-guard.'),\n queryset=models.ContentGuard.objects.all(),\n allow_null=True\n )\n name = serializers.CharField(\n help_text=_('A unique name. Ex, `rawhide` and `stable`.'),\n validators=[validators.MaxLengthValidator(\n models.BaseDistribution._meta.get_field('name').max_length,\n message=_('`name` length must be less than {} characters').format(\n models.BaseDistribution._meta.get_field('name').max_length\n )),\n UniqueValidator(queryset=models.BaseDistribution.objects.all())]\n )\n\n class Meta:\n abstract = True\n model = models.BaseDistribution\n fields = ModelSerializer.Meta.fields + (\n 'base_path',\n 'base_url',\n 'content_guard',\n 'name',\n )\n\n def _validate_path_overlap(self, path):\n # look for any base paths nested in path\n search = path.split(\"/\")[0]\n q = Q(base_path=search)\n for subdir in path.split(\"/\")[1:]:\n search = \"/\".join((search, subdir))\n q |= Q(base_path=search)\n\n # look for any base paths that nest path\n q |= Q(base_path__startswith='{}/'.format(path))\n qs = models.BaseDistribution.objects.filter(q)\n\n if self.instance is not None:\n qs = qs.exclude(pk=self.instance.pk)\n\n match = qs.first()\n if match:\n raise serializers.ValidationError(detail=_(\"Overlaps with existing distribution '\"\n \"{}'\").format(match.name))\n\n return path\n\n def validate_base_path(self, path):\n self._validate_relative_path(path)\n return self._validate_path_overlap(path)\n\n\nclass PublicationDistributionSerializer(BaseDistributionSerializer):\n publication = DetailRelatedField(\n required=False,\n help_text=_('Publication to be served'),\n queryset=models.Publication.objects.exclude(complete=False),\n allow_null=True\n )\n\n class Meta:\n abstract = True\n fields = BaseDistributionSerializer.Meta.fields + (\n 'publication',\n )\n\n\nclass RepositoryVersionDistributionSerializer(BaseDistributionSerializer):\n repository = RelatedField(\n required=False,\n help_text=_('The latest RepositoryVersion for this Repository will be served.'),\n queryset=models.Repository.objects.all(),\n view_name='repositories-detail',\n allow_null=True\n )\n repository_version = NestedRelatedField(\n required=False,\n help_text=_('RepositoryVersion to be served'),\n queryset=models.RepositoryVersion.objects.exclude(complete=False),\n view_name='versions-detail',\n allow_null=True,\n lookup_field='number',\n parent_lookup_kwargs={'repository_pk': 'repository__pk'},\n )\n\n class Meta:\n abstract = True\n fields = BaseDistributionSerializer.Meta.fields + (\n 'repository',\n 'repository_version',\n )\n\n def validate(self, data):\n super().validate(data)\n\n if 'repository' in data and 'repository_version' in data:\n msg = _(\"The attributes 'repository' and 'repository_version' must be used\"\n \"exclusively.\")\n raise serializers.ValidationError(msg)\n\n return data\n", "path": "pulpcore/app/serializers/publication.py"}]} | 2,357 | 362 |
gh_patches_debug_14782 | rasdani/github-patches | git_diff | Textualize__textual-1359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validators and watchers should run hand-in-hand
When `init=True` in a `reactive` attribute, the `watch_` method for that attribute runs, but the `validate_` method does not.
This seems rather confusing and we think that validators and watchers should always run hand-in-hand.
If the watcher is called, you should be confident that the incoming `new_value` has already been validated by the `validate_` method.
--- END ISSUE ---
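For illustration only (not part of the original report or of Textual's code base): the behaviour the issue asks for is that `validate_<name>` always runs before `watch_<name>`, including on the very first assignment. Below is a minimal, self-contained descriptor sketch of that ordering; the class and attribute names are made up for the example, and the real change belongs in `Reactive.__set__` in the file quoted further down.

```python
# Standalone sketch of the requested ordering (not Textual's actual code):
# validate_<name> always runs before watch_<name>, including on the first set.
class MiniReactive:
    def __init__(self, default):
        self.default = default

    def __set_name__(self, owner, name):
        self.name = name
        self.internal = f"_reactive_{name}"

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.internal, self.default)

    def __set__(self, obj, value):
        validate = getattr(obj, f"validate_{self.name}", None)
        if callable(validate):
            value = validate(value)          # validation happens first, every time
        old = getattr(obj, self.internal, self.default)
        setattr(obj, self.internal, value)
        watch = getattr(obj, f"watch_{self.name}", None)
        if callable(watch) and value != old:
            watch(old, value)                # the watcher only ever sees validated values


class Counter:
    count = MiniReactive(0)

    def validate_count(self, value):
        return max(0, value)                 # clamp negative values

    def watch_count(self, old, new):
        print(f"count changed: {old} -> {new}")


c = Counter()
c.count = -5   # clamped to 0 by the validator; equals the old value, so no watch call
c.count = 3    # prints: count changed: 0 -> 3
```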
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/reactive.py`
Content:
```
1 from __future__ import annotations
2
3 from functools import partial
4 from inspect import isawaitable
5 from typing import (
6 TYPE_CHECKING,
7 Any,
8 Awaitable,
9 Callable,
10 Generic,
11 Type,
12 TypeVar,
13 Union,
14 )
15
16 from . import events
17 from ._callback import count_parameters, invoke
18 from ._types import MessageTarget
19
20 if TYPE_CHECKING:
21 from .app import App
22 from .widget import Widget
23
24 Reactable = Union[Widget, App]
25
26 ReactiveType = TypeVar("ReactiveType")
27
28
29 class _NotSet:
30 pass
31
32
33 _NOT_SET = _NotSet()
34
35 T = TypeVar("T")
36
37
38 class Reactive(Generic[ReactiveType]):
39 """Reactive descriptor.
40
41 Args:
42 default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.
43 layout (bool, optional): Perform a layout on change. Defaults to False.
44 repaint (bool, optional): Perform a repaint on change. Defaults to True.
45 init (bool, optional): Call watchers on initialize (post mount). Defaults to False.
46 always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.
47 """
48
49 def __init__(
50 self,
51 default: ReactiveType | Callable[[], ReactiveType],
52 *,
53 layout: bool = False,
54 repaint: bool = True,
55 init: bool = False,
56 always_update: bool = False,
57 ) -> None:
58 self._default = default
59 self._layout = layout
60 self._repaint = repaint
61 self._init = init
62 self._always_update = always_update
63
64 @classmethod
65 def init(
66 cls,
67 default: ReactiveType | Callable[[], ReactiveType],
68 *,
69 layout: bool = False,
70 repaint: bool = True,
71 always_update: bool = False,
72 ) -> Reactive:
73 """A reactive variable that calls watchers and compute on initialize (post mount).
74
75 Args:
76 default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.
77 layout (bool, optional): Perform a layout on change. Defaults to False.
78 repaint (bool, optional): Perform a repaint on change. Defaults to True.
79 always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.
80
81 Returns:
82 Reactive: A Reactive instance which calls watchers or initialize.
83 """
84 return cls(
85 default,
86 layout=layout,
87 repaint=repaint,
88 init=True,
89 always_update=always_update,
90 )
91
92 @classmethod
93 def var(
94 cls,
95 default: ReactiveType | Callable[[], ReactiveType],
96 ) -> Reactive:
97 """A reactive variable that doesn't update or layout.
98
99 Args:
100 default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.
101
102 Returns:
103 Reactive: A Reactive descriptor.
104 """
105 return cls(default, layout=False, repaint=False, init=True)
106
107 @classmethod
108 def _initialize_object(cls, obj: object) -> None:
109 """Set defaults and call any watchers / computes for the first time.
110
111 Args:
112 obj (Reactable): An object with Reactive descriptors
113 """
114 if not hasattr(obj, "__reactive_initialized"):
115 startswith = str.startswith
116 for key in obj.__class__.__dict__:
117 if startswith(key, "_default_"):
118 name = key[9:]
119 # Check defaults
120 if not hasattr(obj, name):
121 # Attribute has no value yet
122 default = getattr(obj, key)
123 default_value = default() if callable(default) else default
124 # Set the default vale (calls `__set__`)
125 setattr(obj, name, default_value)
126 setattr(obj, "__reactive_initialized", True)
127
128 @classmethod
129 def _reset_object(cls, obj: object) -> None:
130 """Reset reactive structures on object (to avoid reference cycles).
131
132 Args:
133 obj (object): A reactive object.
134 """
135 getattr(obj, "__watchers", {}).clear()
136 getattr(obj, "__computes", []).clear()
137
138 def __set_name__(self, owner: Type[MessageTarget], name: str) -> None:
139
140 # Check for compute method
141 if hasattr(owner, f"compute_{name}"):
142 # Compute methods are stored in a list called `__computes`
143 try:
144 computes = getattr(owner, "__computes")
145 except AttributeError:
146 computes = []
147 setattr(owner, "__computes", computes)
148 computes.append(name)
149
150 # The name of the attribute
151 self.name = name
152 # The internal name where the attribute's value is stored
153 self.internal_name = f"_reactive_{name}"
154 default = self._default
155 setattr(owner, f"_default_{name}", default)
156
157 def __get__(self, obj: Reactable, obj_type: type[object]) -> ReactiveType:
158 _rich_traceback_omit = True
159 value: _NotSet | ReactiveType = getattr(obj, self.internal_name, _NOT_SET)
160 if isinstance(value, _NotSet):
161 # No value present, we need to set the default
162 init_name = f"_default_{self.name}"
163 default = getattr(obj, init_name)
164 default_value = default() if callable(default) else default
165 # Set and return the value
166 setattr(obj, self.internal_name, default_value)
167 if self._init:
168 self._check_watchers(obj, self.name, default_value)
169 return default_value
170 return value
171
172 def __set__(self, obj: Reactable, value: ReactiveType) -> None:
173 _rich_traceback_omit = True
174 name = self.name
175 current_value = getattr(obj, name)
176 # Check for validate function
177 validate_function = getattr(obj, f"validate_{name}", None)
178 # Check if this is the first time setting the value
179 first_set = getattr(obj, f"__first_set_{self.internal_name}", True)
180 # Call validate, but not on first set.
181 if callable(validate_function) and not first_set:
182 value = validate_function(value)
183 # If the value has changed, or this is the first time setting the value
184 if current_value != value or first_set or self._always_update:
185 # Set the first set flag to False
186 setattr(obj, f"__first_set_{self.internal_name}", False)
187 # Store the internal value
188 setattr(obj, self.internal_name, value)
189 # Check all watchers
190 self._check_watchers(obj, name, current_value)
191 # Refresh according to descriptor flags
192 if self._layout or self._repaint:
193 obj.refresh(repaint=self._repaint, layout=self._layout)
194
195 @classmethod
196 def _check_watchers(cls, obj: Reactable, name: str, old_value: Any):
197 """Check watchers, and call watch methods / computes
198
199 Args:
200 obj (Reactable): The reactable object.
201 name (str): Attribute name.
202 old_value (Any): The old (previous) value of the attribute.
203 first_set (bool, optional): True if this is the first time setting the value. Defaults to False.
204 """
205 _rich_traceback_omit = True
206 # Get the current value.
207 internal_name = f"_reactive_{name}"
208 value = getattr(obj, internal_name)
209
210 async def await_watcher(awaitable: Awaitable) -> None:
211 """Coroutine to await an awaitable returned from a watcher"""
212 _rich_traceback_omit = True
213 await awaitable
214 # Watcher may have changed the state, so run compute again
215 obj.post_message_no_wait(
216 events.Callback(sender=obj, callback=partial(Reactive._compute, obj))
217 )
218
219 def invoke_watcher(
220 watch_function: Callable, old_value: object, value: object
221 ) -> bool:
222 """Invoke a watch function.
223
224 Args:
225 watch_function (Callable): A watch function, which may be sync or async.
226 old_value (object): The old value of the attribute.
227 value (object): The new value of the attribute.
228
229 Returns:
230 bool: True if the watcher was run, or False if it was posted.
231 """
232 _rich_traceback_omit = True
233 if count_parameters(watch_function) == 2:
234 watch_result = watch_function(old_value, value)
235 else:
236 watch_result = watch_function(value)
237 if isawaitable(watch_result):
238 # Result is awaitable, so we need to await it within an async context
239 obj.post_message_no_wait(
240 events.Callback(
241 sender=obj, callback=partial(await_watcher, watch_result)
242 )
243 )
244 return False
245 else:
246 return True
247
248 # Compute is only required if a watcher runs immediately, not if they were posted.
249 require_compute = False
250 watch_function = getattr(obj, f"watch_{name}", None)
251 if callable(watch_function):
252 require_compute = require_compute or invoke_watcher(
253 watch_function, old_value, value
254 )
255
256 watchers: list[Callable] = getattr(obj, "__watchers", {}).get(name, [])
257 for watcher in watchers:
258 require_compute = require_compute or invoke_watcher(
259 watcher, old_value, value
260 )
261
262 if require_compute:
263 # Run computes
264 obj.post_message_no_wait(
265 events.Callback(sender=obj, callback=partial(Reactive._compute, obj))
266 )
267
268 @classmethod
269 async def _compute(cls, obj: Reactable) -> None:
270 """Invoke all computes.
271
272 Args:
273 obj (Reactable): Reactable object.
274 """
275 _rich_traceback_guard = True
276 computes = getattr(obj, "__computes", [])
277 for compute in computes:
278 try:
279 compute_method = getattr(obj, f"compute_{compute}")
280 except AttributeError:
281 continue
282
283 value = await invoke(compute_method)
284 setattr(obj, compute, value)
285
286
287 class reactive(Reactive[ReactiveType]):
288 """Create a reactive attribute.
289
290 Args:
291 default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.
292 layout (bool, optional): Perform a layout on change. Defaults to False.
293 repaint (bool, optional): Perform a repaint on change. Defaults to True.
294 init (bool, optional): Call watchers on initialize (post mount). Defaults to True.
295 always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.
296 """
297
298 def __init__(
299 self,
300 default: ReactiveType | Callable[[], ReactiveType],
301 *,
302 layout: bool = False,
303 repaint: bool = True,
304 init: bool = True,
305 always_update: bool = False,
306 ) -> None:
307 super().__init__(
308 default,
309 layout=layout,
310 repaint=repaint,
311 init=init,
312 always_update=always_update,
313 )
314
315
316 class var(Reactive[ReactiveType]):
317 """Create a reactive attribute (with no auto-refresh).
318
319 Args:
320 default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.
321 init (bool, optional): Call watchers on initialize (post mount). Defaults to True.
322 """
323
324 def __init__(
325 self, default: ReactiveType | Callable[[], ReactiveType], init: bool = True
326 ) -> None:
327 super().__init__(default, layout=False, repaint=False, init=init)
328
329
330 def watch(
331 obj: Reactable,
332 attribute_name: str,
333 callback: Callable[[Any], object],
334 init: bool = True,
335 ) -> None:
336 """Watch a reactive variable on an object.
337
338 Args:
339 obj (Reactable): The parent object.
340 attribute_name (str): The attribute to watch.
341 callback (Callable[[Any], object]): A callable to call when the attribute changes.
342 init (bool, optional): True to call watcher initialization. Defaults to True.
343 """
344
345 if not hasattr(obj, "__watchers"):
346 setattr(obj, "__watchers", {})
347 watchers: dict[str, list[Callable]] = getattr(obj, "__watchers")
348 watcher_list = watchers.setdefault(attribute_name, [])
349 if callback in watcher_list:
350 return
351 watcher_list.append(callback)
352 if init:
353 current_value = getattr(obj, attribute_name, None)
354 Reactive._check_watchers(obj, attribute_name, current_value)
355
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/reactive.py b/src/textual/reactive.py
--- a/src/textual/reactive.py
+++ b/src/textual/reactive.py
@@ -177,8 +177,8 @@
validate_function = getattr(obj, f"validate_{name}", None)
# Check if this is the first time setting the value
first_set = getattr(obj, f"__first_set_{self.internal_name}", True)
- # Call validate, but not on first set.
- if callable(validate_function) and not first_set:
+ # Call validate
+ if callable(validate_function):
value = validate_function(value)
# If the value has changed, or this is the first time setting the value
if current_value != value or first_set or self._always_update:
| {"golden_diff": "diff --git a/src/textual/reactive.py b/src/textual/reactive.py\n--- a/src/textual/reactive.py\n+++ b/src/textual/reactive.py\n@@ -177,8 +177,8 @@\n validate_function = getattr(obj, f\"validate_{name}\", None)\n # Check if this is the first time setting the value\n first_set = getattr(obj, f\"__first_set_{self.internal_name}\", True)\n- # Call validate, but not on first set.\n- if callable(validate_function) and not first_set:\n+ # Call validate\n+ if callable(validate_function):\n value = validate_function(value)\n # If the value has changed, or this is the first time setting the value\n if current_value != value or first_set or self._always_update:\n", "issue": "Validators and watchers should run hand-in-hand\nWhen `init=True` in a `reactive` attribute, the `watch_` method for that attribute run, but the `validate_` method does not.\nThis seems rather confusing and we think that validators and watchers should always run hand-in-hand.\nIf the watcher is called, you should be confident that the incoming `new_value` has already been validated by the `validate_` method.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom functools import partial\nfrom inspect import isawaitable\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Generic,\n Type,\n TypeVar,\n Union,\n)\n\nfrom . import events\nfrom ._callback import count_parameters, invoke\nfrom ._types import MessageTarget\n\nif TYPE_CHECKING:\n from .app import App\n from .widget import Widget\n\n Reactable = Union[Widget, App]\n\nReactiveType = TypeVar(\"ReactiveType\")\n\n\nclass _NotSet:\n pass\n\n\n_NOT_SET = _NotSet()\n\nT = TypeVar(\"T\")\n\n\nclass Reactive(Generic[ReactiveType]):\n \"\"\"Reactive descriptor.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to False.\n always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n init: bool = False,\n always_update: bool = False,\n ) -> None:\n self._default = default\n self._layout = layout\n self._repaint = repaint\n self._init = init\n self._always_update = always_update\n\n @classmethod\n def init(\n cls,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n always_update: bool = False,\n ) -> Reactive:\n \"\"\"A reactive variable that calls watchers and compute on initialize (post mount).\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n always_update (bool, optional): Call watchers even when the new value equals the old value. 
Defaults to False.\n\n Returns:\n Reactive: A Reactive instance which calls watchers or initialize.\n \"\"\"\n return cls(\n default,\n layout=layout,\n repaint=repaint,\n init=True,\n always_update=always_update,\n )\n\n @classmethod\n def var(\n cls,\n default: ReactiveType | Callable[[], ReactiveType],\n ) -> Reactive:\n \"\"\"A reactive variable that doesn't update or layout.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n\n Returns:\n Reactive: A Reactive descriptor.\n \"\"\"\n return cls(default, layout=False, repaint=False, init=True)\n\n @classmethod\n def _initialize_object(cls, obj: object) -> None:\n \"\"\"Set defaults and call any watchers / computes for the first time.\n\n Args:\n obj (Reactable): An object with Reactive descriptors\n \"\"\"\n if not hasattr(obj, \"__reactive_initialized\"):\n startswith = str.startswith\n for key in obj.__class__.__dict__:\n if startswith(key, \"_default_\"):\n name = key[9:]\n # Check defaults\n if not hasattr(obj, name):\n # Attribute has no value yet\n default = getattr(obj, key)\n default_value = default() if callable(default) else default\n # Set the default vale (calls `__set__`)\n setattr(obj, name, default_value)\n setattr(obj, \"__reactive_initialized\", True)\n\n @classmethod\n def _reset_object(cls, obj: object) -> None:\n \"\"\"Reset reactive structures on object (to avoid reference cycles).\n\n Args:\n obj (object): A reactive object.\n \"\"\"\n getattr(obj, \"__watchers\", {}).clear()\n getattr(obj, \"__computes\", []).clear()\n\n def __set_name__(self, owner: Type[MessageTarget], name: str) -> None:\n\n # Check for compute method\n if hasattr(owner, f\"compute_{name}\"):\n # Compute methods are stored in a list called `__computes`\n try:\n computes = getattr(owner, \"__computes\")\n except AttributeError:\n computes = []\n setattr(owner, \"__computes\", computes)\n computes.append(name)\n\n # The name of the attribute\n self.name = name\n # The internal name where the attribute's value is stored\n self.internal_name = f\"_reactive_{name}\"\n default = self._default\n setattr(owner, f\"_default_{name}\", default)\n\n def __get__(self, obj: Reactable, obj_type: type[object]) -> ReactiveType:\n _rich_traceback_omit = True\n value: _NotSet | ReactiveType = getattr(obj, self.internal_name, _NOT_SET)\n if isinstance(value, _NotSet):\n # No value present, we need to set the default\n init_name = f\"_default_{self.name}\"\n default = getattr(obj, init_name)\n default_value = default() if callable(default) else default\n # Set and return the value\n setattr(obj, self.internal_name, default_value)\n if self._init:\n self._check_watchers(obj, self.name, default_value)\n return default_value\n return value\n\n def __set__(self, obj: Reactable, value: ReactiveType) -> None:\n _rich_traceback_omit = True\n name = self.name\n current_value = getattr(obj, name)\n # Check for validate function\n validate_function = getattr(obj, f\"validate_{name}\", None)\n # Check if this is the first time setting the value\n first_set = getattr(obj, f\"__first_set_{self.internal_name}\", True)\n # Call validate, but not on first set.\n if callable(validate_function) and not first_set:\n value = validate_function(value)\n # If the value has changed, or this is the first time setting the value\n if current_value != value or first_set or self._always_update:\n # Set the first set flag to False\n setattr(obj, f\"__first_set_{self.internal_name}\", False)\n # Store the internal value\n 
setattr(obj, self.internal_name, value)\n # Check all watchers\n self._check_watchers(obj, name, current_value)\n # Refresh according to descriptor flags\n if self._layout or self._repaint:\n obj.refresh(repaint=self._repaint, layout=self._layout)\n\n @classmethod\n def _check_watchers(cls, obj: Reactable, name: str, old_value: Any):\n \"\"\"Check watchers, and call watch methods / computes\n\n Args:\n obj (Reactable): The reactable object.\n name (str): Attribute name.\n old_value (Any): The old (previous) value of the attribute.\n first_set (bool, optional): True if this is the first time setting the value. Defaults to False.\n \"\"\"\n _rich_traceback_omit = True\n # Get the current value.\n internal_name = f\"_reactive_{name}\"\n value = getattr(obj, internal_name)\n\n async def await_watcher(awaitable: Awaitable) -> None:\n \"\"\"Coroutine to await an awaitable returned from a watcher\"\"\"\n _rich_traceback_omit = True\n await awaitable\n # Watcher may have changed the state, so run compute again\n obj.post_message_no_wait(\n events.Callback(sender=obj, callback=partial(Reactive._compute, obj))\n )\n\n def invoke_watcher(\n watch_function: Callable, old_value: object, value: object\n ) -> bool:\n \"\"\"Invoke a watch function.\n\n Args:\n watch_function (Callable): A watch function, which may be sync or async.\n old_value (object): The old value of the attribute.\n value (object): The new value of the attribute.\n\n Returns:\n bool: True if the watcher was run, or False if it was posted.\n \"\"\"\n _rich_traceback_omit = True\n if count_parameters(watch_function) == 2:\n watch_result = watch_function(old_value, value)\n else:\n watch_result = watch_function(value)\n if isawaitable(watch_result):\n # Result is awaitable, so we need to await it within an async context\n obj.post_message_no_wait(\n events.Callback(\n sender=obj, callback=partial(await_watcher, watch_result)\n )\n )\n return False\n else:\n return True\n\n # Compute is only required if a watcher runs immediately, not if they were posted.\n require_compute = False\n watch_function = getattr(obj, f\"watch_{name}\", None)\n if callable(watch_function):\n require_compute = require_compute or invoke_watcher(\n watch_function, old_value, value\n )\n\n watchers: list[Callable] = getattr(obj, \"__watchers\", {}).get(name, [])\n for watcher in watchers:\n require_compute = require_compute or invoke_watcher(\n watcher, old_value, value\n )\n\n if require_compute:\n # Run computes\n obj.post_message_no_wait(\n events.Callback(sender=obj, callback=partial(Reactive._compute, obj))\n )\n\n @classmethod\n async def _compute(cls, obj: Reactable) -> None:\n \"\"\"Invoke all computes.\n\n Args:\n obj (Reactable): Reactable object.\n \"\"\"\n _rich_traceback_guard = True\n computes = getattr(obj, \"__computes\", [])\n for compute in computes:\n try:\n compute_method = getattr(obj, f\"compute_{compute}\")\n except AttributeError:\n continue\n\n value = await invoke(compute_method)\n setattr(obj, compute, value)\n\n\nclass reactive(Reactive[ReactiveType]):\n \"\"\"Create a reactive attribute.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to True.\n always_update (bool, optional): Call watchers even when the new value equals the old value. 
Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n init: bool = True,\n always_update: bool = False,\n ) -> None:\n super().__init__(\n default,\n layout=layout,\n repaint=repaint,\n init=init,\n always_update=always_update,\n )\n\n\nclass var(Reactive[ReactiveType]):\n \"\"\"Create a reactive attribute (with no auto-refresh).\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to True.\n \"\"\"\n\n def __init__(\n self, default: ReactiveType | Callable[[], ReactiveType], init: bool = True\n ) -> None:\n super().__init__(default, layout=False, repaint=False, init=init)\n\n\ndef watch(\n obj: Reactable,\n attribute_name: str,\n callback: Callable[[Any], object],\n init: bool = True,\n) -> None:\n \"\"\"Watch a reactive variable on an object.\n\n Args:\n obj (Reactable): The parent object.\n attribute_name (str): The attribute to watch.\n callback (Callable[[Any], object]): A callable to call when the attribute changes.\n init (bool, optional): True to call watcher initialization. Defaults to True.\n \"\"\"\n\n if not hasattr(obj, \"__watchers\"):\n setattr(obj, \"__watchers\", {})\n watchers: dict[str, list[Callable]] = getattr(obj, \"__watchers\")\n watcher_list = watchers.setdefault(attribute_name, [])\n if callback in watcher_list:\n return\n watcher_list.append(callback)\n if init:\n current_value = getattr(obj, attribute_name, None)\n Reactive._check_watchers(obj, attribute_name, current_value)\n", "path": "src/textual/reactive.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom functools import partial\nfrom inspect import isawaitable\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Generic,\n Type,\n TypeVar,\n Union,\n)\n\nfrom . import events\nfrom ._callback import count_parameters, invoke\nfrom ._types import MessageTarget\n\nif TYPE_CHECKING:\n from .app import App\n from .widget import Widget\n\n Reactable = Union[Widget, App]\n\nReactiveType = TypeVar(\"ReactiveType\")\n\n\nclass _NotSet:\n pass\n\n\n_NOT_SET = _NotSet()\n\nT = TypeVar(\"T\")\n\n\nclass Reactive(Generic[ReactiveType]):\n \"\"\"Reactive descriptor.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to False.\n always_update (bool, optional): Call watchers even when the new value equals the old value. 
Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n init: bool = False,\n always_update: bool = False,\n ) -> None:\n self._default = default\n self._layout = layout\n self._repaint = repaint\n self._init = init\n self._always_update = always_update\n\n @classmethod\n def init(\n cls,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n always_update: bool = False,\n ) -> Reactive:\n \"\"\"A reactive variable that calls watchers and compute on initialize (post mount).\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.\n\n Returns:\n Reactive: A Reactive instance which calls watchers or initialize.\n \"\"\"\n return cls(\n default,\n layout=layout,\n repaint=repaint,\n init=True,\n always_update=always_update,\n )\n\n @classmethod\n def var(\n cls,\n default: ReactiveType | Callable[[], ReactiveType],\n ) -> Reactive:\n \"\"\"A reactive variable that doesn't update or layout.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n\n Returns:\n Reactive: A Reactive descriptor.\n \"\"\"\n return cls(default, layout=False, repaint=False, init=True)\n\n @classmethod\n def _initialize_object(cls, obj: object) -> None:\n \"\"\"Set defaults and call any watchers / computes for the first time.\n\n Args:\n obj (Reactable): An object with Reactive descriptors\n \"\"\"\n if not hasattr(obj, \"__reactive_initialized\"):\n startswith = str.startswith\n for key in obj.__class__.__dict__:\n if startswith(key, \"_default_\"):\n name = key[9:]\n # Check defaults\n if not hasattr(obj, name):\n # Attribute has no value yet\n default = getattr(obj, key)\n default_value = default() if callable(default) else default\n # Set the default vale (calls `__set__`)\n setattr(obj, name, default_value)\n setattr(obj, \"__reactive_initialized\", True)\n\n @classmethod\n def _reset_object(cls, obj: object) -> None:\n \"\"\"Reset reactive structures on object (to avoid reference cycles).\n\n Args:\n obj (object): A reactive object.\n \"\"\"\n getattr(obj, \"__watchers\", {}).clear()\n getattr(obj, \"__computes\", []).clear()\n\n def __set_name__(self, owner: Type[MessageTarget], name: str) -> None:\n\n # Check for compute method\n if hasattr(owner, f\"compute_{name}\"):\n # Compute methods are stored in a list called `__computes`\n try:\n computes = getattr(owner, \"__computes\")\n except AttributeError:\n computes = []\n setattr(owner, \"__computes\", computes)\n computes.append(name)\n\n # The name of the attribute\n self.name = name\n # The internal name where the attribute's value is stored\n self.internal_name = f\"_reactive_{name}\"\n default = self._default\n setattr(owner, f\"_default_{name}\", default)\n\n def __get__(self, obj: Reactable, obj_type: type[object]) -> ReactiveType:\n _rich_traceback_omit = True\n value: _NotSet | ReactiveType = getattr(obj, self.internal_name, _NOT_SET)\n if isinstance(value, _NotSet):\n # No value present, we need to set the default\n init_name = f\"_default_{self.name}\"\n default = getattr(obj, init_name)\n default_value = default() 
if callable(default) else default\n # Set and return the value\n setattr(obj, self.internal_name, default_value)\n if self._init:\n self._check_watchers(obj, self.name, default_value)\n return default_value\n return value\n\n def __set__(self, obj: Reactable, value: ReactiveType) -> None:\n _rich_traceback_omit = True\n name = self.name\n current_value = getattr(obj, name)\n # Check for validate function\n validate_function = getattr(obj, f\"validate_{name}\", None)\n # Check if this is the first time setting the value\n first_set = getattr(obj, f\"__first_set_{self.internal_name}\", True)\n # Call validate\n if callable(validate_function):\n value = validate_function(value)\n # If the value has changed, or this is the first time setting the value\n if current_value != value or first_set or self._always_update:\n # Set the first set flag to False\n setattr(obj, f\"__first_set_{self.internal_name}\", False)\n # Store the internal value\n setattr(obj, self.internal_name, value)\n # Check all watchers\n self._check_watchers(obj, name, current_value)\n # Refresh according to descriptor flags\n if self._layout or self._repaint:\n obj.refresh(repaint=self._repaint, layout=self._layout)\n\n @classmethod\n def _check_watchers(cls, obj: Reactable, name: str, old_value: Any):\n \"\"\"Check watchers, and call watch methods / computes\n\n Args:\n obj (Reactable): The reactable object.\n name (str): Attribute name.\n old_value (Any): The old (previous) value of the attribute.\n first_set (bool, optional): True if this is the first time setting the value. Defaults to False.\n \"\"\"\n _rich_traceback_omit = True\n # Get the current value.\n internal_name = f\"_reactive_{name}\"\n value = getattr(obj, internal_name)\n\n async def await_watcher(awaitable: Awaitable) -> None:\n \"\"\"Coroutine to await an awaitable returned from a watcher\"\"\"\n _rich_traceback_omit = True\n await awaitable\n # Watcher may have changed the state, so run compute again\n obj.post_message_no_wait(\n events.Callback(sender=obj, callback=partial(Reactive._compute, obj))\n )\n\n def invoke_watcher(\n watch_function: Callable, old_value: object, value: object\n ) -> bool:\n \"\"\"Invoke a watch function.\n\n Args:\n watch_function (Callable): A watch function, which may be sync or async.\n old_value (object): The old value of the attribute.\n value (object): The new value of the attribute.\n\n Returns:\n bool: True if the watcher was run, or False if it was posted.\n \"\"\"\n _rich_traceback_omit = True\n if count_parameters(watch_function) == 2:\n watch_result = watch_function(old_value, value)\n else:\n watch_result = watch_function(value)\n if isawaitable(watch_result):\n # Result is awaitable, so we need to await it within an async context\n obj.post_message_no_wait(\n events.Callback(\n sender=obj, callback=partial(await_watcher, watch_result)\n )\n )\n return False\n else:\n return True\n\n # Compute is only required if a watcher runs immediately, not if they were posted.\n require_compute = False\n watch_function = getattr(obj, f\"watch_{name}\", None)\n if callable(watch_function):\n require_compute = require_compute or invoke_watcher(\n watch_function, old_value, value\n )\n\n watchers: list[Callable] = getattr(obj, \"__watchers\", {}).get(name, [])\n for watcher in watchers:\n require_compute = require_compute or invoke_watcher(\n watcher, old_value, value\n )\n\n if require_compute:\n # Run computes\n obj.post_message_no_wait(\n events.Callback(sender=obj, callback=partial(Reactive._compute, obj))\n )\n\n 
@classmethod\n async def _compute(cls, obj: Reactable) -> None:\n \"\"\"Invoke all computes.\n\n Args:\n obj (Reactable): Reactable object.\n \"\"\"\n _rich_traceback_guard = True\n computes = getattr(obj, \"__computes\", [])\n for compute in computes:\n try:\n compute_method = getattr(obj, f\"compute_{compute}\")\n except AttributeError:\n continue\n\n value = await invoke(compute_method)\n setattr(obj, compute, value)\n\n\nclass reactive(Reactive[ReactiveType]):\n \"\"\"Create a reactive attribute.\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n layout (bool, optional): Perform a layout on change. Defaults to False.\n repaint (bool, optional): Perform a repaint on change. Defaults to True.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to True.\n always_update (bool, optional): Call watchers even when the new value equals the old value. Defaults to False.\n \"\"\"\n\n def __init__(\n self,\n default: ReactiveType | Callable[[], ReactiveType],\n *,\n layout: bool = False,\n repaint: bool = True,\n init: bool = True,\n always_update: bool = False,\n ) -> None:\n super().__init__(\n default,\n layout=layout,\n repaint=repaint,\n init=init,\n always_update=always_update,\n )\n\n\nclass var(Reactive[ReactiveType]):\n \"\"\"Create a reactive attribute (with no auto-refresh).\n\n Args:\n default (ReactiveType | Callable[[], ReactiveType]): A default value or callable that returns a default.\n init (bool, optional): Call watchers on initialize (post mount). Defaults to True.\n \"\"\"\n\n def __init__(\n self, default: ReactiveType | Callable[[], ReactiveType], init: bool = True\n ) -> None:\n super().__init__(default, layout=False, repaint=False, init=init)\n\n\ndef watch(\n obj: Reactable,\n attribute_name: str,\n callback: Callable[[Any], object],\n init: bool = True,\n) -> None:\n \"\"\"Watch a reactive variable on an object.\n\n Args:\n obj (Reactable): The parent object.\n attribute_name (str): The attribute to watch.\n callback (Callable[[Any], object]): A callable to call when the attribute changes.\n init (bool, optional): True to call watcher initialization. Defaults to True.\n \"\"\"\n\n if not hasattr(obj, \"__watchers\"):\n setattr(obj, \"__watchers\", {})\n watchers: dict[str, list[Callable]] = getattr(obj, \"__watchers\")\n watcher_list = watchers.setdefault(attribute_name, [])\n if callback in watcher_list:\n return\n watcher_list.append(callback)\n if init:\n current_value = getattr(obj, attribute_name, None)\n Reactive._check_watchers(obj, attribute_name, current_value)\n", "path": "src/textual/reactive.py"}]} | 4,020 | 174 |
gh_patches_debug_38574 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-1305 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CL-SIC parser returns wrong datetime
CL-SIC parser returns datetime of 0018 instead of 2018, as illustrated below:
`datetime: 0018-02-18 (06:43:00.000) CET`
`production: {"unknown":0,"coal":2206.6,"hydro":2416.39660,"wind":309.314,....}`
`updatedAt: 2018-02-20 (23:40:06.979) CET`
@systemcatch any idea?
--- END ISSUE ---
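As illustration (not taken from the report): a year-sanity check on the parsed timestamp is enough to catch a two-digit-year result such as 0018 before it reaches the feed. The helper below is a hypothetical, standard-library-only sketch of the kind of guard the repository's quality validators apply; the actual validators in `parsers/lib/quality.py`, quoted below, use `arrow`.

```python
import datetime

def assert_reasonable_time(dt, zone_key):
    """Hypothetical guard: reject timestamps that are clearly wrong."""
    if dt.year < 2000:
        raise ValueError("Data from %s can't be before year 2000, it was %s"
                         % (zone_key, dt))
    if dt > datetime.datetime.now(dt.tzinfo):
        raise ValueError("Data from %s can't be in the future: %s" % (zone_key, dt))

# A datetime parsed with a two-digit year ends up in year 0018 and trips the check:
bad = datetime.datetime(18, 2, 18, 6, 43)
try:
    assert_reasonable_time(bad, "CL-SIC")
except ValueError as exc:
    print(exc)   # Data from CL-SIC can't be before year 2000, it was 0018-02-18 06:43:00
```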
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/lib/quality.py`
Content:
```
1 import datetime
2 import warnings
3
4 import arrow
5
6
7 class ValidationError(ValueError):
8 pass
9
10
11 def validate_consumption(obj, zone_key):
12 # Data quality check
13 if obj['consumption'] is not None and obj['consumption'] < 0:
14 raise ValidationError('%s: consumption has negative value '
15 '%s' % (zone_key, obj['consumption']))
16
17
18 def validate_exchange(item, k):
19 if item.get('sortedZoneKeys', None) != k:
20 raise ValidationError("Sorted country codes %s and %s don't "
21 "match" % (item.get('sortedZoneKeys', None), k))
22 if 'datetime' not in item:
23 raise ValidationError('datetime was not returned for %s' % k)
24 if type(item['datetime']) != datetime.datetime:
25 raise ValidationError('datetime %s is not valid for %s' %
26 (item['datetime'], k))
27 data_time = arrow.get(item['datetime'])
28 if data_time > arrow.now():
29 raise ValidationError("Data from %s can't be in the future, data was "
30 "%s, now is %s" % (k, data_time, arrow.now()))
31 if data_time.year < 2000:
32 raise ValidationError("Data from %s can't be before year 2000, it was "
33 "%s" % (k, data_time))
34
35
36 def validate_production(obj, zone_key):
37 if 'datetime' not in obj:
38 raise ValidationError(
39 'datetime was not returned for %s' % zone_key)
40 if 'countryCode' in obj:
41 warnings.warn('object has field `countryCode`. It should have '
42 '`zoneKey` instead. In {}'.format(obj))
43 if 'zoneKey' not in obj and 'countryCode' not in obj:
44 raise ValidationError('zoneKey was not returned for %s' % zone_key)
45 if not isinstance(obj['datetime'], datetime.datetime):
46 raise ValidationError('datetime %s is not valid for %s' %
47 (obj['datetime'], zone_key))
48 if (obj.get('zoneKey', None) or obj.get('countryCode', None)) != zone_key:
49 raise ValidationError("Zone keys %s and %s don't match in %s" %
50 (obj.get('zoneKey', None), zone_key, obj))
51 data_time = arrow.get(obj['datetime'])
52 arrow_now = arrow.utcnow()
53 if data_time > arrow_now:
54 raise ValidationError(
55 "Data from %s can't be in the future, data was %s, now is "
56 "%s" % (zone_key, data_time, arrow_now))
57
58 if ((obj.get('production', {}).get('unknown', None) is None and
59 obj.get('production', {}).get('coal', None) is None and
60 obj.get('production', {}).get('oil', None) is None and
61 obj.get('production', {}).get('gas', None) is None and zone_key
62 not in ['CH', 'NO', 'AUS-TAS', 'DK-BHM', 'US-NEISO'])):
63 raise ValidationError(
64 "Coal or oil or unknown production value is required for"
65 " %s" % zone_key)
66 for k, v in obj['production'].items():
67 if v is None:
68 continue
69 if v < 0:
70 raise ValidationError('%s: key %s has negative value %s' %
71 (zone_key, k, v))
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsers/lib/quality.py b/parsers/lib/quality.py
--- a/parsers/lib/quality.py
+++ b/parsers/lib/quality.py
@@ -8,11 +8,26 @@
pass
+def validate_reasonable_time(item, k):
+ data_time = arrow.get(item['datetime'])
+ if data_time.year < 2000:
+ raise ValidationError("Data from %s can't be before year 2000, it was "
+ "%s" % (k, data_time))
+
+ arrow_now = arrow.utcnow()
+ if data_time > arrow_now:
+ raise ValidationError(
+ "Data from %s can't be in the future, data was %s, now is "
+ "%s" % (k, data_time, arrow_now))
+
+
+
def validate_consumption(obj, zone_key):
# Data quality check
if obj['consumption'] is not None and obj['consumption'] < 0:
raise ValidationError('%s: consumption has negative value '
'%s' % (zone_key, obj['consumption']))
+ validate_reasonable_time(obj, zone_key)
def validate_exchange(item, k):
@@ -24,13 +39,7 @@
if type(item['datetime']) != datetime.datetime:
raise ValidationError('datetime %s is not valid for %s' %
(item['datetime'], k))
- data_time = arrow.get(item['datetime'])
- if data_time > arrow.now():
- raise ValidationError("Data from %s can't be in the future, data was "
- "%s, now is %s" % (k, data_time, arrow.now()))
- if data_time.year < 2000:
- raise ValidationError("Data from %s can't be before year 2000, it was "
- "%s" % (k, data_time))
+ validate_reasonable_time(item, k)
def validate_production(obj, zone_key):
@@ -48,12 +57,6 @@
if (obj.get('zoneKey', None) or obj.get('countryCode', None)) != zone_key:
raise ValidationError("Zone keys %s and %s don't match in %s" %
(obj.get('zoneKey', None), zone_key, obj))
- data_time = arrow.get(obj['datetime'])
- arrow_now = arrow.utcnow()
- if data_time > arrow_now:
- raise ValidationError(
- "Data from %s can't be in the future, data was %s, now is "
- "%s" % (zone_key, data_time, arrow_now))
if ((obj.get('production', {}).get('unknown', None) is None and
obj.get('production', {}).get('coal', None) is None and
@@ -69,3 +72,4 @@
if v < 0:
raise ValidationError('%s: key %s has negative value %s' %
(zone_key, k, v))
+ validate_reasonable_time(obj, zone_key)
| {"golden_diff": "diff --git a/parsers/lib/quality.py b/parsers/lib/quality.py\n--- a/parsers/lib/quality.py\n+++ b/parsers/lib/quality.py\n@@ -8,11 +8,26 @@\n pass\n \n \n+def validate_reasonable_time(item, k):\n+ data_time = arrow.get(item['datetime'])\n+ if data_time.year < 2000:\n+ raise ValidationError(\"Data from %s can't be before year 2000, it was \"\n+ \"%s\" % (k, data_time))\n+\n+ arrow_now = arrow.utcnow()\n+ if data_time > arrow_now:\n+ raise ValidationError(\n+ \"Data from %s can't be in the future, data was %s, now is \"\n+ \"%s\" % (k, data_time, arrow_now))\n+\n+\n+\n def validate_consumption(obj, zone_key):\n # Data quality check\n if obj['consumption'] is not None and obj['consumption'] < 0:\n raise ValidationError('%s: consumption has negative value '\n '%s' % (zone_key, obj['consumption']))\n+ validate_reasonable_time(obj, zone_key)\n \n \n def validate_exchange(item, k):\n@@ -24,13 +39,7 @@\n if type(item['datetime']) != datetime.datetime:\n raise ValidationError('datetime %s is not valid for %s' %\n (item['datetime'], k))\n- data_time = arrow.get(item['datetime'])\n- if data_time > arrow.now():\n- raise ValidationError(\"Data from %s can't be in the future, data was \"\n- \"%s, now is %s\" % (k, data_time, arrow.now()))\n- if data_time.year < 2000:\n- raise ValidationError(\"Data from %s can't be before year 2000, it was \"\n- \"%s\" % (k, data_time))\n+ validate_reasonable_time(item, k)\n \n \n def validate_production(obj, zone_key):\n@@ -48,12 +57,6 @@\n if (obj.get('zoneKey', None) or obj.get('countryCode', None)) != zone_key:\n raise ValidationError(\"Zone keys %s and %s don't match in %s\" %\n (obj.get('zoneKey', None), zone_key, obj))\n- data_time = arrow.get(obj['datetime'])\n- arrow_now = arrow.utcnow()\n- if data_time > arrow_now:\n- raise ValidationError(\n- \"Data from %s can't be in the future, data was %s, now is \"\n- \"%s\" % (zone_key, data_time, arrow_now))\n \n if ((obj.get('production', {}).get('unknown', None) is None and\n obj.get('production', {}).get('coal', None) is None and\n@@ -69,3 +72,4 @@\n if v < 0:\n raise ValidationError('%s: key %s has negative value %s' %\n (zone_key, k, v))\n+ validate_reasonable_time(obj, zone_key)\n", "issue": "CL-SIC parser returns wrong datetime\nCL-SIC parser returns datetime of 0018 instead of 2018, as illustrated below:\r\n\r\n`datetime: 0018-02-18 (06:43:00.000) CET`\r\n`production: {\"unknown\":0,\"coal\":2206.6,\"hydro\":2416.39660,\"wind\":309.314,....}`\r\n`updatedAt: 2018-02-20 (23:40:06.979) CET`\r\n\r\n@systemcatch any idea?\r\n\n", "before_files": [{"content": "import datetime\nimport warnings\n\nimport arrow\n\n\nclass ValidationError(ValueError):\n pass\n\n\ndef validate_consumption(obj, zone_key):\n # Data quality check\n if obj['consumption'] is not None and obj['consumption'] < 0:\n raise ValidationError('%s: consumption has negative value '\n '%s' % (zone_key, obj['consumption']))\n\n\ndef validate_exchange(item, k):\n if item.get('sortedZoneKeys', None) != k:\n raise ValidationError(\"Sorted country codes %s and %s don't \"\n \"match\" % (item.get('sortedZoneKeys', None), k))\n if 'datetime' not in item:\n raise ValidationError('datetime was not returned for %s' % k)\n if type(item['datetime']) != datetime.datetime:\n raise ValidationError('datetime %s is not valid for %s' %\n (item['datetime'], k))\n data_time = arrow.get(item['datetime'])\n if data_time > arrow.now():\n raise ValidationError(\"Data from %s can't be in the future, data was \"\n \"%s, now is %s\" % (k, data_time, 
arrow.now()))\n if data_time.year < 2000:\n raise ValidationError(\"Data from %s can't be before year 2000, it was \"\n \"%s\" % (k, data_time))\n\n\ndef validate_production(obj, zone_key):\n if 'datetime' not in obj:\n raise ValidationError(\n 'datetime was not returned for %s' % zone_key)\n if 'countryCode' in obj:\n warnings.warn('object has field `countryCode`. It should have '\n '`zoneKey` instead. In {}'.format(obj))\n if 'zoneKey' not in obj and 'countryCode' not in obj:\n raise ValidationError('zoneKey was not returned for %s' % zone_key)\n if not isinstance(obj['datetime'], datetime.datetime):\n raise ValidationError('datetime %s is not valid for %s' %\n (obj['datetime'], zone_key))\n if (obj.get('zoneKey', None) or obj.get('countryCode', None)) != zone_key:\n raise ValidationError(\"Zone keys %s and %s don't match in %s\" %\n (obj.get('zoneKey', None), zone_key, obj))\n data_time = arrow.get(obj['datetime'])\n arrow_now = arrow.utcnow()\n if data_time > arrow_now:\n raise ValidationError(\n \"Data from %s can't be in the future, data was %s, now is \"\n \"%s\" % (zone_key, data_time, arrow_now))\n\n if ((obj.get('production', {}).get('unknown', None) is None and\n obj.get('production', {}).get('coal', None) is None and\n obj.get('production', {}).get('oil', None) is None and\n obj.get('production', {}).get('gas', None) is None and zone_key\n not in ['CH', 'NO', 'AUS-TAS', 'DK-BHM', 'US-NEISO'])):\n raise ValidationError(\n \"Coal or oil or unknown production value is required for\"\n \" %s\" % zone_key)\n for k, v in obj['production'].items():\n if v is None:\n continue\n if v < 0:\n raise ValidationError('%s: key %s has negative value %s' %\n (zone_key, k, v))\n", "path": "parsers/lib/quality.py"}], "after_files": [{"content": "import datetime\nimport warnings\n\nimport arrow\n\n\nclass ValidationError(ValueError):\n pass\n\n\ndef validate_reasonable_time(item, k):\n data_time = arrow.get(item['datetime'])\n if data_time.year < 2000:\n raise ValidationError(\"Data from %s can't be before year 2000, it was \"\n \"%s\" % (k, data_time))\n\n arrow_now = arrow.utcnow()\n if data_time > arrow_now:\n raise ValidationError(\n \"Data from %s can't be in the future, data was %s, now is \"\n \"%s\" % (k, data_time, arrow_now))\n\n\n\ndef validate_consumption(obj, zone_key):\n # Data quality check\n if obj['consumption'] is not None and obj['consumption'] < 0:\n raise ValidationError('%s: consumption has negative value '\n '%s' % (zone_key, obj['consumption']))\n validate_reasonable_time(obj, zone_key)\n\n\ndef validate_exchange(item, k):\n if item.get('sortedZoneKeys', None) != k:\n raise ValidationError(\"Sorted country codes %s and %s don't \"\n \"match\" % (item.get('sortedZoneKeys', None), k))\n if 'datetime' not in item:\n raise ValidationError('datetime was not returned for %s' % k)\n if type(item['datetime']) != datetime.datetime:\n raise ValidationError('datetime %s is not valid for %s' %\n (item['datetime'], k))\n validate_reasonable_time(item, k)\n\n\ndef validate_production(obj, zone_key):\n if 'datetime' not in obj:\n raise ValidationError(\n 'datetime was not returned for %s' % zone_key)\n if 'countryCode' in obj:\n warnings.warn('object has field `countryCode`. It should have '\n '`zoneKey` instead. 
In {}'.format(obj))\n if 'zoneKey' not in obj and 'countryCode' not in obj:\n raise ValidationError('zoneKey was not returned for %s' % zone_key)\n if not isinstance(obj['datetime'], datetime.datetime):\n raise ValidationError('datetime %s is not valid for %s' %\n (obj['datetime'], zone_key))\n if (obj.get('zoneKey', None) or obj.get('countryCode', None)) != zone_key:\n raise ValidationError(\"Zone keys %s and %s don't match in %s\" %\n (obj.get('zoneKey', None), zone_key, obj))\n\n if ((obj.get('production', {}).get('unknown', None) is None and\n obj.get('production', {}).get('coal', None) is None and\n obj.get('production', {}).get('oil', None) is None and\n obj.get('production', {}).get('gas', None) is None and zone_key\n not in ['CH', 'NO', 'AUS-TAS', 'DK-BHM', 'US-NEISO'])):\n raise ValidationError(\n \"Coal or oil or unknown production value is required for\"\n \" %s\" % zone_key)\n for k, v in obj['production'].items():\n if v is None:\n continue\n if v < 0:\n raise ValidationError('%s: key %s has negative value %s' %\n (zone_key, k, v))\n validate_reasonable_time(obj, zone_key)\n", "path": "parsers/lib/quality.py"}]} | 1,277 | 670 |
gh_patches_debug_1852 | rasdani/github-patches | git_diff | jazzband__pip-tools-488 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Providing a source file which does not have an extension writes to a .txt file in current dir
If you happen to use an extensionless filename as a source requirements file (i.e., no `.in` extension), pip-compile will deduce the wrong output filename and generate a ".txt" file (a relative path, and therefore written to the current working directory).
##### Environment Versions
1. OS Type: should be cross-platform, but observed on Windows
1. Python version: `$ python -V` 2.7.8
1. pip version: `$ pip --version` 9.0.1
1. pip-tools version: `$ pip-compile --version` 1.8.1rc2
##### Steps to replicate
1. echo "request" > no_extension
2. pip-compile no_extension
##### Expected result
We should error out because it is difficult to deduce a new name if there is no extension to remove.
##### Actual result
A ".txt" file is generated in the current directory.
--- END ISSUE ---
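For illustration (the helper name below is hypothetical, not pip-tools API): the wrong output name comes from deriving it with `rpartition('.')`, which yields an empty base for an extensionless path, so the destination degenerates to ".txt". A sketch of the expected behaviour (error out instead of guessing) could look like this:

```python
import os

def derive_output_file(src_file):
    """Hypothetical helper: refuse to guess when there is no extension to strip."""
    base_name, ext = os.path.splitext(src_file)
    if not ext:
        raise ValueError("cannot derive an output file name from %r; "
                         "please pass --output-file explicitly" % src_file)
    return base_name + ".txt"

# The current derivation silently degenerates for extensionless input:
print("no_extension".rpartition('.'))         # ('', '', 'no_extension') -> dst becomes '.txt'
print(derive_output_file("requirements.in"))  # requirements.txt
try:
    derive_output_file("no_extension")
except ValueError as exc:
    print(exc)                                # cannot derive an output file name ...
```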
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/scripts/compile.py`
Content:
```
1 # coding: utf-8
2 from __future__ import (absolute_import, division, print_function,
3 unicode_literals)
4
5 import optparse
6 import os
7 import sys
8 import tempfile
9
10 import pip
11 from pip.req import InstallRequirement, parse_requirements
12
13 from .. import click
14 from ..exceptions import PipToolsError
15 from ..logging import log
16 from ..repositories import LocalRequirementsRepository, PyPIRepository
17 from ..resolver import Resolver
18 from ..utils import (assert_compatible_pip_version, is_pinned_requirement,
19 key_from_req, dedup)
20 from ..writer import OutputWriter
21
22 # Make sure we're using a compatible version of pip
23 assert_compatible_pip_version()
24
25 DEFAULT_REQUIREMENTS_FILE = 'requirements.in'
26
27
28 class PipCommand(pip.basecommand.Command):
29 name = 'PipCommand'
30
31
32 @click.command()
33 @click.version_option()
34 @click.option('-v', '--verbose', is_flag=True, help="Show more output")
35 @click.option('-n', '--dry-run', is_flag=True, help="Only show what would happen, don't change anything")
36 @click.option('-p', '--pre', is_flag=True, default=None, help="Allow resolving to prereleases (default is not)")
37 @click.option('-r', '--rebuild', is_flag=True, help="Clear any caches upfront, rebuild from scratch")
38 @click.option('-f', '--find-links', multiple=True, help="Look for archives in this directory or on this HTML page", envvar='PIP_FIND_LINKS') # noqa
39 @click.option('-i', '--index-url', help="Change index URL (defaults to PyPI)", envvar='PIP_INDEX_URL')
40 @click.option('--extra-index-url', multiple=True, help="Add additional index URL to search", envvar='PIP_EXTRA_INDEX_URL') # noqa
41 @click.option('--client-cert', help="Path to SSL client certificate, a single file containing the private key and the certificate in PEM format.") # noqa
42 @click.option('--trusted-host', multiple=True, envvar='PIP_TRUSTED_HOST',
43 help="Mark this host as trusted, even though it does not have "
44 "valid or any HTTPS.")
45 @click.option('--header/--no-header', is_flag=True, default=True,
46 help="Add header to generated file")
47 @click.option('--index/--no-index', is_flag=True, default=True,
48 help="Add index URL to generated file")
49 @click.option('--emit-trusted-host/--no-emit-trusted-host', is_flag=True,
50 default=True, help="Add trusted host option to generated file")
51 @click.option('--annotate/--no-annotate', is_flag=True, default=True,
52 help="Annotate results, indicating where dependencies come from")
53 @click.option('-U', '--upgrade', is_flag=True, default=False,
54 help='Try to upgrade all dependencies to their latest versions')
55 @click.option('-P', '--upgrade-package', 'upgrade_packages', nargs=1, multiple=True,
56 help="Specify particular packages to upgrade.")
57 @click.option('-o', '--output-file', nargs=1, type=str, default=None,
58 help=('Output file name. Required if more than one input file is given. '
59 'Will be derived from input file otherwise.'))
60 @click.option('--allow-unsafe', is_flag=True, default=False,
61 help="Pin packages considered unsafe: pip, setuptools & distribute")
62 @click.option('--generate-hashes', is_flag=True, default=False,
63 help="Generate pip 8 style hashes in the resulting requirements file.")
64 @click.option('--max-rounds', default=10,
65 help="Maximum number of rounds before resolving the requirements aborts.")
66 @click.argument('src_files', nargs=-1, type=click.Path(exists=True, allow_dash=True))
67 def cli(verbose, dry_run, pre, rebuild, find_links, index_url, extra_index_url,
68 client_cert, trusted_host, header, index, emit_trusted_host, annotate,
69 upgrade, upgrade_packages, output_file, allow_unsafe, generate_hashes,
70 src_files, max_rounds):
71 """Compiles requirements.txt from requirements.in specs."""
72 log.verbose = verbose
73
74 if len(src_files) == 0:
75 if os.path.exists(DEFAULT_REQUIREMENTS_FILE):
76 src_files = (DEFAULT_REQUIREMENTS_FILE,)
77 elif os.path.exists('setup.py'):
78 src_files = ('setup.py',)
79 if not output_file:
80 output_file = 'requirements.txt'
81 else:
82 raise click.BadParameter(("If you do not specify an input file, "
83 "the default is {} or setup.py").format(DEFAULT_REQUIREMENTS_FILE))
84
85 if len(src_files) == 1 and src_files[0] == '-':
86 if not output_file:
87 raise click.BadParameter('--output-file is required if input is from stdin')
88
89 if len(src_files) > 1 and not output_file:
90 raise click.BadParameter('--output-file is required if two or more input files are given.')
91
92 if output_file:
93 dst_file = output_file
94 else:
95 base_name, _, _ = src_files[0].rpartition('.')
96 dst_file = base_name + '.txt'
97
98 if upgrade and upgrade_packages:
99 raise click.BadParameter('Only one of --upgrade or --upgrade-package can be provided as an argument.')
100
101 ###
102 # Setup
103 ###
104
105 pip_command = get_pip_command()
106
107 pip_args = []
108 if find_links:
109 for link in find_links:
110 pip_args.extend(['-f', link])
111 if index_url:
112 pip_args.extend(['-i', index_url])
113 if extra_index_url:
114 for extra_index in extra_index_url:
115 pip_args.extend(['--extra-index-url', extra_index])
116 if client_cert:
117 pip_args.extend(['--client-cert', client_cert])
118 if pre:
119 pip_args.extend(['--pre'])
120 if trusted_host:
121 for host in trusted_host:
122 pip_args.extend(['--trusted-host', host])
123
124 pip_options, _ = pip_command.parse_args(pip_args)
125
126 session = pip_command._build_session(pip_options)
127 repository = PyPIRepository(pip_options, session)
128
129 # Pre-parse the inline package upgrade specs: they should take precedence
130 # over the stuff in the requirements files
131 upgrade_packages = [InstallRequirement.from_line(pkg)
132 for pkg in upgrade_packages]
133
134 # Proxy with a LocalRequirementsRepository if --upgrade is not specified
135 # (= default invocation)
136 if not (upgrade or upgrade_packages) and os.path.exists(dst_file):
137 ireqs = parse_requirements(dst_file, finder=repository.finder, session=repository.session, options=pip_options)
138 existing_pins = {key_from_req(ireq.req): ireq for ireq in ireqs if is_pinned_requirement(ireq)}
139 repository = LocalRequirementsRepository(existing_pins, repository)
140
141 log.debug('Using indexes:')
142 # remove duplicate index urls before processing
143 repository.finder.index_urls = list(dedup(repository.finder.index_urls))
144 for index_url in repository.finder.index_urls:
145 log.debug(' {}'.format(index_url))
146
147 if repository.finder.find_links:
148 log.debug('')
149 log.debug('Configuration:')
150 for find_link in repository.finder.find_links:
151 log.debug(' -f {}'.format(find_link))
152
153 ###
154 # Parsing/collecting initial requirements
155 ###
156
157 constraints = []
158 for src_file in src_files:
159 is_setup_file = os.path.basename(src_file) == 'setup.py'
160 if is_setup_file or src_file == '-':
161 # pip requires filenames and not files. Since we want to support
162 # piping from stdin, we need to briefly save the input from stdin
163 # to a temporary file and have pip read that. also used for
164 # reading requirements from install_requires in setup.py.
165 tmpfile = tempfile.NamedTemporaryFile(mode='wt', delete=False)
166 if is_setup_file:
167 from distutils.core import run_setup
168 dist = run_setup(src_file)
169 tmpfile.write('\n'.join(dist.install_requires))
170 else:
171 tmpfile.write(sys.stdin.read())
172 tmpfile.flush()
173 constraints.extend(parse_requirements(
174 tmpfile.name, finder=repository.finder, session=repository.session, options=pip_options))
175 else:
176 constraints.extend(parse_requirements(
177 src_file, finder=repository.finder, session=repository.session, options=pip_options))
178
179 # Check the given base set of constraints first
180 Resolver.check_constraints(constraints)
181
182 try:
183 resolver = Resolver(constraints, repository, prereleases=pre,
184 clear_caches=rebuild, allow_unsafe=allow_unsafe)
185 results = resolver.resolve(max_rounds=max_rounds)
186 if generate_hashes:
187 hashes = resolver.resolve_hashes(results)
188 else:
189 hashes = None
190 except PipToolsError as e:
191 log.error(str(e))
192 sys.exit(2)
193
194 log.debug('')
195
196 ##
197 # Output
198 ##
199
200 # Compute reverse dependency annotations statically, from the
201 # dependency cache that the resolver has populated by now.
202 #
203 # TODO (1a): reverse deps for any editable package are lost
204 # what SHOULD happen is that they are cached in memory, just
205 # not persisted to disk!
206 #
207 # TODO (1b): perhaps it's easiest if the dependency cache has an API
208 # that could take InstallRequirements directly, like:
209 #
210 # cache.set(ireq, ...)
211 #
212 # then, when ireq is editable, it would store in
213 #
214 # editables[egg_name][link_without_fragment] = deps
215 # editables['pip-tools']['git+...ols.git@future'] = {'click>=3.0', 'six'}
216 #
217 # otherwise:
218 #
219 # self[as_name_version_tuple(ireq)] = {'click>=3.0', 'six'}
220 #
221 reverse_dependencies = None
222 if annotate:
223 reverse_dependencies = resolver.reverse_dependencies(results)
224
225 writer = OutputWriter(src_files, dst_file, dry_run=dry_run,
226 emit_header=header, emit_index=index,
227 emit_trusted_host=emit_trusted_host,
228 annotate=annotate,
229 generate_hashes=generate_hashes,
230 default_index_url=repository.DEFAULT_INDEX_URL,
231 index_urls=repository.finder.index_urls,
232 trusted_hosts=pip_options.trusted_hosts,
233 format_control=repository.finder.format_control)
234 writer.write(results=results,
235 reverse_dependencies=reverse_dependencies,
236 primary_packages={key_from_req(ireq.req) for ireq in constraints if not ireq.constraint},
237 markers={key_from_req(ireq.req): ireq.markers
238 for ireq in constraints if ireq.markers},
239 hashes=hashes)
240
241 if dry_run:
242 log.warning('Dry-run, so nothing updated.')
243
244
245 def get_pip_command():
246 # Use pip's parser for pip.conf management and defaults.
247 # General options (find_links, index_url, extra_index_url, trusted_host,
248 # and pre) are defered to pip.
249 pip_command = PipCommand()
250 index_opts = pip.cmdoptions.make_option_group(
251 pip.cmdoptions.index_group,
252 pip_command.parser,
253 )
254 pip_command.parser.insert_option_group(0, index_opts)
255 pip_command.parser.add_option(optparse.Option('--pre', action='store_true', default=False))
256
257 return pip_command
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/scripts/compile.py b/piptools/scripts/compile.py
--- a/piptools/scripts/compile.py
+++ b/piptools/scripts/compile.py
@@ -92,7 +92,7 @@
if output_file:
dst_file = output_file
else:
- base_name, _, _ = src_files[0].rpartition('.')
+ base_name = src_files[0].rsplit('.', 1)[0]
dst_file = base_name + '.txt'
if upgrade and upgrade_packages:
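A quick illustration (ordinary `str.rsplit` behaviour, names taken from the diff above) of what the patched line derives for a few inputs:

```python
# Output names produced by the patched expression: base_name = src.rsplit('.', 1)[0]
for src in ("no_extension", "requirements.in", "reqs.dev.in"):
    print(src, "->", src.rsplit('.', 1)[0] + '.txt')
# no_extension    -> no_extension.txt   (no longer a bare '.txt' in the cwd)
# requirements.in -> requirements.txt
# reqs.dev.in     -> reqs.dev.txt
```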
| {"golden_diff": "diff --git a/piptools/scripts/compile.py b/piptools/scripts/compile.py\n--- a/piptools/scripts/compile.py\n+++ b/piptools/scripts/compile.py\n@@ -92,7 +92,7 @@\n if output_file:\n dst_file = output_file\n else:\n- base_name, _, _ = src_files[0].rpartition('.')\n+ base_name = src_files[0].rsplit('.', 1)[0]\n dst_file = base_name + '.txt'\n \n if upgrade and upgrade_packages:\n", "issue": "Providing a source file which does not have an extension writes to a .txt file in current dir\nIf you happen to use an extensionless filename as a source of requirements in, pip-compile will deduce the wrong output filename and generate a \".txt\" file (relative file, thus at current working dir).\r\n\r\n##### Environment Versions\r\n\r\n1. OS Type Should be cross platform, but observed on Windows\r\n1. Python version: `$ python -V` 2.7.8\r\n1. pip version: `$ pip --version` 9.0.1\r\n1. pip-tools version: `$ pip-compile --version` 1.8.1rc2\r\n\r\n##### Steps to replicate\r\n\r\n1. echo \"request\" > no_extension\r\n2. pip-compile no_extension\r\n\r\n##### Expected result\r\n\r\nWe should error out because it is difficult to deduce a new name if there is no extension to remove.\r\n\r\n##### Actual result\r\n\r\na .txt file is generated at the current directory.\r\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nimport optparse\nimport os\nimport sys\nimport tempfile\n\nimport pip\nfrom pip.req import InstallRequirement, parse_requirements\n\nfrom .. import click\nfrom ..exceptions import PipToolsError\nfrom ..logging import log\nfrom ..repositories import LocalRequirementsRepository, PyPIRepository\nfrom ..resolver import Resolver\nfrom ..utils import (assert_compatible_pip_version, is_pinned_requirement,\n key_from_req, dedup)\nfrom ..writer import OutputWriter\n\n# Make sure we're using a compatible version of pip\nassert_compatible_pip_version()\n\nDEFAULT_REQUIREMENTS_FILE = 'requirements.in'\n\n\nclass PipCommand(pip.basecommand.Command):\n name = 'PipCommand'\n\n\[email protected]()\[email protected]_option()\[email protected]('-v', '--verbose', is_flag=True, help=\"Show more output\")\[email protected]('-n', '--dry-run', is_flag=True, help=\"Only show what would happen, don't change anything\")\[email protected]('-p', '--pre', is_flag=True, default=None, help=\"Allow resolving to prereleases (default is not)\")\[email protected]('-r', '--rebuild', is_flag=True, help=\"Clear any caches upfront, rebuild from scratch\")\[email protected]('-f', '--find-links', multiple=True, help=\"Look for archives in this directory or on this HTML page\", envvar='PIP_FIND_LINKS') # noqa\[email protected]('-i', '--index-url', help=\"Change index URL (defaults to PyPI)\", envvar='PIP_INDEX_URL')\[email protected]('--extra-index-url', multiple=True, help=\"Add additional index URL to search\", envvar='PIP_EXTRA_INDEX_URL') # noqa\[email protected]('--client-cert', help=\"Path to SSL client certificate, a single file containing the private key and the certificate in PEM format.\") # noqa\[email protected]('--trusted-host', multiple=True, envvar='PIP_TRUSTED_HOST',\n help=\"Mark this host as trusted, even though it does not have \"\n \"valid or any HTTPS.\")\[email protected]('--header/--no-header', is_flag=True, default=True,\n help=\"Add header to generated file\")\[email protected]('--index/--no-index', is_flag=True, default=True,\n help=\"Add index URL to generated file\")\[email 
protected]('--emit-trusted-host/--no-emit-trusted-host', is_flag=True,\n default=True, help=\"Add trusted host option to generated file\")\[email protected]('--annotate/--no-annotate', is_flag=True, default=True,\n help=\"Annotate results, indicating where dependencies come from\")\[email protected]('-U', '--upgrade', is_flag=True, default=False,\n help='Try to upgrade all dependencies to their latest versions')\[email protected]('-P', '--upgrade-package', 'upgrade_packages', nargs=1, multiple=True,\n help=\"Specify particular packages to upgrade.\")\[email protected]('-o', '--output-file', nargs=1, type=str, default=None,\n help=('Output file name. Required if more than one input file is given. '\n 'Will be derived from input file otherwise.'))\[email protected]('--allow-unsafe', is_flag=True, default=False,\n help=\"Pin packages considered unsafe: pip, setuptools & distribute\")\[email protected]('--generate-hashes', is_flag=True, default=False,\n help=\"Generate pip 8 style hashes in the resulting requirements file.\")\[email protected]('--max-rounds', default=10,\n help=\"Maximum number of rounds before resolving the requirements aborts.\")\[email protected]('src_files', nargs=-1, type=click.Path(exists=True, allow_dash=True))\ndef cli(verbose, dry_run, pre, rebuild, find_links, index_url, extra_index_url,\n client_cert, trusted_host, header, index, emit_trusted_host, annotate,\n upgrade, upgrade_packages, output_file, allow_unsafe, generate_hashes,\n src_files, max_rounds):\n \"\"\"Compiles requirements.txt from requirements.in specs.\"\"\"\n log.verbose = verbose\n\n if len(src_files) == 0:\n if os.path.exists(DEFAULT_REQUIREMENTS_FILE):\n src_files = (DEFAULT_REQUIREMENTS_FILE,)\n elif os.path.exists('setup.py'):\n src_files = ('setup.py',)\n if not output_file:\n output_file = 'requirements.txt'\n else:\n raise click.BadParameter((\"If you do not specify an input file, \"\n \"the default is {} or setup.py\").format(DEFAULT_REQUIREMENTS_FILE))\n\n if len(src_files) == 1 and src_files[0] == '-':\n if not output_file:\n raise click.BadParameter('--output-file is required if input is from stdin')\n\n if len(src_files) > 1 and not output_file:\n raise click.BadParameter('--output-file is required if two or more input files are given.')\n\n if output_file:\n dst_file = output_file\n else:\n base_name, _, _ = src_files[0].rpartition('.')\n dst_file = base_name + '.txt'\n\n if upgrade and upgrade_packages:\n raise click.BadParameter('Only one of --upgrade or --upgrade-package can be provided as an argument.')\n\n ###\n # Setup\n ###\n\n pip_command = get_pip_command()\n\n pip_args = []\n if find_links:\n for link in find_links:\n pip_args.extend(['-f', link])\n if index_url:\n pip_args.extend(['-i', index_url])\n if extra_index_url:\n for extra_index in extra_index_url:\n pip_args.extend(['--extra-index-url', extra_index])\n if client_cert:\n pip_args.extend(['--client-cert', client_cert])\n if pre:\n pip_args.extend(['--pre'])\n if trusted_host:\n for host in trusted_host:\n pip_args.extend(['--trusted-host', host])\n\n pip_options, _ = pip_command.parse_args(pip_args)\n\n session = pip_command._build_session(pip_options)\n repository = PyPIRepository(pip_options, session)\n\n # Pre-parse the inline package upgrade specs: they should take precedence\n # over the stuff in the requirements files\n upgrade_packages = [InstallRequirement.from_line(pkg)\n for pkg in upgrade_packages]\n\n # Proxy with a LocalRequirementsRepository if --upgrade is not specified\n # (= default invocation)\n if 
not (upgrade or upgrade_packages) and os.path.exists(dst_file):\n ireqs = parse_requirements(dst_file, finder=repository.finder, session=repository.session, options=pip_options)\n existing_pins = {key_from_req(ireq.req): ireq for ireq in ireqs if is_pinned_requirement(ireq)}\n repository = LocalRequirementsRepository(existing_pins, repository)\n\n log.debug('Using indexes:')\n # remove duplicate index urls before processing\n repository.finder.index_urls = list(dedup(repository.finder.index_urls))\n for index_url in repository.finder.index_urls:\n log.debug(' {}'.format(index_url))\n\n if repository.finder.find_links:\n log.debug('')\n log.debug('Configuration:')\n for find_link in repository.finder.find_links:\n log.debug(' -f {}'.format(find_link))\n\n ###\n # Parsing/collecting initial requirements\n ###\n\n constraints = []\n for src_file in src_files:\n is_setup_file = os.path.basename(src_file) == 'setup.py'\n if is_setup_file or src_file == '-':\n # pip requires filenames and not files. Since we want to support\n # piping from stdin, we need to briefly save the input from stdin\n # to a temporary file and have pip read that. also used for\n # reading requirements from install_requires in setup.py.\n tmpfile = tempfile.NamedTemporaryFile(mode='wt', delete=False)\n if is_setup_file:\n from distutils.core import run_setup\n dist = run_setup(src_file)\n tmpfile.write('\\n'.join(dist.install_requires))\n else:\n tmpfile.write(sys.stdin.read())\n tmpfile.flush()\n constraints.extend(parse_requirements(\n tmpfile.name, finder=repository.finder, session=repository.session, options=pip_options))\n else:\n constraints.extend(parse_requirements(\n src_file, finder=repository.finder, session=repository.session, options=pip_options))\n\n # Check the given base set of constraints first\n Resolver.check_constraints(constraints)\n\n try:\n resolver = Resolver(constraints, repository, prereleases=pre,\n clear_caches=rebuild, allow_unsafe=allow_unsafe)\n results = resolver.resolve(max_rounds=max_rounds)\n if generate_hashes:\n hashes = resolver.resolve_hashes(results)\n else:\n hashes = None\n except PipToolsError as e:\n log.error(str(e))\n sys.exit(2)\n\n log.debug('')\n\n ##\n # Output\n ##\n\n # Compute reverse dependency annotations statically, from the\n # dependency cache that the resolver has populated by now.\n #\n # TODO (1a): reverse deps for any editable package are lost\n # what SHOULD happen is that they are cached in memory, just\n # not persisted to disk!\n #\n # TODO (1b): perhaps it's easiest if the dependency cache has an API\n # that could take InstallRequirements directly, like:\n #\n # cache.set(ireq, ...)\n #\n # then, when ireq is editable, it would store in\n #\n # editables[egg_name][link_without_fragment] = deps\n # editables['pip-tools']['git+...ols.git@future'] = {'click>=3.0', 'six'}\n #\n # otherwise:\n #\n # self[as_name_version_tuple(ireq)] = {'click>=3.0', 'six'}\n #\n reverse_dependencies = None\n if annotate:\n reverse_dependencies = resolver.reverse_dependencies(results)\n\n writer = OutputWriter(src_files, dst_file, dry_run=dry_run,\n emit_header=header, emit_index=index,\n emit_trusted_host=emit_trusted_host,\n annotate=annotate,\n generate_hashes=generate_hashes,\n default_index_url=repository.DEFAULT_INDEX_URL,\n index_urls=repository.finder.index_urls,\n trusted_hosts=pip_options.trusted_hosts,\n format_control=repository.finder.format_control)\n writer.write(results=results,\n reverse_dependencies=reverse_dependencies,\n 
primary_packages={key_from_req(ireq.req) for ireq in constraints if not ireq.constraint},\n markers={key_from_req(ireq.req): ireq.markers\n for ireq in constraints if ireq.markers},\n hashes=hashes)\n\n if dry_run:\n log.warning('Dry-run, so nothing updated.')\n\n\ndef get_pip_command():\n # Use pip's parser for pip.conf management and defaults.\n # General options (find_links, index_url, extra_index_url, trusted_host,\n # and pre) are defered to pip.\n pip_command = PipCommand()\n index_opts = pip.cmdoptions.make_option_group(\n pip.cmdoptions.index_group,\n pip_command.parser,\n )\n pip_command.parser.insert_option_group(0, index_opts)\n pip_command.parser.add_option(optparse.Option('--pre', action='store_true', default=False))\n\n return pip_command\n", "path": "piptools/scripts/compile.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nimport optparse\nimport os\nimport sys\nimport tempfile\n\nimport pip\nfrom pip.req import InstallRequirement, parse_requirements\n\nfrom .. import click\nfrom ..exceptions import PipToolsError\nfrom ..logging import log\nfrom ..repositories import LocalRequirementsRepository, PyPIRepository\nfrom ..resolver import Resolver\nfrom ..utils import (assert_compatible_pip_version, is_pinned_requirement,\n key_from_req, dedup)\nfrom ..writer import OutputWriter\n\n# Make sure we're using a compatible version of pip\nassert_compatible_pip_version()\n\nDEFAULT_REQUIREMENTS_FILE = 'requirements.in'\n\n\nclass PipCommand(pip.basecommand.Command):\n name = 'PipCommand'\n\n\[email protected]()\[email protected]_option()\[email protected]('-v', '--verbose', is_flag=True, help=\"Show more output\")\[email protected]('-n', '--dry-run', is_flag=True, help=\"Only show what would happen, don't change anything\")\[email protected]('-p', '--pre', is_flag=True, default=None, help=\"Allow resolving to prereleases (default is not)\")\[email protected]('-r', '--rebuild', is_flag=True, help=\"Clear any caches upfront, rebuild from scratch\")\[email protected]('-f', '--find-links', multiple=True, help=\"Look for archives in this directory or on this HTML page\", envvar='PIP_FIND_LINKS') # noqa\[email protected]('-i', '--index-url', help=\"Change index URL (defaults to PyPI)\", envvar='PIP_INDEX_URL')\[email protected]('--extra-index-url', multiple=True, help=\"Add additional index URL to search\", envvar='PIP_EXTRA_INDEX_URL') # noqa\[email protected]('--client-cert', help=\"Path to SSL client certificate, a single file containing the private key and the certificate in PEM format.\") # noqa\[email protected]('--trusted-host', multiple=True, envvar='PIP_TRUSTED_HOST',\n help=\"Mark this host as trusted, even though it does not have \"\n \"valid or any HTTPS.\")\[email protected]('--header/--no-header', is_flag=True, default=True,\n help=\"Add header to generated file\")\[email protected]('--index/--no-index', is_flag=True, default=True,\n help=\"Add index URL to generated file\")\[email protected]('--emit-trusted-host/--no-emit-trusted-host', is_flag=True,\n default=True, help=\"Add trusted host option to generated file\")\[email protected]('--annotate/--no-annotate', is_flag=True, default=True,\n help=\"Annotate results, indicating where dependencies come from\")\[email protected]('-U', '--upgrade', is_flag=True, default=False,\n help='Try to upgrade all dependencies to their latest versions')\[email protected]('-P', '--upgrade-package', 'upgrade_packages', nargs=1, multiple=True,\n 
help=\"Specify particular packages to upgrade.\")\[email protected]('-o', '--output-file', nargs=1, type=str, default=None,\n help=('Output file name. Required if more than one input file is given. '\n 'Will be derived from input file otherwise.'))\[email protected]('--allow-unsafe', is_flag=True, default=False,\n help=\"Pin packages considered unsafe: pip, setuptools & distribute\")\[email protected]('--generate-hashes', is_flag=True, default=False,\n help=\"Generate pip 8 style hashes in the resulting requirements file.\")\[email protected]('--max-rounds', default=10,\n help=\"Maximum number of rounds before resolving the requirements aborts.\")\[email protected]('src_files', nargs=-1, type=click.Path(exists=True, allow_dash=True))\ndef cli(verbose, dry_run, pre, rebuild, find_links, index_url, extra_index_url,\n client_cert, trusted_host, header, index, emit_trusted_host, annotate,\n upgrade, upgrade_packages, output_file, allow_unsafe, generate_hashes,\n src_files, max_rounds):\n \"\"\"Compiles requirements.txt from requirements.in specs.\"\"\"\n log.verbose = verbose\n\n if len(src_files) == 0:\n if os.path.exists(DEFAULT_REQUIREMENTS_FILE):\n src_files = (DEFAULT_REQUIREMENTS_FILE,)\n elif os.path.exists('setup.py'):\n src_files = ('setup.py',)\n if not output_file:\n output_file = 'requirements.txt'\n else:\n raise click.BadParameter((\"If you do not specify an input file, \"\n \"the default is {} or setup.py\").format(DEFAULT_REQUIREMENTS_FILE))\n\n if len(src_files) == 1 and src_files[0] == '-':\n if not output_file:\n raise click.BadParameter('--output-file is required if input is from stdin')\n\n if len(src_files) > 1 and not output_file:\n raise click.BadParameter('--output-file is required if two or more input files are given.')\n\n if output_file:\n dst_file = output_file\n else:\n base_name = src_files[0].rsplit('.', 1)[0]\n dst_file = base_name + '.txt'\n\n if upgrade and upgrade_packages:\n raise click.BadParameter('Only one of --upgrade or --upgrade-package can be provided as an argument.')\n\n ###\n # Setup\n ###\n\n pip_command = get_pip_command()\n\n pip_args = []\n if find_links:\n for link in find_links:\n pip_args.extend(['-f', link])\n if index_url:\n pip_args.extend(['-i', index_url])\n if extra_index_url:\n for extra_index in extra_index_url:\n pip_args.extend(['--extra-index-url', extra_index])\n if client_cert:\n pip_args.extend(['--client-cert', client_cert])\n if pre:\n pip_args.extend(['--pre'])\n if trusted_host:\n for host in trusted_host:\n pip_args.extend(['--trusted-host', host])\n\n pip_options, _ = pip_command.parse_args(pip_args)\n\n session = pip_command._build_session(pip_options)\n repository = PyPIRepository(pip_options, session)\n\n # Pre-parse the inline package upgrade specs: they should take precedence\n # over the stuff in the requirements files\n upgrade_packages = [InstallRequirement.from_line(pkg)\n for pkg in upgrade_packages]\n\n # Proxy with a LocalRequirementsRepository if --upgrade is not specified\n # (= default invocation)\n if not (upgrade or upgrade_packages) and os.path.exists(dst_file):\n ireqs = parse_requirements(dst_file, finder=repository.finder, session=repository.session, options=pip_options)\n existing_pins = {key_from_req(ireq.req): ireq for ireq in ireqs if is_pinned_requirement(ireq)}\n repository = LocalRequirementsRepository(existing_pins, repository)\n\n log.debug('Using indexes:')\n # remove duplicate index urls before processing\n repository.finder.index_urls = list(dedup(repository.finder.index_urls))\n for 
index_url in repository.finder.index_urls:\n log.debug(' {}'.format(index_url))\n\n if repository.finder.find_links:\n log.debug('')\n log.debug('Configuration:')\n for find_link in repository.finder.find_links:\n log.debug(' -f {}'.format(find_link))\n\n ###\n # Parsing/collecting initial requirements\n ###\n\n constraints = []\n for src_file in src_files:\n is_setup_file = os.path.basename(src_file) == 'setup.py'\n if is_setup_file or src_file == '-':\n # pip requires filenames and not files. Since we want to support\n # piping from stdin, we need to briefly save the input from stdin\n # to a temporary file and have pip read that. also used for\n # reading requirements from install_requires in setup.py.\n tmpfile = tempfile.NamedTemporaryFile(mode='wt', delete=False)\n if is_setup_file:\n from distutils.core import run_setup\n dist = run_setup(src_file)\n tmpfile.write('\\n'.join(dist.install_requires))\n else:\n tmpfile.write(sys.stdin.read())\n tmpfile.flush()\n constraints.extend(parse_requirements(\n tmpfile.name, finder=repository.finder, session=repository.session, options=pip_options))\n else:\n constraints.extend(parse_requirements(\n src_file, finder=repository.finder, session=repository.session, options=pip_options))\n\n # Check the given base set of constraints first\n Resolver.check_constraints(constraints)\n\n try:\n resolver = Resolver(constraints, repository, prereleases=pre,\n clear_caches=rebuild, allow_unsafe=allow_unsafe)\n results = resolver.resolve(max_rounds=max_rounds)\n if generate_hashes:\n hashes = resolver.resolve_hashes(results)\n else:\n hashes = None\n except PipToolsError as e:\n log.error(str(e))\n sys.exit(2)\n\n log.debug('')\n\n ##\n # Output\n ##\n\n # Compute reverse dependency annotations statically, from the\n # dependency cache that the resolver has populated by now.\n #\n # TODO (1a): reverse deps for any editable package are lost\n # what SHOULD happen is that they are cached in memory, just\n # not persisted to disk!\n #\n # TODO (1b): perhaps it's easiest if the dependency cache has an API\n # that could take InstallRequirements directly, like:\n #\n # cache.set(ireq, ...)\n #\n # then, when ireq is editable, it would store in\n #\n # editables[egg_name][link_without_fragment] = deps\n # editables['pip-tools']['git+...ols.git@future'] = {'click>=3.0', 'six'}\n #\n # otherwise:\n #\n # self[as_name_version_tuple(ireq)] = {'click>=3.0', 'six'}\n #\n reverse_dependencies = None\n if annotate:\n reverse_dependencies = resolver.reverse_dependencies(results)\n\n writer = OutputWriter(src_files, dst_file, dry_run=dry_run,\n emit_header=header, emit_index=index,\n emit_trusted_host=emit_trusted_host,\n annotate=annotate,\n generate_hashes=generate_hashes,\n default_index_url=repository.DEFAULT_INDEX_URL,\n index_urls=repository.finder.index_urls,\n trusted_hosts=pip_options.trusted_hosts,\n format_control=repository.finder.format_control)\n writer.write(results=results,\n reverse_dependencies=reverse_dependencies,\n primary_packages={key_from_req(ireq.req) for ireq in constraints if not ireq.constraint},\n markers={key_from_req(ireq.req): ireq.markers\n for ireq in constraints if ireq.markers},\n hashes=hashes)\n\n if dry_run:\n log.warning('Dry-run, so nothing updated.')\n\n\ndef get_pip_command():\n # Use pip's parser for pip.conf management and defaults.\n # General options (find_links, index_url, extra_index_url, trusted_host,\n # and pre) are defered to pip.\n pip_command = PipCommand()\n index_opts = pip.cmdoptions.make_option_group(\n 
pip.cmdoptions.index_group,\n pip_command.parser,\n )\n pip_command.parser.insert_option_group(0, index_opts)\n pip_command.parser.add_option(optparse.Option('--pre', action='store_true', default=False))\n\n return pip_command\n", "path": "piptools/scripts/compile.py"}]} | 3,558 | 119 |
gh_patches_debug_10762 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-363 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
opentelemetry-instrumentation-grpc fails if client channel is used as a context manager
**Describe your environment**
Verified on Python 3.8.6 with OpenTelemetry 0.17b0
**Steps to reproduce**
Use a gRPC channel as a context manager.
For instance, this happens if you add instrumentation to the [gRPC helloworld example client](https://github.com/grpc/grpc/tree/026046d84301999936a759c1779669c04c6ffaaa/examples/python/helloworld):
```py
# Copyright 2015 gRPC authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The Python implementation of the GRPC helloworld.Greeter client."""
from __future__ import print_function
import logging
import grpc
from opentelemetry import trace
from opentelemetry.instrumentation.grpc import GrpcInstrumentorClient
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
ConsoleSpanExporter,
SimpleExportSpanProcessor,
)
import helloworld_pb2
import helloworld_pb2_grpc
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
SimpleExportSpanProcessor(ConsoleSpanExporter())
)
GrpcInstrumentorClient().instrument()
def run():
# NOTE(gRPC Python Team): .close() is possible on a channel and should be
# used in circumstances in which the with statement does not fit the needs
# of the code.
with grpc.insecure_channel('localhost:50051') as channel:
stub = helloworld_pb2_grpc.GreeterStub(channel)
response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
print("Greeter client received: " + response.message)
if __name__ == '__main__':
logging.basicConfig()
run()
```
**What is the expected behavior?**
The channel should be usable as normal, except now with instrumentation
**What is the actual behavior?**
```
❯ python greeter_client.py
Traceback (most recent call last):
File "greeter_client.py", line 52, in <module>
run()
File "greeter_client.py", line 44, in run
with grpc.insecure_channel('localhost:50051') as channel:
File "/home/isobel/workspace/tmp/otlp/grpc/lib/python3.8/site-packages/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py", line 235, in __enter__
raise NotImplementedError()
NotImplementedError
```
**Additional context**
LMK and I can make a PR
--- END ISSUE ---
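A grpc-free sketch of the failure mode: any wrapper whose `__enter__` raises `NotImplementedError` breaks the `with` statement exactly as in the traceback above (the real wrapper class is `_InterceptorChannel` in the `_interceptor.py` file below).

```python
# Hypothetical stand-in for _InterceptorChannel's pre-fix context-manager methods.
class WrapperBeforeFix:
    def __enter__(self):
        raise NotImplementedError()

    def __exit__(self, exc_type, exc_val, exc_tb):
        raise NotImplementedError()

try:
    with WrapperBeforeFix():
        pass
except NotImplementedError:
    print("the with-statement fails before the body or __exit__ is ever reached")
```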
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint:disable=relative-beyond-top-level
16 # pylint:disable=arguments-differ
17 # pylint:disable=no-member
18 # pylint:disable=signature-differs
19
20 """Implementation of gRPC Python interceptors."""
21
22
23 import collections
24
25 import grpc
26
27 from opentelemetry.instrumentation.grpc import grpcext
28
29
30 class _UnaryClientInfo(
31 collections.namedtuple("_UnaryClientInfo", ("full_method", "timeout"))
32 ):
33 pass
34
35
36 class _StreamClientInfo(
37 collections.namedtuple(
38 "_StreamClientInfo",
39 ("full_method", "is_client_stream", "is_server_stream", "timeout"),
40 )
41 ):
42 pass
43
44
45 class _InterceptorUnaryUnaryMultiCallable(grpc.UnaryUnaryMultiCallable):
46 def __init__(self, method, base_callable, interceptor):
47 self._method = method
48 self._base_callable = base_callable
49 self._interceptor = interceptor
50
51 def __call__(self, request, timeout=None, metadata=None, credentials=None):
52 def invoker(request, metadata):
53 return self._base_callable(request, timeout, metadata, credentials)
54
55 client_info = _UnaryClientInfo(self._method, timeout)
56 return self._interceptor.intercept_unary(
57 request, metadata, client_info, invoker
58 )
59
60 def with_call(
61 self, request, timeout=None, metadata=None, credentials=None
62 ):
63 def invoker(request, metadata):
64 return self._base_callable.with_call(
65 request, timeout, metadata, credentials
66 )
67
68 client_info = _UnaryClientInfo(self._method, timeout)
69 return self._interceptor.intercept_unary(
70 request, metadata, client_info, invoker
71 )
72
73 def future(self, request, timeout=None, metadata=None, credentials=None):
74 def invoker(request, metadata):
75 return self._base_callable.future(
76 request, timeout, metadata, credentials
77 )
78
79 client_info = _UnaryClientInfo(self._method, timeout)
80 return self._interceptor.intercept_unary(
81 request, metadata, client_info, invoker
82 )
83
84
85 class _InterceptorUnaryStreamMultiCallable(grpc.UnaryStreamMultiCallable):
86 def __init__(self, method, base_callable, interceptor):
87 self._method = method
88 self._base_callable = base_callable
89 self._interceptor = interceptor
90
91 def __call__(self, request, timeout=None, metadata=None, credentials=None):
92 def invoker(request, metadata):
93 return self._base_callable(request, timeout, metadata, credentials)
94
95 client_info = _StreamClientInfo(self._method, False, True, timeout)
96 return self._interceptor.intercept_stream(
97 request, metadata, client_info, invoker
98 )
99
100
101 class _InterceptorStreamUnaryMultiCallable(grpc.StreamUnaryMultiCallable):
102 def __init__(self, method, base_callable, interceptor):
103 self._method = method
104 self._base_callable = base_callable
105 self._interceptor = interceptor
106
107 def __call__(
108 self, request_iterator, timeout=None, metadata=None, credentials=None
109 ):
110 def invoker(request_iterator, metadata):
111 return self._base_callable(
112 request_iterator, timeout, metadata, credentials
113 )
114
115 client_info = _StreamClientInfo(self._method, True, False, timeout)
116 return self._interceptor.intercept_stream(
117 request_iterator, metadata, client_info, invoker
118 )
119
120 def with_call(
121 self, request_iterator, timeout=None, metadata=None, credentials=None
122 ):
123 def invoker(request_iterator, metadata):
124 return self._base_callable.with_call(
125 request_iterator, timeout, metadata, credentials
126 )
127
128 client_info = _StreamClientInfo(self._method, True, False, timeout)
129 return self._interceptor.intercept_stream(
130 request_iterator, metadata, client_info, invoker
131 )
132
133 def future(
134 self, request_iterator, timeout=None, metadata=None, credentials=None
135 ):
136 def invoker(request_iterator, metadata):
137 return self._base_callable.future(
138 request_iterator, timeout, metadata, credentials
139 )
140
141 client_info = _StreamClientInfo(self._method, True, False, timeout)
142 return self._interceptor.intercept_stream(
143 request_iterator, metadata, client_info, invoker
144 )
145
146
147 class _InterceptorStreamStreamMultiCallable(grpc.StreamStreamMultiCallable):
148 def __init__(self, method, base_callable, interceptor):
149 self._method = method
150 self._base_callable = base_callable
151 self._interceptor = interceptor
152
153 def __call__(
154 self, request_iterator, timeout=None, metadata=None, credentials=None
155 ):
156 def invoker(request_iterator, metadata):
157 return self._base_callable(
158 request_iterator, timeout, metadata, credentials
159 )
160
161 client_info = _StreamClientInfo(self._method, True, True, timeout)
162 return self._interceptor.intercept_stream(
163 request_iterator, metadata, client_info, invoker
164 )
165
166
167 class _InterceptorChannel(grpc.Channel):
168 def __init__(self, channel, interceptor):
169 self._channel = channel
170 self._interceptor = interceptor
171
172 def subscribe(self, *args, **kwargs):
173 self._channel.subscribe(*args, **kwargs)
174
175 def unsubscribe(self, *args, **kwargs):
176 self._channel.unsubscribe(*args, **kwargs)
177
178 def unary_unary(
179 self, method, request_serializer=None, response_deserializer=None
180 ):
181 base_callable = self._channel.unary_unary(
182 method, request_serializer, response_deserializer
183 )
184 if isinstance(self._interceptor, grpcext.UnaryClientInterceptor):
185 return _InterceptorUnaryUnaryMultiCallable(
186 method, base_callable, self._interceptor
187 )
188 return base_callable
189
190 def unary_stream(
191 self, method, request_serializer=None, response_deserializer=None
192 ):
193 base_callable = self._channel.unary_stream(
194 method, request_serializer, response_deserializer
195 )
196 if isinstance(self._interceptor, grpcext.StreamClientInterceptor):
197 return _InterceptorUnaryStreamMultiCallable(
198 method, base_callable, self._interceptor
199 )
200 return base_callable
201
202 def stream_unary(
203 self, method, request_serializer=None, response_deserializer=None
204 ):
205 base_callable = self._channel.stream_unary(
206 method, request_serializer, response_deserializer
207 )
208 if isinstance(self._interceptor, grpcext.StreamClientInterceptor):
209 return _InterceptorStreamUnaryMultiCallable(
210 method, base_callable, self._interceptor
211 )
212 return base_callable
213
214 def stream_stream(
215 self, method, request_serializer=None, response_deserializer=None
216 ):
217 base_callable = self._channel.stream_stream(
218 method, request_serializer, response_deserializer
219 )
220 if isinstance(self._interceptor, grpcext.StreamClientInterceptor):
221 return _InterceptorStreamStreamMultiCallable(
222 method, base_callable, self._interceptor
223 )
224 return base_callable
225
226 def close(self):
227 if not hasattr(self._channel, "close"):
228 raise RuntimeError(
229 "close() is not supported with the installed version of grpcio"
230 )
231 self._channel.close()
232
233 def __enter__(self):
234 """Enters the runtime context related to the channel object."""
235 raise NotImplementedError()
236
237 def __exit__(self, exc_type, exc_val, exc_tb):
238 """Exits the runtime context related to the channel object."""
239 raise NotImplementedError()
240
241
242 def intercept_channel(channel, *interceptors):
243 result = channel
244 for interceptor in interceptors:
245 if not isinstance(
246 interceptor, grpcext.UnaryClientInterceptor
247 ) and not isinstance(interceptor, grpcext.StreamClientInterceptor):
248 raise TypeError(
249 "interceptor must be either a "
250 "grpcext.UnaryClientInterceptor or a "
251 "grpcext.StreamClientInterceptor"
252 )
253 result = _InterceptorChannel(result, interceptor)
254 return result
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py
--- a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py
+++ b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py
@@ -232,11 +232,11 @@
def __enter__(self):
"""Enters the runtime context related to the channel object."""
- raise NotImplementedError()
+ return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Exits the runtime context related to the channel object."""
- raise NotImplementedError()
+ self.close()
def intercept_channel(channel, *interceptors):
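A minimal sketch (hypothetical stand-in classes, no grpc required) of the context-manager contract the patch gives the wrapper: `__enter__` returns the wrapper itself so calls inside the block stay instrumented, and `__exit__` closes the underlying channel, mirroring `grpc.Channel`'s own behaviour.

```python
class FakeChannel:
    def close(self):
        print("underlying channel closed")

class InstrumentedWrapper:
    def __init__(self, channel):
        self._channel = channel

    def close(self):
        self._channel.close()

    def __enter__(self):
        return self  # keep handing out the wrapper, not the raw channel

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

with InstrumentedWrapper(FakeChannel()) as ch:
    pass  # on exit: "underlying channel closed"
```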
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py\n--- a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py\n+++ b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py\n@@ -232,11 +232,11 @@\n \n def __enter__(self):\n \"\"\"Enters the runtime context related to the channel object.\"\"\"\n- raise NotImplementedError()\n+ return self\n \n def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"Exits the runtime context related to the channel object.\"\"\"\n- raise NotImplementedError()\n+ self.close()\n \n \n def intercept_channel(channel, *interceptors):\n", "issue": "opentelemetry-instrumentation-grpc fails if client channel is used as a context manager\n**Describe your environment**\r\n\r\nVerified on python 3.8.6 with opentelemetry 0.17b0\r\n\r\n**Steps to reproduce**\r\n\r\nUse a gRPC channel as a context manager.\r\n\r\nFor instance, this happens if you add instrumentation to the [gRPC helloworld example client](https://github.com/grpc/grpc/tree/026046d84301999936a759c1779669c04c6ffaaa/examples/python/helloworld):\r\n\r\n```py\r\n# Copyright 2015 gRPC authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"The Python implementation of the GRPC helloworld.Greeter client.\"\"\"\r\n\r\nfrom __future__ import print_function\r\nimport logging\r\n\r\nimport grpc\r\n\r\nfrom opentelemetry import trace\r\nfrom opentelemetry.instrumentation.grpc import GrpcInstrumentorClient\r\nfrom opentelemetry.sdk.trace import TracerProvider\r\nfrom opentelemetry.sdk.trace.export import (\r\n ConsoleSpanExporter,\r\n SimpleExportSpanProcessor,\r\n)\r\n\r\nimport helloworld_pb2\r\nimport helloworld_pb2_grpc\r\n\r\ntrace.set_tracer_provider(TracerProvider())\r\ntrace.get_tracer_provider().add_span_processor(\r\n SimpleExportSpanProcessor(ConsoleSpanExporter())\r\n)\r\n\r\nGrpcInstrumentorClient().instrument()\r\n\r\n\r\ndef run():\r\n # NOTE(gRPC Python Team): .close() is possible on a channel and should be\r\n # used in circumstances in which the with statement does not fit the needs\r\n # of the code.\r\n with grpc.insecure_channel('localhost:50051') as channel:\r\n stub = helloworld_pb2_grpc.GreeterStub(channel)\r\n response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))\r\n print(\"Greeter client received: \" + response.message)\r\n\r\n\r\nif __name__ == '__main__':\r\n logging.basicConfig()\r\n run()\r\n```\r\n\r\n**What is the expected behavior?**\r\n\r\nThe channel should be usable as normal, except now with instrumentation\r\n\r\n**What is the actual behavior?**\r\n\r\n```\r\n\u276f python greeter_client.py\r\nTraceback (most recent call last):\r\n File \"greeter_client.py\", line 52, in <module>\r\n run()\r\n File 
\"greeter_client.py\", line 44, in run\r\n with grpc.insecure_channel('localhost:50051') as channel:\r\n File \"/home/isobel/workspace/tmp/otlp/grpc/lib/python3.8/site-packages/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py\", line 235, in __enter__\r\n raise NotImplementedError()\r\nNotImplementedError\r\n```\r\n\r\n**Additional context**\r\n\r\nLMK and I can make a PR\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint:disable=relative-beyond-top-level\n# pylint:disable=arguments-differ\n# pylint:disable=no-member\n# pylint:disable=signature-differs\n\n\"\"\"Implementation of gRPC Python interceptors.\"\"\"\n\n\nimport collections\n\nimport grpc\n\nfrom opentelemetry.instrumentation.grpc import grpcext\n\n\nclass _UnaryClientInfo(\n collections.namedtuple(\"_UnaryClientInfo\", (\"full_method\", \"timeout\"))\n):\n pass\n\n\nclass _StreamClientInfo(\n collections.namedtuple(\n \"_StreamClientInfo\",\n (\"full_method\", \"is_client_stream\", \"is_server_stream\", \"timeout\"),\n )\n):\n pass\n\n\nclass _InterceptorUnaryUnaryMultiCallable(grpc.UnaryUnaryMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, metadata):\n return self._base_callable(request, timeout, metadata, credentials)\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n def with_call(\n self, request, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request, metadata):\n return self._base_callable.with_call(\n request, timeout, metadata, credentials\n )\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n def future(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, metadata):\n return self._base_callable.future(\n request, timeout, metadata, credentials\n )\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n\nclass _InterceptorUnaryStreamMultiCallable(grpc.UnaryStreamMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, metadata):\n return self._base_callable(request, timeout, metadata, credentials)\n\n client_info = _StreamClientInfo(self._method, False, True, timeout)\n return self._interceptor.intercept_stream(\n request, metadata, client_info, invoker\n )\n\n\nclass _InterceptorStreamUnaryMultiCallable(grpc.StreamUnaryMultiCallable):\n def __init__(self, 
method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n def with_call(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable.with_call(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n def future(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable.future(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n\nclass _InterceptorStreamStreamMultiCallable(grpc.StreamStreamMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, True, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n\nclass _InterceptorChannel(grpc.Channel):\n def __init__(self, channel, interceptor):\n self._channel = channel\n self._interceptor = interceptor\n\n def subscribe(self, *args, **kwargs):\n self._channel.subscribe(*args, **kwargs)\n\n def unsubscribe(self, *args, **kwargs):\n self._channel.unsubscribe(*args, **kwargs)\n\n def unary_unary(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.unary_unary(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.UnaryClientInterceptor):\n return _InterceptorUnaryUnaryMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def unary_stream(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.unary_stream(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorUnaryStreamMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def stream_unary(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.stream_unary(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorStreamUnaryMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def stream_stream(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = 
self._channel.stream_stream(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorStreamStreamMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def close(self):\n if not hasattr(self._channel, \"close\"):\n raise RuntimeError(\n \"close() is not supported with the installed version of grpcio\"\n )\n self._channel.close()\n\n def __enter__(self):\n \"\"\"Enters the runtime context related to the channel object.\"\"\"\n raise NotImplementedError()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"Exits the runtime context related to the channel object.\"\"\"\n raise NotImplementedError()\n\n\ndef intercept_channel(channel, *interceptors):\n result = channel\n for interceptor in interceptors:\n if not isinstance(\n interceptor, grpcext.UnaryClientInterceptor\n ) and not isinstance(interceptor, grpcext.StreamClientInterceptor):\n raise TypeError(\n \"interceptor must be either a \"\n \"grpcext.UnaryClientInterceptor or a \"\n \"grpcext.StreamClientInterceptor\"\n )\n result = _InterceptorChannel(result, interceptor)\n return result\n", "path": "instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint:disable=relative-beyond-top-level\n# pylint:disable=arguments-differ\n# pylint:disable=no-member\n# pylint:disable=signature-differs\n\n\"\"\"Implementation of gRPC Python interceptors.\"\"\"\n\n\nimport collections\n\nimport grpc\n\nfrom opentelemetry.instrumentation.grpc import grpcext\n\n\nclass _UnaryClientInfo(\n collections.namedtuple(\"_UnaryClientInfo\", (\"full_method\", \"timeout\"))\n):\n pass\n\n\nclass _StreamClientInfo(\n collections.namedtuple(\n \"_StreamClientInfo\",\n (\"full_method\", \"is_client_stream\", \"is_server_stream\", \"timeout\"),\n )\n):\n pass\n\n\nclass _InterceptorUnaryUnaryMultiCallable(grpc.UnaryUnaryMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, metadata):\n return self._base_callable(request, timeout, metadata, credentials)\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n def with_call(\n self, request, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request, metadata):\n return self._base_callable.with_call(\n request, timeout, metadata, credentials\n )\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n def future(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, 
metadata):\n return self._base_callable.future(\n request, timeout, metadata, credentials\n )\n\n client_info = _UnaryClientInfo(self._method, timeout)\n return self._interceptor.intercept_unary(\n request, metadata, client_info, invoker\n )\n\n\nclass _InterceptorUnaryStreamMultiCallable(grpc.UnaryStreamMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(self, request, timeout=None, metadata=None, credentials=None):\n def invoker(request, metadata):\n return self._base_callable(request, timeout, metadata, credentials)\n\n client_info = _StreamClientInfo(self._method, False, True, timeout)\n return self._interceptor.intercept_stream(\n request, metadata, client_info, invoker\n )\n\n\nclass _InterceptorStreamUnaryMultiCallable(grpc.StreamUnaryMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n def with_call(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable.with_call(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n def future(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable.future(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, False, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n\nclass _InterceptorStreamStreamMultiCallable(grpc.StreamStreamMultiCallable):\n def __init__(self, method, base_callable, interceptor):\n self._method = method\n self._base_callable = base_callable\n self._interceptor = interceptor\n\n def __call__(\n self, request_iterator, timeout=None, metadata=None, credentials=None\n ):\n def invoker(request_iterator, metadata):\n return self._base_callable(\n request_iterator, timeout, metadata, credentials\n )\n\n client_info = _StreamClientInfo(self._method, True, True, timeout)\n return self._interceptor.intercept_stream(\n request_iterator, metadata, client_info, invoker\n )\n\n\nclass _InterceptorChannel(grpc.Channel):\n def __init__(self, channel, interceptor):\n self._channel = channel\n self._interceptor = interceptor\n\n def subscribe(self, *args, **kwargs):\n self._channel.subscribe(*args, **kwargs)\n\n def unsubscribe(self, *args, **kwargs):\n self._channel.unsubscribe(*args, **kwargs)\n\n def unary_unary(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.unary_unary(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.UnaryClientInterceptor):\n return _InterceptorUnaryUnaryMultiCallable(\n method, base_callable, 
self._interceptor\n )\n return base_callable\n\n def unary_stream(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.unary_stream(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorUnaryStreamMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def stream_unary(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.stream_unary(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorStreamUnaryMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def stream_stream(\n self, method, request_serializer=None, response_deserializer=None\n ):\n base_callable = self._channel.stream_stream(\n method, request_serializer, response_deserializer\n )\n if isinstance(self._interceptor, grpcext.StreamClientInterceptor):\n return _InterceptorStreamStreamMultiCallable(\n method, base_callable, self._interceptor\n )\n return base_callable\n\n def close(self):\n if not hasattr(self._channel, \"close\"):\n raise RuntimeError(\n \"close() is not supported with the installed version of grpcio\"\n )\n self._channel.close()\n\n def __enter__(self):\n \"\"\"Enters the runtime context related to the channel object.\"\"\"\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"Exits the runtime context related to the channel object.\"\"\"\n self.close()\n\n\ndef intercept_channel(channel, *interceptors):\n result = channel\n for interceptor in interceptors:\n if not isinstance(\n interceptor, grpcext.UnaryClientInterceptor\n ) and not isinstance(interceptor, grpcext.StreamClientInterceptor):\n raise TypeError(\n \"interceptor must be either a \"\n \"grpcext.UnaryClientInterceptor or a \"\n \"grpcext.StreamClientInterceptor\"\n )\n result = _InterceptorChannel(result, interceptor)\n return result\n", "path": "instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/grpcext/_interceptor.py"}]} | 3,457 | 208 |
gh_patches_debug_10020 | rasdani/github-patches | git_diff | onnx__sklearn-onnx-440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SelectFromModel converted model with threshold such that no features are selected gives an error
A SelectFromModel model in scikit-learn with a threshold such that no features are selected returns an array of shape (M, 0), where M is the number of test instances; the ONNX converter, however, can't handle it.
RuntimeError: [ONNXRuntimeError] : 1 : GENERAL ERROR : Load model from pca1.onnx failed:[ShapeInferenceError] Shape input must be a one-dimensional tensor.
--- END ISSUE ---
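For context, a minimal reproduction sketch of the reported behaviour, assuming current scikit-learn/skl2onnx APIs (the dataset, estimator and threshold value here are illustrative, not taken from the report):
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)

# Threshold chosen so high that no feature passes the importance cut-off.
selector = SelectFromModel(LogisticRegression(max_iter=200), threshold=1e6).fit(X, y)
print(selector.transform(X).shape)         # (150, 0) -- zero columns selected
print(selector.get_support(indices=True))  # empty index array

# Depending on the skl2onnx version, the next call either raises during
# conversion or produces a model that onnxruntime later rejects with the
# ShapeInferenceError quoted above.
onx = convert_sklearn(
    selector,
    initial_types=[("input", FloatTensorType([None, X.shape[1]]))],
)
```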
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skl2onnx/operator_converters/feature_selection.py`
Content:
```
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for
4 # license information.
5 # --------------------------------------------------------------------------
6
7 from ..proto import onnx_proto
8 from ..common._apply_operation import apply_cast
9 from ..common._registration import register_converter
10 from ..common.data_types import FloatTensorType, FloatType
11
12
13 def convert_sklearn_feature_selection(scope, operator, container):
14 op = operator.raw_operator
15 # Get indices of the features selected
16 index = op.get_support(indices=True)
17 needs_cast = not isinstance(operator.inputs[0].type,
18 (FloatTensorType, FloatType))
19 if needs_cast:
20 output_name = scope.get_unique_variable_name('output')
21 else:
22 output_name = operator.outputs[0].full_name
23
24 if index.any():
25 column_indices_name = scope.get_unique_variable_name('column_indices')
26
27 container.add_initializer(column_indices_name,
28 onnx_proto.TensorProto.INT64,
29 [len(index)], index)
30
31 container.add_node(
32 'ArrayFeatureExtractor',
33 [operator.inputs[0].full_name, column_indices_name],
34 output_name, op_domain='ai.onnx.ml',
35 name=scope.get_unique_operator_name('ArrayFeatureExtractor'))
36 else:
37 container.add_node('ConstantOfShape', operator.inputs[0].full_name,
38 output_name, op_version=9)
39 if needs_cast:
40 apply_cast(scope, output_name, operator.outputs[0].full_name,
41 container, to=onnx_proto.TensorProto.FLOAT)
42
43
44 register_converter('SklearnGenericUnivariateSelect',
45 convert_sklearn_feature_selection)
46 register_converter('SklearnRFE', convert_sklearn_feature_selection)
47 register_converter('SklearnRFECV', convert_sklearn_feature_selection)
48 register_converter('SklearnSelectFdr', convert_sklearn_feature_selection)
49 register_converter('SklearnSelectFpr', convert_sklearn_feature_selection)
50 register_converter('SklearnSelectFromModel', convert_sklearn_feature_selection)
51 register_converter('SklearnSelectFwe', convert_sklearn_feature_selection)
52 register_converter('SklearnSelectKBest', convert_sklearn_feature_selection)
53 register_converter('SklearnSelectPercentile',
54 convert_sklearn_feature_selection)
55 register_converter('SklearnVarianceThreshold',
56 convert_sklearn_feature_selection)
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skl2onnx/operator_converters/feature_selection.py b/skl2onnx/operator_converters/feature_selection.py
--- a/skl2onnx/operator_converters/feature_selection.py
+++ b/skl2onnx/operator_converters/feature_selection.py
@@ -14,6 +14,11 @@
op = operator.raw_operator
# Get indices of the features selected
index = op.get_support(indices=True)
+ if len(index) == 0:
+ raise RuntimeError(
+ "Model '{}' did not select any feature. "
+ "This model cannot be converted into ONNX."
+ "".format(op.__class__.__name__))
needs_cast = not isinstance(operator.inputs[0].type,
(FloatTensorType, FloatType))
if needs_cast:
| {"golden_diff": "diff --git a/skl2onnx/operator_converters/feature_selection.py b/skl2onnx/operator_converters/feature_selection.py\n--- a/skl2onnx/operator_converters/feature_selection.py\n+++ b/skl2onnx/operator_converters/feature_selection.py\n@@ -14,6 +14,11 @@\n op = operator.raw_operator\n # Get indices of the features selected\n index = op.get_support(indices=True)\n+ if len(index) == 0:\n+ raise RuntimeError(\n+ \"Model '{}' did not select any feature. \"\n+ \"This model cannot be converted into ONNX.\"\n+ \"\".format(op.__class__.__name__))\n needs_cast = not isinstance(operator.inputs[0].type,\n (FloatTensorType, FloatType))\n if needs_cast:\n", "issue": "SelectFromModel converted model with threshold such that no features selected give error\nSelectFromModel model in Scikit with threshold such that no features are selected returns an array of shape (M, 0), where M is the number of test instances, onnx converter however can't handle it.\r\n\r\nRuntimeError: [ONNXRuntimeError] : 1 : GENERAL ERROR : Load model from pca1.onnx failed:[ShapeInferenceError] Shape input must be a one-dimensional tensor. \n", "before_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\nfrom ..proto import onnx_proto\nfrom ..common._apply_operation import apply_cast\nfrom ..common._registration import register_converter\nfrom ..common.data_types import FloatTensorType, FloatType\n\n\ndef convert_sklearn_feature_selection(scope, operator, container):\n op = operator.raw_operator\n # Get indices of the features selected\n index = op.get_support(indices=True)\n needs_cast = not isinstance(operator.inputs[0].type,\n (FloatTensorType, FloatType))\n if needs_cast:\n output_name = scope.get_unique_variable_name('output')\n else:\n output_name = operator.outputs[0].full_name\n\n if index.any():\n column_indices_name = scope.get_unique_variable_name('column_indices')\n\n container.add_initializer(column_indices_name,\n onnx_proto.TensorProto.INT64,\n [len(index)], index)\n\n container.add_node(\n 'ArrayFeatureExtractor',\n [operator.inputs[0].full_name, column_indices_name],\n output_name, op_domain='ai.onnx.ml',\n name=scope.get_unique_operator_name('ArrayFeatureExtractor'))\n else:\n container.add_node('ConstantOfShape', operator.inputs[0].full_name,\n output_name, op_version=9)\n if needs_cast:\n apply_cast(scope, output_name, operator.outputs[0].full_name,\n container, to=onnx_proto.TensorProto.FLOAT)\n\n\nregister_converter('SklearnGenericUnivariateSelect',\n convert_sklearn_feature_selection)\nregister_converter('SklearnRFE', convert_sklearn_feature_selection)\nregister_converter('SklearnRFECV', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFdr', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFpr', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFromModel', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFwe', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectKBest', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectPercentile',\n convert_sklearn_feature_selection)\nregister_converter('SklearnVarianceThreshold',\n convert_sklearn_feature_selection)\n", "path": 
"skl2onnx/operator_converters/feature_selection.py"}], "after_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\nfrom ..proto import onnx_proto\nfrom ..common._apply_operation import apply_cast\nfrom ..common._registration import register_converter\nfrom ..common.data_types import FloatTensorType, FloatType\n\n\ndef convert_sklearn_feature_selection(scope, operator, container):\n op = operator.raw_operator\n # Get indices of the features selected\n index = op.get_support(indices=True)\n if len(index) == 0:\n raise RuntimeError(\n \"Model '{}' did not select any feature. \"\n \"This model cannot be converted into ONNX.\"\n \"\".format(op.__class__.__name__))\n needs_cast = not isinstance(operator.inputs[0].type,\n (FloatTensorType, FloatType))\n if needs_cast:\n output_name = scope.get_unique_variable_name('output')\n else:\n output_name = operator.outputs[0].full_name\n\n if index.any():\n column_indices_name = scope.get_unique_variable_name('column_indices')\n\n container.add_initializer(column_indices_name,\n onnx_proto.TensorProto.INT64,\n [len(index)], index)\n\n container.add_node(\n 'ArrayFeatureExtractor',\n [operator.inputs[0].full_name, column_indices_name],\n output_name, op_domain='ai.onnx.ml',\n name=scope.get_unique_operator_name('ArrayFeatureExtractor'))\n else:\n container.add_node('ConstantOfShape', operator.inputs[0].full_name,\n output_name, op_version=9)\n if needs_cast:\n apply_cast(scope, output_name, operator.outputs[0].full_name,\n container, to=onnx_proto.TensorProto.FLOAT)\n\n\nregister_converter('SklearnGenericUnivariateSelect',\n convert_sklearn_feature_selection)\nregister_converter('SklearnRFE', convert_sklearn_feature_selection)\nregister_converter('SklearnRFECV', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFdr', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFpr', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFromModel', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectFwe', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectKBest', convert_sklearn_feature_selection)\nregister_converter('SklearnSelectPercentile',\n convert_sklearn_feature_selection)\nregister_converter('SklearnVarianceThreshold',\n convert_sklearn_feature_selection)\n", "path": "skl2onnx/operator_converters/feature_selection.py"}]} | 954 | 174 |
gh_patches_debug_42030 | rasdani/github-patches | git_diff | ManageIQ__integration_tests-3352 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check if we have fallback if no trackerbot URL specified
We had some issues that people with no trackerbot URL in config were not able to run tests:
- Investigate what is happening
- Add fallback (do not check templates at all? Load them from all providers?)
--- END ISSUE ---
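For the second bullet, a minimal sketch of what such a fallback could look like, assuming `TEMPLATES`-style data (a mapping of provider key to available template names) that is simply empty when no trackerbot URL is configured; the helper name and signature are illustrative, not the project's actual API:
```python
# Hypothetical helper: decide whether a template can be used when trackerbot
# data may be missing entirely because no trackerbot URL was configured.
def template_usable(template_name, provider_key, templates):
    if not templates:
        # Nothing was pulled from trackerbot at all -- don't block the run,
        # trust the template named in the provider's YAML.
        return True
    provider_templates = templates.get(provider_key)
    return provider_templates is not None and template_name in provider_templates
```
This is the same guard-first pattern the fix shown later in this entry applies around each `TEMPLATES` lookup.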
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fixtures/provider.py`
Content:
```
1 """``setup_provider`` fixture
2
3 In test modules paramatrized with :py:func:`utils.testgen.provider_by_type` (should be
4 just about any module that needs a provider to run its tests), this fixture will set up
5 the single provider needed to run that test.
6
7 If the provider setup fails, this fixture will record that failure and skip future tests
8 using the provider.
9
10 """
11 import pytest
12
13 from fixtures.artifactor_plugin import art_client, get_test_idents
14 from fixtures.templateloader import TEMPLATES
15 from utils import providers
16 from utils.log import logger
17
18 # failed provider tracking for _setup_provider_fixture
19 _failed_providers = set()
20
21
22 def _setup_provider(provider_key, request=None):
23 def skip(provider_key, previous_fail=False):
24 if request:
25 node = request.node
26 name, location = get_test_idents(node)
27 skip_data = {'type': 'provider', 'reason': provider_key}
28 art_client.fire_hook('skip_test', test_location=location, test_name=name,
29 skip_data=skip_data)
30 if previous_fail:
31 raise pytest.skip('Provider {} failed to set up previously in another test, '
32 'skipping test'.format(provider_key))
33 else:
34 raise pytest.skip('Provider {} failed to set up this time, '
35 'skipping test'.format(provider_key))
36 # This function is dynamically "fixturized" to setup up a specific provider,
37 # optionally skipping the provider setup if that provider has previously failed.
38 if provider_key in _failed_providers:
39 skip(provider_key, previous_fail=True)
40
41 try:
42 providers.setup_provider(provider_key)
43 except Exception as ex:
44 logger.error('Error setting up provider {}'.format(provider_key))
45 logger.exception(ex)
46 _failed_providers.add(provider_key)
47 skip(provider_key)
48
49
50 @pytest.fixture(scope='function')
51 def setup_provider(request, provider):
52 """Function-scoped fixture to set up a provider"""
53 _setup_provider(provider.key, request)
54
55
56 @pytest.fixture(scope='module')
57 def setup_provider_modscope(request, provider):
58 """Function-scoped fixture to set up a provider"""
59 _setup_provider(provider.key, request)
60
61
62 @pytest.fixture(scope='class')
63 def setup_provider_clsscope(request, provider):
64 """Module-scoped fixture to set up a provider"""
65 _setup_provider(provider.key, request)
66
67
68 @pytest.fixture
69 def setup_provider_funcscope(request, provider):
70 """Function-scoped fixture to set up a provider
71
72 Note:
73
74 While there are cases where this is useful, provider fixtures should
75 be module-scoped the majority of the time.
76
77 """
78 _setup_provider(provider.key, request)
79
80
81 @pytest.fixture(scope="session")
82 def any_provider_session():
83 providers.clear_providers() # To make it clean
84 providers.setup_a_provider(validate=True, check_existing=True)
85
86
87 @pytest.fixture(scope="function")
88 def template(template_location, provider):
89 if template_location is not None:
90 o = provider.data
91 try:
92 for field in template_location:
93 o = o[field]
94 except (IndexError, KeyError):
95 logger.info("Cannot apply %s to %s in the template specification, ignoring.",
96 repr(field), repr(o))
97 else:
98 if not isinstance(o, basestring):
99 raise ValueError("{} is not a string! (for template)".format(repr(o)))
100 templates = TEMPLATES.get(provider.key, None)
101 if templates is not None:
102 if o in templates:
103 return o
104 logger.info(
105 "Wanted template {} on {} but it is not there!\n".format(o, provider.key))
106 pytest.skip('Template not available')
107
108
109 def _small_template(provider):
110 template = provider.data.get('small_template', None)
111 if template:
112 templates = TEMPLATES.get(provider.key, None)
113 if templates is not None:
114 if template in templates:
115 return template
116 logger.info(
117 "Wanted template {} on {} but it is not there!\n".format(template, provider.key))
118 pytest.skip('Template not available')
119
120
121 @pytest.fixture(scope="function")
122 def small_template(provider):
123 return _small_template(provider)
124
125
126 @pytest.fixture(scope="module")
127 def small_template_modscope(provider):
128 return _small_template(provider)
129
130
131 @pytest.fixture(scope="function")
132 def full_template(provider):
133 template = provider.data.get('full_template', {})
134 if template:
135 templates = TEMPLATES.get(provider.key, None)
136 if templates is not None:
137 if template['name'] in templates:
138 return template
139 logger.info(
140 "Wanted template {} on {} but it is not there!\n".format(template, provider.key))
141 pytest.skip('Template not available')
142
143
144 @pytest.fixture(scope="function")
145 def provisioning(provider):
146 return provider.data['provisioning']
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fixtures/provider.py b/fixtures/provider.py
--- a/fixtures/provider.py
+++ b/fixtures/provider.py
@@ -9,6 +9,7 @@
"""
import pytest
+import six
from fixtures.artifactor_plugin import art_client, get_test_idents
from fixtures.templateloader import TEMPLATES
@@ -41,7 +42,7 @@
try:
providers.setup_provider(provider_key)
except Exception as ex:
- logger.error('Error setting up provider {}'.format(provider_key))
+ logger.error('Error setting up provider %s', provider_key)
logger.exception(ex)
_failed_providers.add(provider_key)
skip(provider_key)
@@ -92,29 +93,35 @@
for field in template_location:
o = o[field]
except (IndexError, KeyError):
- logger.info("Cannot apply %s to %s in the template specification, ignoring.",
- repr(field), repr(o))
+ logger.info("Cannot apply %r to %r in the template specification, ignoring.", field, o)
else:
- if not isinstance(o, basestring):
- raise ValueError("{} is not a string! (for template)".format(repr(o)))
+ if not isinstance(o, six.string_types):
+ raise ValueError("{!r} is not a string! (for template)".format(o))
+ if not TEMPLATES:
+ # There is nothing in TEMPLATES, that means no trackerbot URL and no data pulled.
+ # This should normally not constitute an issue so continue.
+ return o
templates = TEMPLATES.get(provider.key, None)
if templates is not None:
if o in templates:
return o
- logger.info(
- "Wanted template {} on {} but it is not there!\n".format(o, provider.key))
+ logger.info("Wanted template %s on %s but it is not there!", o, provider.key)
pytest.skip('Template not available')
def _small_template(provider):
template = provider.data.get('small_template', None)
if template:
+ if not TEMPLATES:
+ # Same as couple of lines above
+ return template
templates = TEMPLATES.get(provider.key, None)
if templates is not None:
if template in templates:
return template
- logger.info(
- "Wanted template {} on {} but it is not there!\n".format(template, provider.key))
+ else:
+ pytest.skip('No small_template for provider {}'.format(provider.key))
+ logger.info("Wanted template %s on %s but it is not there!", template, provider.key)
pytest.skip('Template not available')
@@ -132,12 +139,16 @@
def full_template(provider):
template = provider.data.get('full_template', {})
if template:
+ if not TEMPLATES:
+ # Same as couple of lines above
+ return template
templates = TEMPLATES.get(provider.key, None)
if templates is not None:
if template['name'] in templates:
return template
- logger.info(
- "Wanted template {} on {} but it is not there!\n".format(template, provider.key))
+ else:
+ pytest.skip('No full_template for provider {}'.format(provider.key))
+ logger.info("Wanted template %s on %s but it is not there!", template, provider.key)
pytest.skip('Template not available')
| {"golden_diff": "diff --git a/fixtures/provider.py b/fixtures/provider.py\n--- a/fixtures/provider.py\n+++ b/fixtures/provider.py\n@@ -9,6 +9,7 @@\n \n \"\"\"\n import pytest\n+import six\n \n from fixtures.artifactor_plugin import art_client, get_test_idents\n from fixtures.templateloader import TEMPLATES\n@@ -41,7 +42,7 @@\n try:\n providers.setup_provider(provider_key)\n except Exception as ex:\n- logger.error('Error setting up provider {}'.format(provider_key))\n+ logger.error('Error setting up provider %s', provider_key)\n logger.exception(ex)\n _failed_providers.add(provider_key)\n skip(provider_key)\n@@ -92,29 +93,35 @@\n for field in template_location:\n o = o[field]\n except (IndexError, KeyError):\n- logger.info(\"Cannot apply %s to %s in the template specification, ignoring.\",\n- repr(field), repr(o))\n+ logger.info(\"Cannot apply %r to %r in the template specification, ignoring.\", field, o)\n else:\n- if not isinstance(o, basestring):\n- raise ValueError(\"{} is not a string! (for template)\".format(repr(o)))\n+ if not isinstance(o, six.string_types):\n+ raise ValueError(\"{!r} is not a string! (for template)\".format(o))\n+ if not TEMPLATES:\n+ # There is nothing in TEMPLATES, that means no trackerbot URL and no data pulled.\n+ # This should normally not constitute an issue so continue.\n+ return o\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if o in templates:\n return o\n- logger.info(\n- \"Wanted template {} on {} but it is not there!\\n\".format(o, provider.key))\n+ logger.info(\"Wanted template %s on %s but it is not there!\", o, provider.key)\n pytest.skip('Template not available')\n \n \n def _small_template(provider):\n template = provider.data.get('small_template', None)\n if template:\n+ if not TEMPLATES:\n+ # Same as couple of lines above\n+ return template\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template in templates:\n return template\n- logger.info(\n- \"Wanted template {} on {} but it is not there!\\n\".format(template, provider.key))\n+ else:\n+ pytest.skip('No small_template for provider {}'.format(provider.key))\n+ logger.info(\"Wanted template %s on %s but it is not there!\", template, provider.key)\n pytest.skip('Template not available')\n \n \n@@ -132,12 +139,16 @@\n def full_template(provider):\n template = provider.data.get('full_template', {})\n if template:\n+ if not TEMPLATES:\n+ # Same as couple of lines above\n+ return template\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template['name'] in templates:\n return template\n- logger.info(\n- \"Wanted template {} on {} but it is not there!\\n\".format(template, provider.key))\n+ else:\n+ pytest.skip('No full_template for provider {}'.format(provider.key))\n+ logger.info(\"Wanted template %s on %s but it is not there!\", template, provider.key)\n pytest.skip('Template not available')\n", "issue": "Check if we have fallback if no trackerbot URL specified\nWe had some issues htat people with no trackerbot URL in config were not able to run test:\n- Investigate what is happening\n- Add fallback (do not check templates at all? 
Load them from all providers?)\n\n", "before_files": [{"content": "\"\"\"``setup_provider`` fixture\n\nIn test modules paramatrized with :py:func:`utils.testgen.provider_by_type` (should be\njust about any module that needs a provider to run its tests), this fixture will set up\nthe single provider needed to run that test.\n\nIf the provider setup fails, this fixture will record that failure and skip future tests\nusing the provider.\n\n\"\"\"\nimport pytest\n\nfrom fixtures.artifactor_plugin import art_client, get_test_idents\nfrom fixtures.templateloader import TEMPLATES\nfrom utils import providers\nfrom utils.log import logger\n\n# failed provider tracking for _setup_provider_fixture\n_failed_providers = set()\n\n\ndef _setup_provider(provider_key, request=None):\n def skip(provider_key, previous_fail=False):\n if request:\n node = request.node\n name, location = get_test_idents(node)\n skip_data = {'type': 'provider', 'reason': provider_key}\n art_client.fire_hook('skip_test', test_location=location, test_name=name,\n skip_data=skip_data)\n if previous_fail:\n raise pytest.skip('Provider {} failed to set up previously in another test, '\n 'skipping test'.format(provider_key))\n else:\n raise pytest.skip('Provider {} failed to set up this time, '\n 'skipping test'.format(provider_key))\n # This function is dynamically \"fixturized\" to setup up a specific provider,\n # optionally skipping the provider setup if that provider has previously failed.\n if provider_key in _failed_providers:\n skip(provider_key, previous_fail=True)\n\n try:\n providers.setup_provider(provider_key)\n except Exception as ex:\n logger.error('Error setting up provider {}'.format(provider_key))\n logger.exception(ex)\n _failed_providers.add(provider_key)\n skip(provider_key)\n\n\[email protected](scope='function')\ndef setup_provider(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected](scope='module')\ndef setup_provider_modscope(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected](scope='class')\ndef setup_provider_clsscope(request, provider):\n \"\"\"Module-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected]\ndef setup_provider_funcscope(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\n\n Note:\n\n While there are cases where this is useful, provider fixtures should\n be module-scoped the majority of the time.\n\n \"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected](scope=\"session\")\ndef any_provider_session():\n providers.clear_providers() # To make it clean\n providers.setup_a_provider(validate=True, check_existing=True)\n\n\[email protected](scope=\"function\")\ndef template(template_location, provider):\n if template_location is not None:\n o = provider.data\n try:\n for field in template_location:\n o = o[field]\n except (IndexError, KeyError):\n logger.info(\"Cannot apply %s to %s in the template specification, ignoring.\",\n repr(field), repr(o))\n else:\n if not isinstance(o, basestring):\n raise ValueError(\"{} is not a string! 
(for template)\".format(repr(o)))\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if o in templates:\n return o\n logger.info(\n \"Wanted template {} on {} but it is not there!\\n\".format(o, provider.key))\n pytest.skip('Template not available')\n\n\ndef _small_template(provider):\n template = provider.data.get('small_template', None)\n if template:\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template in templates:\n return template\n logger.info(\n \"Wanted template {} on {} but it is not there!\\n\".format(template, provider.key))\n pytest.skip('Template not available')\n\n\[email protected](scope=\"function\")\ndef small_template(provider):\n return _small_template(provider)\n\n\[email protected](scope=\"module\")\ndef small_template_modscope(provider):\n return _small_template(provider)\n\n\[email protected](scope=\"function\")\ndef full_template(provider):\n template = provider.data.get('full_template', {})\n if template:\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template['name'] in templates:\n return template\n logger.info(\n \"Wanted template {} on {} but it is not there!\\n\".format(template, provider.key))\n pytest.skip('Template not available')\n\n\[email protected](scope=\"function\")\ndef provisioning(provider):\n return provider.data['provisioning']\n", "path": "fixtures/provider.py"}], "after_files": [{"content": "\"\"\"``setup_provider`` fixture\n\nIn test modules paramatrized with :py:func:`utils.testgen.provider_by_type` (should be\njust about any module that needs a provider to run its tests), this fixture will set up\nthe single provider needed to run that test.\n\nIf the provider setup fails, this fixture will record that failure and skip future tests\nusing the provider.\n\n\"\"\"\nimport pytest\nimport six\n\nfrom fixtures.artifactor_plugin import art_client, get_test_idents\nfrom fixtures.templateloader import TEMPLATES\nfrom utils import providers\nfrom utils.log import logger\n\n# failed provider tracking for _setup_provider_fixture\n_failed_providers = set()\n\n\ndef _setup_provider(provider_key, request=None):\n def skip(provider_key, previous_fail=False):\n if request:\n node = request.node\n name, location = get_test_idents(node)\n skip_data = {'type': 'provider', 'reason': provider_key}\n art_client.fire_hook('skip_test', test_location=location, test_name=name,\n skip_data=skip_data)\n if previous_fail:\n raise pytest.skip('Provider {} failed to set up previously in another test, '\n 'skipping test'.format(provider_key))\n else:\n raise pytest.skip('Provider {} failed to set up this time, '\n 'skipping test'.format(provider_key))\n # This function is dynamically \"fixturized\" to setup up a specific provider,\n # optionally skipping the provider setup if that provider has previously failed.\n if provider_key in _failed_providers:\n skip(provider_key, previous_fail=True)\n\n try:\n providers.setup_provider(provider_key)\n except Exception as ex:\n logger.error('Error setting up provider %s', provider_key)\n logger.exception(ex)\n _failed_providers.add(provider_key)\n skip(provider_key)\n\n\[email protected](scope='function')\ndef setup_provider(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected](scope='module')\ndef setup_provider_modscope(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, 
request)\n\n\[email protected](scope='class')\ndef setup_provider_clsscope(request, provider):\n \"\"\"Module-scoped fixture to set up a provider\"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected]\ndef setup_provider_funcscope(request, provider):\n \"\"\"Function-scoped fixture to set up a provider\n\n Note:\n\n While there are cases where this is useful, provider fixtures should\n be module-scoped the majority of the time.\n\n \"\"\"\n _setup_provider(provider.key, request)\n\n\[email protected](scope=\"session\")\ndef any_provider_session():\n providers.clear_providers() # To make it clean\n providers.setup_a_provider(validate=True, check_existing=True)\n\n\[email protected](scope=\"function\")\ndef template(template_location, provider):\n if template_location is not None:\n o = provider.data\n try:\n for field in template_location:\n o = o[field]\n except (IndexError, KeyError):\n logger.info(\"Cannot apply %r to %r in the template specification, ignoring.\", field, o)\n else:\n if not isinstance(o, six.string_types):\n raise ValueError(\"{!r} is not a string! (for template)\".format(o))\n if not TEMPLATES:\n # There is nothing in TEMPLATES, that means no trackerbot URL and no data pulled.\n # This should normally not constitute an issue so continue.\n return o\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if o in templates:\n return o\n logger.info(\"Wanted template %s on %s but it is not there!\", o, provider.key)\n pytest.skip('Template not available')\n\n\ndef _small_template(provider):\n template = provider.data.get('small_template', None)\n if template:\n if not TEMPLATES:\n # Same as couple of lines above\n return template\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template in templates:\n return template\n else:\n pytest.skip('No small_template for provider {}'.format(provider.key))\n logger.info(\"Wanted template %s on %s but it is not there!\", template, provider.key)\n pytest.skip('Template not available')\n\n\[email protected](scope=\"function\")\ndef small_template(provider):\n return _small_template(provider)\n\n\[email protected](scope=\"module\")\ndef small_template_modscope(provider):\n return _small_template(provider)\n\n\[email protected](scope=\"function\")\ndef full_template(provider):\n template = provider.data.get('full_template', {})\n if template:\n if not TEMPLATES:\n # Same as couple of lines above\n return template\n templates = TEMPLATES.get(provider.key, None)\n if templates is not None:\n if template['name'] in templates:\n return template\n else:\n pytest.skip('No full_template for provider {}'.format(provider.key))\n logger.info(\"Wanted template %s on %s but it is not there!\", template, provider.key)\n pytest.skip('Template not available')\n\n\[email protected](scope=\"function\")\ndef provisioning(provider):\n return provider.data['provisioning']\n", "path": "fixtures/provider.py"}]} | 1,679 | 756 |
gh_patches_debug_23580 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3516 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confusing confdir argument
My tests are messy because I used both mitmproxy and mitmdump, in 4.0.1 and 4.0.3, and I have now mixed up all the cases.
At some point I think I had an error saying that --confdir was deprecated and that I had to use "--set confdir=" (I can't reproduce this case now with either mitmproxy or mitmdump...)
I spent some time trying to make "--set confdir=" work in a convoluted bash script with quoted arguments, and the failure may well have been my own mistake.
But I eventually realized that --confdir was still working for mitmdump in 4.0.3.
To sum up, my question:
Is it intended to have both "--confdir" and "--set confdir=" working at the same time for mitmproxy & mitmdump?
If yes, the help (-h) should state it clearly with something like: "--confdir PATH, --set confdir=PATH"
If not, one of them should be removed.
--- END ISSUE ---
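To make the overlap concrete, here is a small standalone sketch (plain argparse, not mitmproxy's real option handling) of how one option can end up reachable both through a dedicated flag and through a generic --set flag, with the effective value depending on which path is applied last:
```python
import argparse

# Stripped-down, hypothetical parser mimicking the situation described above;
# this is only an illustration of the overlap, not mitmproxy's actual code.
parser = argparse.ArgumentParser()
parser.add_argument("--confdir", default="~/.mitmproxy", metavar="PATH")
parser.add_argument("--set", dest="setoptions", action="append", default=[],
                    metavar="option[=value]")

args = parser.parse_args(["--confdir", "/tmp/conf-a", "--set", "confdir=/tmp/conf-b"])

options = {"confdir": args.confdir}   # dedicated flag applied first
for item in args.setoptions:          # generic --set applied afterwards
    key, _, value = item.partition("=")
    options[key] = value

print(options["confdir"])             # -> /tmp/conf-b: the --set value silently wins
```
Whichever behaviour is intended, advertising a single supported way to set confdir in the help output would remove the ambiguity.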
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/tools/cmdline.py`
Content:
```
1 import argparse
2
3 from mitmproxy.addons import core
4
5
6 def common_options(parser, opts):
7 parser.add_argument(
8 '--version',
9 action='store_true',
10 help="show version number and exit",
11 dest='version',
12 )
13 parser.add_argument(
14 '--options',
15 action='store_true',
16 help="Show all options and their default values",
17 )
18 parser.add_argument(
19 '--commands',
20 action='store_true',
21 help="Show all commands and their signatures",
22 )
23 parser.add_argument(
24 "--confdir",
25 type=str, dest="confdir", default=core.CONF_DIR,
26 metavar="PATH",
27 help="Path to the mitmproxy config directory"
28 )
29 parser.add_argument(
30 "--set",
31 type=str, dest="setoptions", default=[],
32 action="append",
33 metavar="option[=value]",
34 help="""
35 Set an option. When the value is omitted, booleans are set to true,
36 strings and integers are set to None (if permitted), and sequences
37 are emptied. Boolean values can be true, false or toggle.
38 """
39 )
40 parser.add_argument(
41 "-q", "--quiet",
42 action="store_true", dest="quiet",
43 help="Quiet."
44 )
45 parser.add_argument(
46 "-v", "--verbose",
47 action="store_const", dest="verbose", const='debug',
48 help="Increase log verbosity."
49 )
50
51 # Basic options
52 opts.make_parser(parser, "mode", short="m")
53 opts.make_parser(parser, "anticache")
54 opts.make_parser(parser, "showhost")
55 opts.make_parser(parser, "rfile", metavar="PATH", short="r")
56 opts.make_parser(parser, "scripts", metavar="SCRIPT", short="s")
57 opts.make_parser(parser, "stickycookie", metavar="FILTER")
58 opts.make_parser(parser, "stickyauth", metavar="FILTER")
59 opts.make_parser(parser, "save_stream_file", metavar="PATH", short="w")
60 opts.make_parser(parser, "anticomp")
61
62 # Proxy options
63 group = parser.add_argument_group("Proxy Options")
64 opts.make_parser(group, "listen_host", metavar="HOST")
65 opts.make_parser(group, "listen_port", metavar="PORT", short="p")
66 opts.make_parser(group, "server", short="n")
67 opts.make_parser(group, "ignore_hosts", metavar="HOST")
68 opts.make_parser(group, "tcp_hosts", metavar="HOST")
69 opts.make_parser(group, "upstream_auth", metavar="USER:PASS")
70 opts.make_parser(group, "proxyauth", metavar="SPEC")
71 opts.make_parser(group, "rawtcp")
72 opts.make_parser(group, "http2")
73
74 # Proxy SSL options
75 group = parser.add_argument_group("SSL")
76 opts.make_parser(group, "certs", metavar="SPEC")
77 opts.make_parser(group, "ssl_insecure", short="k")
78
79 # Client replay
80 group = parser.add_argument_group("Client Replay")
81 opts.make_parser(group, "client_replay", metavar="PATH", short="C")
82
83 # Server replay
84 group = parser.add_argument_group("Server Replay")
85 opts.make_parser(group, "server_replay", metavar="PATH", short="S")
86 opts.make_parser(group, "server_replay_kill_extra")
87 opts.make_parser(group, "server_replay_nopop")
88
89 # Replacements
90 group = parser.add_argument_group("Replacements")
91 opts.make_parser(group, "replacements", metavar="PATTERN", short="R")
92
93 # Set headers
94 group = parser.add_argument_group("Set Headers")
95 opts.make_parser(group, "setheaders", metavar="PATTERN", short="H")
96
97
98 def mitmproxy(opts):
99 parser = argparse.ArgumentParser(usage="%(prog)s [options]")
100 common_options(parser, opts)
101
102 opts.make_parser(parser, "console_layout")
103 opts.make_parser(parser, "console_layout_headers")
104 group = parser.add_argument_group(
105 "Filters",
106 "See help in mitmproxy for filter expression syntax."
107 )
108 opts.make_parser(group, "intercept", metavar="FILTER")
109 opts.make_parser(group, "view_filter", metavar="FILTER")
110 return parser
111
112
113 def mitmdump(opts):
114 parser = argparse.ArgumentParser(usage="%(prog)s [options] [filter]")
115
116 common_options(parser, opts)
117 opts.make_parser(parser, "flow_detail", metavar = "LEVEL")
118 parser.add_argument(
119 'filter_args',
120 nargs="...",
121 help="""
122 Filter expression, equivalent to setting both the view_filter
123 and save_stream_filter options.
124 """
125 )
126 return parser
127
128
129 def mitmweb(opts):
130 parser = argparse.ArgumentParser(usage="%(prog)s [options]")
131
132 group = parser.add_argument_group("Mitmweb")
133 opts.make_parser(group, "web_open_browser")
134 opts.make_parser(group, "web_port", metavar="PORT")
135 opts.make_parser(group, "web_iface", metavar="INTERFACE")
136
137 common_options(parser, opts)
138 group = parser.add_argument_group(
139 "Filters",
140 "See help in mitmproxy for filter expression syntax."
141 )
142 opts.make_parser(group, "intercept", metavar="FILTER")
143 return parser
144
```
Path: `mitmproxy/tools/_main.py`
Content:
```
1 """
2 This file contains python3.6+ syntax!
3 Feel free to import and use whatever new package you deem necessary.
4 """
5
6 import os
7 import sys
8 import asyncio
9 import argparse # noqa
10 import signal # noqa
11 import typing # noqa
12
13 from mitmproxy.tools import cmdline # noqa
14 from mitmproxy import exceptions, master # noqa
15 from mitmproxy import options # noqa
16 from mitmproxy import optmanager # noqa
17 from mitmproxy import proxy # noqa
18 from mitmproxy import log # noqa
19 from mitmproxy.utils import debug, arg_check # noqa
20
21 OPTIONS_FILE_NAME = "config.yaml"
22
23
24 def assert_utf8_env():
25 spec = ""
26 for i in ["LANG", "LC_CTYPE", "LC_ALL"]:
27 spec += os.environ.get(i, "").lower()
28 if "utf" not in spec:
29 print(
30 "Error: mitmproxy requires a UTF console environment.",
31 file=sys.stderr
32 )
33 print(
34 "Set your LANG environment variable to something like en_US.UTF-8",
35 file=sys.stderr
36 )
37 sys.exit(1)
38
39
40 def process_options(parser, opts, args):
41 if args.version:
42 print(debug.dump_system_info())
43 sys.exit(0)
44 if args.quiet or args.options or args.commands:
45 # also reduce log verbosity if --options or --commands is passed,
46 # we don't want log messages from regular startup then.
47 args.termlog_verbosity = 'error'
48 args.flow_detail = 0
49 if args.verbose:
50 args.termlog_verbosity = 'debug'
51 args.flow_detail = 2
52
53 adict = {}
54 for n in dir(args):
55 if n in opts:
56 adict[n] = getattr(args, n)
57 opts.merge(adict)
58
59 return proxy.config.ProxyConfig(opts)
60
61
62 def run(
63 master_cls: typing.Type[master.Master],
64 make_parser: typing.Callable[[options.Options], argparse.ArgumentParser],
65 arguments: typing.Sequence[str],
66 extra: typing.Callable[[typing.Any], dict] = None
67 ) -> master.Master: # pragma: no cover
68 """
69 extra: Extra argument processing callable which returns a dict of
70 options.
71 """
72 debug.register_info_dumpers()
73
74 opts = options.Options()
75 master = master_cls(opts)
76
77 parser = make_parser(opts)
78
79 # To make migration from 2.x to 3.0 bearable.
80 if "-R" in sys.argv and sys.argv[sys.argv.index("-R") + 1].startswith("http"):
81 print("-R is used for specifying replacements.\n"
82 "To use mitmproxy in reverse mode please use --mode reverse:SPEC instead")
83
84 try:
85 args = parser.parse_args(arguments)
86 except SystemExit:
87 arg_check.check()
88 sys.exit(1)
89 try:
90 opts.confdir = args.confdir
91 optmanager.load_paths(
92 opts,
93 os.path.join(opts.confdir, OPTIONS_FILE_NAME),
94 )
95 pconf = process_options(parser, opts, args)
96 server: typing.Any = None
97 if pconf.options.server:
98 try:
99 server = proxy.server.ProxyServer(pconf)
100 except exceptions.ServerException as v:
101 print(str(v), file=sys.stderr)
102 sys.exit(1)
103 else:
104 server = proxy.server.DummyServer(pconf)
105
106 master.server = server
107 if args.options:
108 print(optmanager.dump_defaults(opts))
109 sys.exit(0)
110 if args.commands:
111 master.commands.dump()
112 sys.exit(0)
113 opts.set(*args.setoptions, defer=True)
114 if extra:
115 opts.update(**extra(args))
116
117 loop = asyncio.get_event_loop()
118 for signame in ('SIGINT', 'SIGTERM'):
119 try:
120 loop.add_signal_handler(getattr(signal, signame), master.shutdown)
121 except NotImplementedError:
122 # Not supported on Windows
123 pass
124
125 # Make sure that we catch KeyboardInterrupts on Windows.
126 # https://stackoverflow.com/a/36925722/934719
127 if os.name == "nt":
128 async def wakeup():
129 while True:
130 await asyncio.sleep(0.2)
131 asyncio.ensure_future(wakeup())
132
133 master.run()
134 except exceptions.OptionsError as e:
135 print("%s: %s" % (sys.argv[0], e), file=sys.stderr)
136 sys.exit(1)
137 except (KeyboardInterrupt, RuntimeError):
138 pass
139 return master
140
141
142 def mitmproxy(args=None) -> typing.Optional[int]: # pragma: no cover
143 if os.name == "nt":
144 print("Error: mitmproxy's console interface is not supported on Windows. "
145 "You can run mitmdump or mitmweb instead.", file=sys.stderr)
146 return 1
147 assert_utf8_env()
148 from mitmproxy.tools import console
149 run(console.master.ConsoleMaster, cmdline.mitmproxy, args)
150 return None
151
152
153 def mitmdump(args=None) -> typing.Optional[int]: # pragma: no cover
154 from mitmproxy.tools import dump
155
156 def extra(args):
157 if args.filter_args:
158 v = " ".join(args.filter_args)
159 return dict(
160 save_stream_filter=v,
161 readfile_filter=v,
162 dumper_filter=v,
163 )
164 return {}
165
166 m = run(dump.DumpMaster, cmdline.mitmdump, args, extra)
167 if m and m.errorcheck.has_errored: # type: ignore
168 return 1
169 return None
170
171
172 def mitmweb(args=None) -> typing.Optional[int]: # pragma: no cover
173 from mitmproxy.tools import web
174 run(web.master.WebMaster, cmdline.mitmweb, args)
175 return None
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/tools/_main.py b/mitmproxy/tools/_main.py
--- a/mitmproxy/tools/_main.py
+++ b/mitmproxy/tools/_main.py
@@ -87,7 +87,7 @@
arg_check.check()
sys.exit(1)
try:
- opts.confdir = args.confdir
+ opts.set(*args.setoptions, defer=True)
optmanager.load_paths(
opts,
os.path.join(opts.confdir, OPTIONS_FILE_NAME),
@@ -110,7 +110,6 @@
if args.commands:
master.commands.dump()
sys.exit(0)
- opts.set(*args.setoptions, defer=True)
if extra:
opts.update(**extra(args))
diff --git a/mitmproxy/tools/cmdline.py b/mitmproxy/tools/cmdline.py
--- a/mitmproxy/tools/cmdline.py
+++ b/mitmproxy/tools/cmdline.py
@@ -20,12 +20,6 @@
action='store_true',
help="Show all commands and their signatures",
)
- parser.add_argument(
- "--confdir",
- type=str, dest="confdir", default=core.CONF_DIR,
- metavar="PATH",
- help="Path to the mitmproxy config directory"
- )
parser.add_argument(
"--set",
type=str, dest="setoptions", default=[],
| {"golden_diff": "diff --git a/mitmproxy/tools/_main.py b/mitmproxy/tools/_main.py\n--- a/mitmproxy/tools/_main.py\n+++ b/mitmproxy/tools/_main.py\n@@ -87,7 +87,7 @@\n arg_check.check()\n sys.exit(1)\n try:\n- opts.confdir = args.confdir\n+ opts.set(*args.setoptions, defer=True)\n optmanager.load_paths(\n opts,\n os.path.join(opts.confdir, OPTIONS_FILE_NAME),\n@@ -110,7 +110,6 @@\n if args.commands:\n master.commands.dump()\n sys.exit(0)\n- opts.set(*args.setoptions, defer=True)\n if extra:\n opts.update(**extra(args))\n \ndiff --git a/mitmproxy/tools/cmdline.py b/mitmproxy/tools/cmdline.py\n--- a/mitmproxy/tools/cmdline.py\n+++ b/mitmproxy/tools/cmdline.py\n@@ -20,12 +20,6 @@\n action='store_true',\n help=\"Show all commands and their signatures\",\n )\n- parser.add_argument(\n- \"--confdir\",\n- type=str, dest=\"confdir\", default=core.CONF_DIR,\n- metavar=\"PATH\",\n- help=\"Path to the mitmproxy config directory\"\n- )\n parser.add_argument(\n \"--set\",\n type=str, dest=\"setoptions\", default=[],\n", "issue": "Confusing confdir argument\nMy tests are messy cause I used mitmproxy and mitmdump, both of them in 4.0.1 and 4.0.3 and now I mixed all cases.\r\n\r\nAt some point, I think I had an error saying that --confdir was deprecated and had to use \"--set confdir=\" (I can't reproduce this case though with mitmproxy or mitmdump...)\r\n\r\nI spent some time to try to make \"--set confdir=\" work in a weird bash script and arguments with quotes and maybe it failed due to me.\r\nBut I realized --confdir was still working eventually for mitmdump in 4.0.3.\r\n\r\nQuestion to sum up:\r\nIs it attended to have both \"--confdir\" and \"--set confdir=\" working at the same time for mitmproxy & mitmdump?\r\nIf yes, help (-h) should specify it clearly with something like: \"--confdir PATH, --set confdir=PATH\"\r\nIf not, one of them should be deleted.\n", "before_files": [{"content": "import argparse\n\nfrom mitmproxy.addons import core\n\n\ndef common_options(parser, opts):\n parser.add_argument(\n '--version',\n action='store_true',\n help=\"show version number and exit\",\n dest='version',\n )\n parser.add_argument(\n '--options',\n action='store_true',\n help=\"Show all options and their default values\",\n )\n parser.add_argument(\n '--commands',\n action='store_true',\n help=\"Show all commands and their signatures\",\n )\n parser.add_argument(\n \"--confdir\",\n type=str, dest=\"confdir\", default=core.CONF_DIR,\n metavar=\"PATH\",\n help=\"Path to the mitmproxy config directory\"\n )\n parser.add_argument(\n \"--set\",\n type=str, dest=\"setoptions\", default=[],\n action=\"append\",\n metavar=\"option[=value]\",\n help=\"\"\"\n Set an option. When the value is omitted, booleans are set to true,\n strings and integers are set to None (if permitted), and sequences\n are emptied. 
Boolean values can be true, false or toggle.\n \"\"\"\n )\n parser.add_argument(\n \"-q\", \"--quiet\",\n action=\"store_true\", dest=\"quiet\",\n help=\"Quiet.\"\n )\n parser.add_argument(\n \"-v\", \"--verbose\",\n action=\"store_const\", dest=\"verbose\", const='debug',\n help=\"Increase log verbosity.\"\n )\n\n # Basic options\n opts.make_parser(parser, \"mode\", short=\"m\")\n opts.make_parser(parser, \"anticache\")\n opts.make_parser(parser, \"showhost\")\n opts.make_parser(parser, \"rfile\", metavar=\"PATH\", short=\"r\")\n opts.make_parser(parser, \"scripts\", metavar=\"SCRIPT\", short=\"s\")\n opts.make_parser(parser, \"stickycookie\", metavar=\"FILTER\")\n opts.make_parser(parser, \"stickyauth\", metavar=\"FILTER\")\n opts.make_parser(parser, \"save_stream_file\", metavar=\"PATH\", short=\"w\")\n opts.make_parser(parser, \"anticomp\")\n\n # Proxy options\n group = parser.add_argument_group(\"Proxy Options\")\n opts.make_parser(group, \"listen_host\", metavar=\"HOST\")\n opts.make_parser(group, \"listen_port\", metavar=\"PORT\", short=\"p\")\n opts.make_parser(group, \"server\", short=\"n\")\n opts.make_parser(group, \"ignore_hosts\", metavar=\"HOST\")\n opts.make_parser(group, \"tcp_hosts\", metavar=\"HOST\")\n opts.make_parser(group, \"upstream_auth\", metavar=\"USER:PASS\")\n opts.make_parser(group, \"proxyauth\", metavar=\"SPEC\")\n opts.make_parser(group, \"rawtcp\")\n opts.make_parser(group, \"http2\")\n\n # Proxy SSL options\n group = parser.add_argument_group(\"SSL\")\n opts.make_parser(group, \"certs\", metavar=\"SPEC\")\n opts.make_parser(group, \"ssl_insecure\", short=\"k\")\n\n # Client replay\n group = parser.add_argument_group(\"Client Replay\")\n opts.make_parser(group, \"client_replay\", metavar=\"PATH\", short=\"C\")\n\n # Server replay\n group = parser.add_argument_group(\"Server Replay\")\n opts.make_parser(group, \"server_replay\", metavar=\"PATH\", short=\"S\")\n opts.make_parser(group, \"server_replay_kill_extra\")\n opts.make_parser(group, \"server_replay_nopop\")\n\n # Replacements\n group = parser.add_argument_group(\"Replacements\")\n opts.make_parser(group, \"replacements\", metavar=\"PATTERN\", short=\"R\")\n\n # Set headers\n group = parser.add_argument_group(\"Set Headers\")\n opts.make_parser(group, \"setheaders\", metavar=\"PATTERN\", short=\"H\")\n\n\ndef mitmproxy(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options]\")\n common_options(parser, opts)\n\n opts.make_parser(parser, \"console_layout\")\n opts.make_parser(parser, \"console_layout_headers\")\n group = parser.add_argument_group(\n \"Filters\",\n \"See help in mitmproxy for filter expression syntax.\"\n )\n opts.make_parser(group, \"intercept\", metavar=\"FILTER\")\n opts.make_parser(group, \"view_filter\", metavar=\"FILTER\")\n return parser\n\n\ndef mitmdump(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options] [filter]\")\n\n common_options(parser, opts)\n opts.make_parser(parser, \"flow_detail\", metavar = \"LEVEL\")\n parser.add_argument(\n 'filter_args',\n nargs=\"...\",\n help=\"\"\"\n Filter expression, equivalent to setting both the view_filter\n and save_stream_filter options.\n \"\"\"\n )\n return parser\n\n\ndef mitmweb(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options]\")\n\n group = parser.add_argument_group(\"Mitmweb\")\n opts.make_parser(group, \"web_open_browser\")\n opts.make_parser(group, \"web_port\", metavar=\"PORT\")\n opts.make_parser(group, \"web_iface\", metavar=\"INTERFACE\")\n\n common_options(parser, opts)\n group 
= parser.add_argument_group(\n \"Filters\",\n \"See help in mitmproxy for filter expression syntax.\"\n )\n opts.make_parser(group, \"intercept\", metavar=\"FILTER\")\n return parser\n", "path": "mitmproxy/tools/cmdline.py"}, {"content": "\"\"\"\nThis file contains python3.6+ syntax!\nFeel free to import and use whatever new package you deem necessary.\n\"\"\"\n\nimport os\nimport sys\nimport asyncio\nimport argparse # noqa\nimport signal # noqa\nimport typing # noqa\n\nfrom mitmproxy.tools import cmdline # noqa\nfrom mitmproxy import exceptions, master # noqa\nfrom mitmproxy import options # noqa\nfrom mitmproxy import optmanager # noqa\nfrom mitmproxy import proxy # noqa\nfrom mitmproxy import log # noqa\nfrom mitmproxy.utils import debug, arg_check # noqa\n\nOPTIONS_FILE_NAME = \"config.yaml\"\n\n\ndef assert_utf8_env():\n spec = \"\"\n for i in [\"LANG\", \"LC_CTYPE\", \"LC_ALL\"]:\n spec += os.environ.get(i, \"\").lower()\n if \"utf\" not in spec:\n print(\n \"Error: mitmproxy requires a UTF console environment.\",\n file=sys.stderr\n )\n print(\n \"Set your LANG environment variable to something like en_US.UTF-8\",\n file=sys.stderr\n )\n sys.exit(1)\n\n\ndef process_options(parser, opts, args):\n if args.version:\n print(debug.dump_system_info())\n sys.exit(0)\n if args.quiet or args.options or args.commands:\n # also reduce log verbosity if --options or --commands is passed,\n # we don't want log messages from regular startup then.\n args.termlog_verbosity = 'error'\n args.flow_detail = 0\n if args.verbose:\n args.termlog_verbosity = 'debug'\n args.flow_detail = 2\n\n adict = {}\n for n in dir(args):\n if n in opts:\n adict[n] = getattr(args, n)\n opts.merge(adict)\n\n return proxy.config.ProxyConfig(opts)\n\n\ndef run(\n master_cls: typing.Type[master.Master],\n make_parser: typing.Callable[[options.Options], argparse.ArgumentParser],\n arguments: typing.Sequence[str],\n extra: typing.Callable[[typing.Any], dict] = None\n) -> master.Master: # pragma: no cover\n \"\"\"\n extra: Extra argument processing callable which returns a dict of\n options.\n \"\"\"\n debug.register_info_dumpers()\n\n opts = options.Options()\n master = master_cls(opts)\n\n parser = make_parser(opts)\n\n # To make migration from 2.x to 3.0 bearable.\n if \"-R\" in sys.argv and sys.argv[sys.argv.index(\"-R\") + 1].startswith(\"http\"):\n print(\"-R is used for specifying replacements.\\n\"\n \"To use mitmproxy in reverse mode please use --mode reverse:SPEC instead\")\n\n try:\n args = parser.parse_args(arguments)\n except SystemExit:\n arg_check.check()\n sys.exit(1)\n try:\n opts.confdir = args.confdir\n optmanager.load_paths(\n opts,\n os.path.join(opts.confdir, OPTIONS_FILE_NAME),\n )\n pconf = process_options(parser, opts, args)\n server: typing.Any = None\n if pconf.options.server:\n try:\n server = proxy.server.ProxyServer(pconf)\n except exceptions.ServerException as v:\n print(str(v), file=sys.stderr)\n sys.exit(1)\n else:\n server = proxy.server.DummyServer(pconf)\n\n master.server = server\n if args.options:\n print(optmanager.dump_defaults(opts))\n sys.exit(0)\n if args.commands:\n master.commands.dump()\n sys.exit(0)\n opts.set(*args.setoptions, defer=True)\n if extra:\n opts.update(**extra(args))\n\n loop = asyncio.get_event_loop()\n for signame in ('SIGINT', 'SIGTERM'):\n try:\n loop.add_signal_handler(getattr(signal, signame), master.shutdown)\n except NotImplementedError:\n # Not supported on Windows\n pass\n\n # Make sure that we catch KeyboardInterrupts on Windows.\n # 
https://stackoverflow.com/a/36925722/934719\n if os.name == \"nt\":\n async def wakeup():\n while True:\n await asyncio.sleep(0.2)\n asyncio.ensure_future(wakeup())\n\n master.run()\n except exceptions.OptionsError as e:\n print(\"%s: %s\" % (sys.argv[0], e), file=sys.stderr)\n sys.exit(1)\n except (KeyboardInterrupt, RuntimeError):\n pass\n return master\n\n\ndef mitmproxy(args=None) -> typing.Optional[int]: # pragma: no cover\n if os.name == \"nt\":\n print(\"Error: mitmproxy's console interface is not supported on Windows. \"\n \"You can run mitmdump or mitmweb instead.\", file=sys.stderr)\n return 1\n assert_utf8_env()\n from mitmproxy.tools import console\n run(console.master.ConsoleMaster, cmdline.mitmproxy, args)\n return None\n\n\ndef mitmdump(args=None) -> typing.Optional[int]: # pragma: no cover\n from mitmproxy.tools import dump\n\n def extra(args):\n if args.filter_args:\n v = \" \".join(args.filter_args)\n return dict(\n save_stream_filter=v,\n readfile_filter=v,\n dumper_filter=v,\n )\n return {}\n\n m = run(dump.DumpMaster, cmdline.mitmdump, args, extra)\n if m and m.errorcheck.has_errored: # type: ignore\n return 1\n return None\n\n\ndef mitmweb(args=None) -> typing.Optional[int]: # pragma: no cover\n from mitmproxy.tools import web\n run(web.master.WebMaster, cmdline.mitmweb, args)\n return None\n", "path": "mitmproxy/tools/_main.py"}], "after_files": [{"content": "import argparse\n\nfrom mitmproxy.addons import core\n\n\ndef common_options(parser, opts):\n parser.add_argument(\n '--version',\n action='store_true',\n help=\"show version number and exit\",\n dest='version',\n )\n parser.add_argument(\n '--options',\n action='store_true',\n help=\"Show all options and their default values\",\n )\n parser.add_argument(\n '--commands',\n action='store_true',\n help=\"Show all commands and their signatures\",\n )\n parser.add_argument(\n \"--set\",\n type=str, dest=\"setoptions\", default=[],\n action=\"append\",\n metavar=\"option[=value]\",\n help=\"\"\"\n Set an option. When the value is omitted, booleans are set to true,\n strings and integers are set to None (if permitted), and sequences\n are emptied. 
Boolean values can be true, false or toggle.\n \"\"\"\n )\n parser.add_argument(\n \"-q\", \"--quiet\",\n action=\"store_true\", dest=\"quiet\",\n help=\"Quiet.\"\n )\n parser.add_argument(\n \"-v\", \"--verbose\",\n action=\"store_const\", dest=\"verbose\", const='debug',\n help=\"Increase log verbosity.\"\n )\n\n # Basic options\n opts.make_parser(parser, \"mode\", short=\"m\")\n opts.make_parser(parser, \"anticache\")\n opts.make_parser(parser, \"showhost\")\n opts.make_parser(parser, \"rfile\", metavar=\"PATH\", short=\"r\")\n opts.make_parser(parser, \"scripts\", metavar=\"SCRIPT\", short=\"s\")\n opts.make_parser(parser, \"stickycookie\", metavar=\"FILTER\")\n opts.make_parser(parser, \"stickyauth\", metavar=\"FILTER\")\n opts.make_parser(parser, \"save_stream_file\", metavar=\"PATH\", short=\"w\")\n opts.make_parser(parser, \"anticomp\")\n\n # Proxy options\n group = parser.add_argument_group(\"Proxy Options\")\n opts.make_parser(group, \"listen_host\", metavar=\"HOST\")\n opts.make_parser(group, \"listen_port\", metavar=\"PORT\", short=\"p\")\n opts.make_parser(group, \"server\", short=\"n\")\n opts.make_parser(group, \"ignore_hosts\", metavar=\"HOST\")\n opts.make_parser(group, \"tcp_hosts\", metavar=\"HOST\")\n opts.make_parser(group, \"upstream_auth\", metavar=\"USER:PASS\")\n opts.make_parser(group, \"proxyauth\", metavar=\"SPEC\")\n opts.make_parser(group, \"rawtcp\")\n opts.make_parser(group, \"http2\")\n\n # Proxy SSL options\n group = parser.add_argument_group(\"SSL\")\n opts.make_parser(group, \"certs\", metavar=\"SPEC\")\n opts.make_parser(group, \"ssl_insecure\", short=\"k\")\n\n # Client replay\n group = parser.add_argument_group(\"Client Replay\")\n opts.make_parser(group, \"client_replay\", metavar=\"PATH\", short=\"C\")\n\n # Server replay\n group = parser.add_argument_group(\"Server Replay\")\n opts.make_parser(group, \"server_replay\", metavar=\"PATH\", short=\"S\")\n opts.make_parser(group, \"server_replay_kill_extra\")\n opts.make_parser(group, \"server_replay_nopop\")\n\n # Replacements\n group = parser.add_argument_group(\"Replacements\")\n opts.make_parser(group, \"replacements\", metavar=\"PATTERN\", short=\"R\")\n\n # Set headers\n group = parser.add_argument_group(\"Set Headers\")\n opts.make_parser(group, \"setheaders\", metavar=\"PATTERN\", short=\"H\")\n\n\ndef mitmproxy(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options]\")\n common_options(parser, opts)\n\n opts.make_parser(parser, \"console_layout\")\n opts.make_parser(parser, \"console_layout_headers\")\n group = parser.add_argument_group(\n \"Filters\",\n \"See help in mitmproxy for filter expression syntax.\"\n )\n opts.make_parser(group, \"intercept\", metavar=\"FILTER\")\n opts.make_parser(group, \"view_filter\", metavar=\"FILTER\")\n return parser\n\n\ndef mitmdump(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options] [filter]\")\n\n common_options(parser, opts)\n opts.make_parser(parser, \"flow_detail\", metavar = \"LEVEL\")\n parser.add_argument(\n 'filter_args',\n nargs=\"...\",\n help=\"\"\"\n Filter expression, equivalent to setting both the view_filter\n and save_stream_filter options.\n \"\"\"\n )\n return parser\n\n\ndef mitmweb(opts):\n parser = argparse.ArgumentParser(usage=\"%(prog)s [options]\")\n\n group = parser.add_argument_group(\"Mitmweb\")\n opts.make_parser(group, \"web_open_browser\")\n opts.make_parser(group, \"web_port\", metavar=\"PORT\")\n opts.make_parser(group, \"web_iface\", metavar=\"INTERFACE\")\n\n common_options(parser, opts)\n group 
= parser.add_argument_group(\n \"Filters\",\n \"See help in mitmproxy for filter expression syntax.\"\n )\n opts.make_parser(group, \"intercept\", metavar=\"FILTER\")\n return parser\n", "path": "mitmproxy/tools/cmdline.py"}, {"content": "\"\"\"\nThis file contains python3.6+ syntax!\nFeel free to import and use whatever new package you deem necessary.\n\"\"\"\n\nimport os\nimport sys\nimport asyncio\nimport argparse # noqa\nimport signal # noqa\nimport typing # noqa\n\nfrom mitmproxy.tools import cmdline # noqa\nfrom mitmproxy import exceptions, master # noqa\nfrom mitmproxy import options # noqa\nfrom mitmproxy import optmanager # noqa\nfrom mitmproxy import proxy # noqa\nfrom mitmproxy import log # noqa\nfrom mitmproxy.utils import debug, arg_check # noqa\n\nOPTIONS_FILE_NAME = \"config.yaml\"\n\n\ndef assert_utf8_env():\n spec = \"\"\n for i in [\"LANG\", \"LC_CTYPE\", \"LC_ALL\"]:\n spec += os.environ.get(i, \"\").lower()\n if \"utf\" not in spec:\n print(\n \"Error: mitmproxy requires a UTF console environment.\",\n file=sys.stderr\n )\n print(\n \"Set your LANG environment variable to something like en_US.UTF-8\",\n file=sys.stderr\n )\n sys.exit(1)\n\n\ndef process_options(parser, opts, args):\n if args.version:\n print(debug.dump_system_info())\n sys.exit(0)\n if args.quiet or args.options or args.commands:\n # also reduce log verbosity if --options or --commands is passed,\n # we don't want log messages from regular startup then.\n args.termlog_verbosity = 'error'\n args.flow_detail = 0\n if args.verbose:\n args.termlog_verbosity = 'debug'\n args.flow_detail = 2\n\n adict = {}\n for n in dir(args):\n if n in opts:\n adict[n] = getattr(args, n)\n opts.merge(adict)\n\n return proxy.config.ProxyConfig(opts)\n\n\ndef run(\n master_cls: typing.Type[master.Master],\n make_parser: typing.Callable[[options.Options], argparse.ArgumentParser],\n arguments: typing.Sequence[str],\n extra: typing.Callable[[typing.Any], dict] = None\n) -> master.Master: # pragma: no cover\n \"\"\"\n extra: Extra argument processing callable which returns a dict of\n options.\n \"\"\"\n debug.register_info_dumpers()\n\n opts = options.Options()\n master = master_cls(opts)\n\n parser = make_parser(opts)\n\n # To make migration from 2.x to 3.0 bearable.\n if \"-R\" in sys.argv and sys.argv[sys.argv.index(\"-R\") + 1].startswith(\"http\"):\n print(\"-R is used for specifying replacements.\\n\"\n \"To use mitmproxy in reverse mode please use --mode reverse:SPEC instead\")\n\n try:\n args = parser.parse_args(arguments)\n except SystemExit:\n arg_check.check()\n sys.exit(1)\n try:\n opts.set(*args.setoptions, defer=True)\n optmanager.load_paths(\n opts,\n os.path.join(opts.confdir, OPTIONS_FILE_NAME),\n )\n pconf = process_options(parser, opts, args)\n server: typing.Any = None\n if pconf.options.server:\n try:\n server = proxy.server.ProxyServer(pconf)\n except exceptions.ServerException as v:\n print(str(v), file=sys.stderr)\n sys.exit(1)\n else:\n server = proxy.server.DummyServer(pconf)\n\n master.server = server\n if args.options:\n print(optmanager.dump_defaults(opts))\n sys.exit(0)\n if args.commands:\n master.commands.dump()\n sys.exit(0)\n if extra:\n opts.update(**extra(args))\n\n loop = asyncio.get_event_loop()\n for signame in ('SIGINT', 'SIGTERM'):\n try:\n loop.add_signal_handler(getattr(signal, signame), master.shutdown)\n except NotImplementedError:\n # Not supported on Windows\n pass\n\n # Make sure that we catch KeyboardInterrupts on Windows.\n # https://stackoverflow.com/a/36925722/934719\n if 
os.name == \"nt\":\n async def wakeup():\n while True:\n await asyncio.sleep(0.2)\n asyncio.ensure_future(wakeup())\n\n master.run()\n except exceptions.OptionsError as e:\n print(\"%s: %s\" % (sys.argv[0], e), file=sys.stderr)\n sys.exit(1)\n except (KeyboardInterrupt, RuntimeError):\n pass\n return master\n\n\ndef mitmproxy(args=None) -> typing.Optional[int]: # pragma: no cover\n if os.name == \"nt\":\n print(\"Error: mitmproxy's console interface is not supported on Windows. \"\n \"You can run mitmdump or mitmweb instead.\", file=sys.stderr)\n return 1\n assert_utf8_env()\n from mitmproxy.tools import console\n run(console.master.ConsoleMaster, cmdline.mitmproxy, args)\n return None\n\n\ndef mitmdump(args=None) -> typing.Optional[int]: # pragma: no cover\n from mitmproxy.tools import dump\n\n def extra(args):\n if args.filter_args:\n v = \" \".join(args.filter_args)\n return dict(\n save_stream_filter=v,\n readfile_filter=v,\n dumper_filter=v,\n )\n return {}\n\n m = run(dump.DumpMaster, cmdline.mitmdump, args, extra)\n if m and m.errorcheck.has_errored: # type: ignore\n return 1\n return None\n\n\ndef mitmweb(args=None) -> typing.Optional[int]: # pragma: no cover\n from mitmproxy.tools import web\n run(web.master.WebMaster, cmdline.mitmweb, args)\n return None\n", "path": "mitmproxy/tools/_main.py"}]} | 3,608 | 306 |
gh_patches_debug_15965 | rasdani/github-patches | git_diff | mozilla__bugbug-121 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add 'priority' feature
In bugbug/bug_features.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/tracking.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from sklearn.compose import ColumnTransformer
8 from sklearn.feature_extraction import DictVectorizer
9 from sklearn.pipeline import Pipeline
10
11 from bugbug import bug_features
12 from bugbug import bugzilla
13 from bugbug.model import Model
14
15
16 class TrackingModel(Model):
17 def __init__(self, lemmatization=False):
18 Model.__init__(self, lemmatization)
19
20 feature_extractors = [
21 bug_features.has_str(),
22 bug_features.has_regression_range(),
23 bug_features.severity(),
24 bug_features.keywords(),
25 bug_features.is_coverity_issue(),
26 bug_features.has_crash_signature(),
27 bug_features.has_url(),
28 bug_features.has_w3c_url(),
29 bug_features.has_github_url(),
30 bug_features.whiteboard(),
31 bug_features.patches(),
32 bug_features.landings(),
33 bug_features.title(),
34 ]
35
36 cleanup_functions = [
37 bug_features.cleanup_fileref,
38 bug_features.cleanup_url,
39 bug_features.cleanup_synonyms,
40 ]
41
42 self.extraction_pipeline = Pipeline([
43 ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),
44 ('union', ColumnTransformer([
45 ('data', DictVectorizer(), 'data'),
46
47 ('title', self.text_vectorizer(stop_words='english'), 'title'),
48
49 ('comments', self.text_vectorizer(stop_words='english'), 'comments'),
50 ])),
51 ])
52
53 self.clf = xgboost.XGBClassifier(n_jobs=16)
54 self.clf.set_params(predictor='cpu_predictor')
55
56 def rollback(self, change):
57 return change['field_name'].startswith('cf_tracking_firefox')
58
59 def get_labels(self):
60 classes = {}
61
62 for bug_data in bugzilla.get_bugs():
63 bug_id = int(bug_data['id'])
64
65 for entry in bug_data['history']:
66 for change in entry['changes']:
67 if change['field_name'].startswith('cf_tracking_firefox'):
68 if change['added'] in ['blocking', '+']:
69 classes[bug_id] = 1
70 elif change['added'] == '-':
71 classes[bug_id] = 0
72
73 if bug_id not in classes:
74 classes[bug_id] = 0
75
76 return classes
77
78 def get_feature_names(self):
79 return self.extraction_pipeline.named_steps['union'].get_feature_names()
80
```
Path: `bugbug/bug_features.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import re
7 from datetime import datetime
8 from datetime import timezone
9
10 import pandas as pd
11 from libmozdata import versions
12 from sklearn.base import BaseEstimator
13 from sklearn.base import TransformerMixin
14
15 from bugbug import bug_snapshot
16 from bugbug import repository
17
18
19 def field(bug, field):
20 if field in bug and bug[field] != '---':
21 return bug[field]
22
23 return None
24
25
26 class has_str(object):
27 def __call__(self, bug):
28 return field(bug, 'cf_has_str')
29
30
31 class has_regression_range(object):
32 def __call__(self, bug):
33 return field(bug, 'cf_has_regression_range')
34
35
36 class has_crash_signature(object):
37 def __call__(self, bug):
38 return 'cf_crash_signature' in bug and bug['cf_crash_signature'] != ''
39
40
41 class keywords(object):
42 def __init__(self, to_ignore=set()):
43 self.to_ignore = to_ignore
44
45 def __call__(self, bug):
46 keywords = []
47 subkeywords = []
48 for keyword in bug['keywords']:
49 if keyword in self.to_ignore:
50 continue
51
52 keywords.append(keyword)
53
54 if keyword.startswith('sec-'):
55 subkeywords.append('sec-')
56 elif keyword.startswith('csectype-'):
57 subkeywords.append('csectype-')
58 return keywords + subkeywords
59
60
61 class severity(object):
62 def __call__(self, bug):
63 return field(bug, 'severity')
64
65
66 class is_coverity_issue(object):
67 def __call__(self, bug):
68 return re.search('[CID ?[0-9]+]', bug['summary']) is not None or re.search('[CID ?[0-9]+]', bug['whiteboard']) is not None
69
70
71 class has_url(object):
72 def __call__(self, bug):
73 return bug['url'] != ''
74
75
76 class has_w3c_url(object):
77 def __call__(self, bug):
78 return 'w3c' in bug['url']
79
80
81 class has_github_url(object):
82 def __call__(self, bug):
83 return 'github' in bug['url']
84
85
86 class whiteboard(object):
87 def __call__(self, bug):
88 ret = []
89
90 # TODO: Add any [XXX:YYY] that appears in the whiteboard as [XXX: only
91
92 for elem in ['memshrink', '[ux]']:
93 if elem in bug['whiteboard'].lower():
94 ret.append(elem)
95
96 return ret
97
98
99 class patches(object):
100 def __call__(self, bug):
101 return sum(1 for a in bug['attachments'] if a['is_patch'] or a['content_type'] in ['text/x-review-board-request', 'text/x-phabricator-request'])
102
103
104 class landings(object):
105 def __call__(self, bug):
106 return sum(1 for c in bug['comments'] if '://hg.mozilla.org/' in c['text'])
107
108
109 class title(object):
110 def __call__(self, bug):
111 ret = []
112
113 keywords = [
114 'fail',
115 ]
116 for keyword in keywords:
117 if keyword in bug['summary'].lower():
118 ret.append(keyword)
119
120 return ret
121
122
123 class product(object):
124 def __call__(self, bug):
125 return bug['product']
126
127
128 class component(object):
129 def __call__(self, bug):
130 return bug['component']
131
132
133 class is_mozillian(object):
134 def __call__(self, bug):
135 return any(bug['creator_detail']['email'].endswith(domain) for domain in ['@mozilla.com', '@mozilla.org'])
136
137
138 class delta_request_merge(object):
139 def __call__(self, bug):
140 for history in bug['history']:
141 for change in history['changes']:
142 if change['added'].startswith('approval-mozilla'):
143 uplift_request_datetime = datetime.strptime(history['when'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)
144 timedelta = versions.getCloserRelease(uplift_request_datetime)[1] - uplift_request_datetime
145 return timedelta.days + timedelta.seconds / (24 * 60 * 60)
146
147 return None
148
149
150 class commit_added(object):
151 def __call__(self, bug):
152 return sum(commit['added'] for commit in bug['commits'])
153
154
155 class commit_deleted(object):
156 def __call__(self, bug):
157 return sum(commit['deleted'] for commit in bug['commits'])
158
159
160 class commit_types(object):
161 def __call__(self, bug):
162 return sum((commit['types'] for commit in bug['commits']), [])
163
164
165 def cleanup_url(text):
166 text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\S+', '__CODE_REFERENCE_URL__', text)
167 return re.sub(r'http\S+', '__URL__', text)
168
169
170 def cleanup_fileref(text):
171 return re.sub(r'\w+\.py\b|\w+\.json\b|\w+\.js\b|\w+\.jsm\b|\w+\.html\b|\w+\.css\b|\w+\.c\b|\w+\.cpp\b|\w+\.h\b', '__FILE_REFERENCE__', text)
172
173
174 def cleanup_responses(text):
175 return re.sub('>[^\n]+', ' ', text)
176
177
178 def cleanup_hex(text):
179 return re.sub(r'\b0[xX][0-9a-fA-F]+\b', '__HEX_NUMBER__', text)
180
181
182 def cleanup_dll(text):
183 return re.sub(r'\w+(\.dll|\.so|\.dylib)\b', '__DLL_NAME__', text)
184
185
186 def cleanup_synonyms(text):
187 synonyms = [
188 ('safemode', ['safemode', 'safe mode']),
189 ('str', ['str', 'steps to reproduce', 'repro steps']),
190 ('uaf', ['uaf', 'use after free', 'use-after-free']),
191 ('asan', ['asan', 'address sanitizer', 'addresssanitizer']),
192 ('permafailure', ['permafailure', 'permafailing', 'permafail', 'perma failure', 'perma failing', 'perma fail', 'perma-failure', 'perma-failing', 'perma-fail']),
193 ('spec', ['spec', 'specification']),
194 ]
195
196 for synonym_group, synonym_list in synonyms:
197 text = re.sub('|'.join(fr'\b{synonym}\b' for synonym in synonym_list), synonym_group, text, flags=re.IGNORECASE)
198
199 return text
200
201
202 def cleanup_crash(text):
203 return re.sub(r'bp-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{6}[0-9]{6}\b', '__CRASH_STATS_LINK__', text)
204
205
206 class BugExtractor(BaseEstimator, TransformerMixin):
207 def __init__(self, feature_extractors, cleanup_functions, rollback=False, rollback_when=None, commit_data=False):
208 self.feature_extractors = feature_extractors
209 self.cleanup_functions = cleanup_functions
210 self.rollback = rollback
211 self.rollback_when = rollback_when
212 self.commit_map = repository.get_commit_map() if commit_data else None
213
214 def fit(self, x, y=None):
215 return self
216
217 def transform(self, bugs):
218 results = []
219
220 for bug in bugs:
221 bug_id = bug['id']
222
223 if self.rollback:
224 bug = bug_snapshot.rollback(bug, self.rollback_when)
225
226 data = {}
227
228 if self.commit_map is not None:
229 if bug_id in self.commit_map:
230 bug['commits'] = self.commit_map[bug_id]
231 else:
232 bug['commits'] = []
233
234 for f in self.feature_extractors:
235 res = f(bug)
236
237 if res is None:
238 continue
239
240 if isinstance(res, list):
241 for item in res:
242 data[f.__class__.__name__ + '-' + item] = 'True'
243 continue
244
245 if isinstance(res, bool):
246 res = str(res)
247
248 data[f.__class__.__name__] = res
249
250 # TODO: Try simply using all possible fields instead of extracting features manually.
251
252 for cleanup_function in self.cleanup_functions:
253 bug['summary'] = cleanup_function(bug['summary'])
254 for c in bug['comments']:
255 c['text'] = cleanup_function(c['text'])
256
257 result = {
258 'data': data,
259 'title': bug['summary'],
260 'first_comment': bug['comments'][0]['text'],
261 'comments': ' '.join([c['text'] for c in bug['comments']]),
262 }
263
264 results.append(result)
265
266 return pd.DataFrame(results)
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/bug_features.py b/bugbug/bug_features.py
--- a/bugbug/bug_features.py
+++ b/bugbug/bug_features.py
@@ -162,6 +162,11 @@
return sum((commit['types'] for commit in bug['commits']), [])
+class priority(object):
+ def __call__(self, bug):
+ return bug['priority']
+
+
def cleanup_url(text):
text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\S+', '__CODE_REFERENCE_URL__', text)
return re.sub(r'http\S+', '__URL__', text)
diff --git a/bugbug/models/tracking.py b/bugbug/models/tracking.py
--- a/bugbug/models/tracking.py
+++ b/bugbug/models/tracking.py
@@ -31,6 +31,7 @@
bug_features.patches(),
bug_features.landings(),
bug_features.title(),
+ bug_features.priority(),
]
cleanup_functions = [
| {"golden_diff": "diff --git a/bugbug/bug_features.py b/bugbug/bug_features.py\n--- a/bugbug/bug_features.py\n+++ b/bugbug/bug_features.py\n@@ -162,6 +162,11 @@\n return sum((commit['types'] for commit in bug['commits']), [])\n \n \n+class priority(object):\n+ def __call__(self, bug):\n+ return bug['priority']\n+\n+\n def cleanup_url(text):\n text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\\S+', '__CODE_REFERENCE_URL__', text)\n return re.sub(r'http\\S+', '__URL__', text)\ndiff --git a/bugbug/models/tracking.py b/bugbug/models/tracking.py\n--- a/bugbug/models/tracking.py\n+++ b/bugbug/models/tracking.py\n@@ -31,6 +31,7 @@\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n+ bug_features.priority(),\n ]\n \n cleanup_functions = [\n", "issue": "Add 'priority' feature\nIn bugbug/bug_features.py\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass TrackingModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(stop_words='english'), 'title'),\n\n ('comments', self.text_vectorizer(stop_words='english'), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def rollback(self, change):\n return change['field_name'].startswith('cf_tracking_firefox')\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for entry in bug_data['history']:\n for change in entry['changes']:\n if change['field_name'].startswith('cf_tracking_firefox'):\n if change['added'] in ['blocking', '+']:\n classes[bug_id] = 1\n elif change['added'] == '-':\n classes[bug_id] = 0\n\n if bug_id not in classes:\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/tracking.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport re\nfrom datetime import datetime\nfrom datetime import timezone\n\nimport pandas as pd\nfrom libmozdata import versions\nfrom sklearn.base import BaseEstimator\nfrom sklearn.base import TransformerMixin\n\nfrom bugbug import bug_snapshot\nfrom bugbug import repository\n\n\ndef field(bug, field):\n if field in bug and bug[field] != '---':\n return bug[field]\n\n return None\n\n\nclass has_str(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_str')\n\n\nclass has_regression_range(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_regression_range')\n\n\nclass has_crash_signature(object):\n def __call__(self, bug):\n return 'cf_crash_signature' in bug and bug['cf_crash_signature'] != ''\n\n\nclass keywords(object):\n def __init__(self, to_ignore=set()):\n self.to_ignore = to_ignore\n\n def __call__(self, bug):\n keywords = []\n subkeywords = []\n for keyword in bug['keywords']:\n if keyword in self.to_ignore:\n continue\n\n keywords.append(keyword)\n\n if keyword.startswith('sec-'):\n subkeywords.append('sec-')\n elif keyword.startswith('csectype-'):\n subkeywords.append('csectype-')\n return keywords + subkeywords\n\n\nclass severity(object):\n def __call__(self, bug):\n return field(bug, 'severity')\n\n\nclass is_coverity_issue(object):\n def __call__(self, bug):\n return re.search('[CID ?[0-9]+]', bug['summary']) is not None or re.search('[CID ?[0-9]+]', bug['whiteboard']) is not None\n\n\nclass has_url(object):\n def __call__(self, bug):\n return bug['url'] != ''\n\n\nclass has_w3c_url(object):\n def __call__(self, bug):\n return 'w3c' in bug['url']\n\n\nclass has_github_url(object):\n def __call__(self, bug):\n return 'github' in bug['url']\n\n\nclass whiteboard(object):\n def __call__(self, bug):\n ret = []\n\n # TODO: Add any [XXX:YYY] that appears in the whiteboard as [XXX: only\n\n for elem in ['memshrink', '[ux]']:\n if elem in bug['whiteboard'].lower():\n ret.append(elem)\n\n return ret\n\n\nclass patches(object):\n def __call__(self, bug):\n return sum(1 for a in bug['attachments'] if a['is_patch'] or a['content_type'] in ['text/x-review-board-request', 'text/x-phabricator-request'])\n\n\nclass landings(object):\n def __call__(self, bug):\n return sum(1 for c in bug['comments'] if '://hg.mozilla.org/' in c['text'])\n\n\nclass title(object):\n def __call__(self, bug):\n ret = []\n\n keywords = [\n 'fail',\n ]\n for keyword in keywords:\n if keyword in bug['summary'].lower():\n ret.append(keyword)\n\n return ret\n\n\nclass product(object):\n def __call__(self, bug):\n return bug['product']\n\n\nclass component(object):\n def __call__(self, bug):\n return bug['component']\n\n\nclass is_mozillian(object):\n def __call__(self, bug):\n return any(bug['creator_detail']['email'].endswith(domain) for domain in ['@mozilla.com', '@mozilla.org'])\n\n\nclass delta_request_merge(object):\n def __call__(self, bug):\n for history in bug['history']:\n for change in history['changes']:\n if change['added'].startswith('approval-mozilla'):\n uplift_request_datetime = datetime.strptime(history['when'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)\n timedelta = versions.getCloserRelease(uplift_request_datetime)[1] - uplift_request_datetime\n return timedelta.days + timedelta.seconds / (24 * 60 * 60)\n\n return None\n\n\nclass commit_added(object):\n def __call__(self, bug):\n return sum(commit['added'] for commit in bug['commits'])\n\n\nclass 
commit_deleted(object):\n def __call__(self, bug):\n return sum(commit['deleted'] for commit in bug['commits'])\n\n\nclass commit_types(object):\n def __call__(self, bug):\n return sum((commit['types'] for commit in bug['commits']), [])\n\n\ndef cleanup_url(text):\n text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\\S+', '__CODE_REFERENCE_URL__', text)\n return re.sub(r'http\\S+', '__URL__', text)\n\n\ndef cleanup_fileref(text):\n return re.sub(r'\\w+\\.py\\b|\\w+\\.json\\b|\\w+\\.js\\b|\\w+\\.jsm\\b|\\w+\\.html\\b|\\w+\\.css\\b|\\w+\\.c\\b|\\w+\\.cpp\\b|\\w+\\.h\\b', '__FILE_REFERENCE__', text)\n\n\ndef cleanup_responses(text):\n return re.sub('>[^\\n]+', ' ', text)\n\n\ndef cleanup_hex(text):\n return re.sub(r'\\b0[xX][0-9a-fA-F]+\\b', '__HEX_NUMBER__', text)\n\n\ndef cleanup_dll(text):\n return re.sub(r'\\w+(\\.dll|\\.so|\\.dylib)\\b', '__DLL_NAME__', text)\n\n\ndef cleanup_synonyms(text):\n synonyms = [\n ('safemode', ['safemode', 'safe mode']),\n ('str', ['str', 'steps to reproduce', 'repro steps']),\n ('uaf', ['uaf', 'use after free', 'use-after-free']),\n ('asan', ['asan', 'address sanitizer', 'addresssanitizer']),\n ('permafailure', ['permafailure', 'permafailing', 'permafail', 'perma failure', 'perma failing', 'perma fail', 'perma-failure', 'perma-failing', 'perma-fail']),\n ('spec', ['spec', 'specification']),\n ]\n\n for synonym_group, synonym_list in synonyms:\n text = re.sub('|'.join(fr'\\b{synonym}\\b' for synonym in synonym_list), synonym_group, text, flags=re.IGNORECASE)\n\n return text\n\n\ndef cleanup_crash(text):\n return re.sub(r'bp-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{6}[0-9]{6}\\b', '__CRASH_STATS_LINK__', text)\n\n\nclass BugExtractor(BaseEstimator, TransformerMixin):\n def __init__(self, feature_extractors, cleanup_functions, rollback=False, rollback_when=None, commit_data=False):\n self.feature_extractors = feature_extractors\n self.cleanup_functions = cleanup_functions\n self.rollback = rollback\n self.rollback_when = rollback_when\n self.commit_map = repository.get_commit_map() if commit_data else None\n\n def fit(self, x, y=None):\n return self\n\n def transform(self, bugs):\n results = []\n\n for bug in bugs:\n bug_id = bug['id']\n\n if self.rollback:\n bug = bug_snapshot.rollback(bug, self.rollback_when)\n\n data = {}\n\n if self.commit_map is not None:\n if bug_id in self.commit_map:\n bug['commits'] = self.commit_map[bug_id]\n else:\n bug['commits'] = []\n\n for f in self.feature_extractors:\n res = f(bug)\n\n if res is None:\n continue\n\n if isinstance(res, list):\n for item in res:\n data[f.__class__.__name__ + '-' + item] = 'True'\n continue\n\n if isinstance(res, bool):\n res = str(res)\n\n data[f.__class__.__name__] = res\n\n # TODO: Try simply using all possible fields instead of extracting features manually.\n\n for cleanup_function in self.cleanup_functions:\n bug['summary'] = cleanup_function(bug['summary'])\n for c in bug['comments']:\n c['text'] = cleanup_function(c['text'])\n\n result = {\n 'data': data,\n 'title': bug['summary'],\n 'first_comment': bug['comments'][0]['text'],\n 'comments': ' '.join([c['text'] for c in bug['comments']]),\n }\n\n results.append(result)\n\n return pd.DataFrame(results)\n", "path": "bugbug/bug_features.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass TrackingModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n bug_features.priority(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(stop_words='english'), 'title'),\n\n ('comments', self.text_vectorizer(stop_words='english'), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def rollback(self, change):\n return change['field_name'].startswith('cf_tracking_firefox')\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for entry in bug_data['history']:\n for change in entry['changes']:\n if change['field_name'].startswith('cf_tracking_firefox'):\n if change['added'] in ['blocking', '+']:\n classes[bug_id] = 1\n elif change['added'] == '-':\n classes[bug_id] = 0\n\n if bug_id not in classes:\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/tracking.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport re\nfrom datetime import datetime\nfrom datetime import timezone\n\nimport pandas as pd\nfrom libmozdata import versions\nfrom sklearn.base import BaseEstimator\nfrom sklearn.base import TransformerMixin\n\nfrom bugbug import bug_snapshot\nfrom bugbug import repository\n\n\ndef field(bug, field):\n if field in bug and bug[field] != '---':\n return bug[field]\n\n return None\n\n\nclass has_str(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_str')\n\n\nclass has_regression_range(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_regression_range')\n\n\nclass has_crash_signature(object):\n def __call__(self, bug):\n return 'cf_crash_signature' in bug and bug['cf_crash_signature'] != ''\n\n\nclass keywords(object):\n def __init__(self, to_ignore=set()):\n self.to_ignore = to_ignore\n\n def __call__(self, bug):\n keywords = []\n subkeywords = []\n for keyword in bug['keywords']:\n if keyword in self.to_ignore:\n continue\n\n keywords.append(keyword)\n\n if keyword.startswith('sec-'):\n subkeywords.append('sec-')\n elif keyword.startswith('csectype-'):\n subkeywords.append('csectype-')\n return keywords + subkeywords\n\n\nclass severity(object):\n def __call__(self, bug):\n return field(bug, 'severity')\n\n\nclass is_coverity_issue(object):\n def __call__(self, bug):\n return re.search('[CID ?[0-9]+]', bug['summary']) is not None or re.search('[CID ?[0-9]+]', bug['whiteboard']) is not None\n\n\nclass has_url(object):\n def __call__(self, bug):\n return bug['url'] != ''\n\n\nclass has_w3c_url(object):\n def __call__(self, bug):\n return 'w3c' in bug['url']\n\n\nclass has_github_url(object):\n def __call__(self, bug):\n return 'github' in bug['url']\n\n\nclass whiteboard(object):\n def __call__(self, bug):\n ret = []\n\n # TODO: Add any [XXX:YYY] that appears in the whiteboard as [XXX: only\n\n for elem in ['memshrink', '[ux]']:\n if elem in bug['whiteboard'].lower():\n ret.append(elem)\n\n return ret\n\n\nclass patches(object):\n def __call__(self, bug):\n return sum(1 for a in bug['attachments'] if a['is_patch'] or a['content_type'] in ['text/x-review-board-request', 'text/x-phabricator-request'])\n\n\nclass landings(object):\n def __call__(self, bug):\n return sum(1 for c in bug['comments'] if '://hg.mozilla.org/' in c['text'])\n\n\nclass title(object):\n def __call__(self, bug):\n ret = []\n\n keywords = [\n 'fail',\n ]\n for keyword in keywords:\n if keyword in bug['summary'].lower():\n ret.append(keyword)\n\n return ret\n\n\nclass product(object):\n def __call__(self, bug):\n return bug['product']\n\n\nclass component(object):\n def __call__(self, bug):\n return bug['component']\n\n\nclass is_mozillian(object):\n def __call__(self, bug):\n return any(bug['creator_detail']['email'].endswith(domain) for domain in ['@mozilla.com', '@mozilla.org'])\n\n\nclass delta_request_merge(object):\n def __call__(self, bug):\n for history in bug['history']:\n for change in history['changes']:\n if change['added'].startswith('approval-mozilla'):\n uplift_request_datetime = datetime.strptime(history['when'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)\n timedelta = versions.getCloserRelease(uplift_request_datetime)[1] - uplift_request_datetime\n return timedelta.days + timedelta.seconds / (24 * 60 * 60)\n\n return None\n\n\nclass commit_added(object):\n def __call__(self, bug):\n return sum(commit['added'] for commit in bug['commits'])\n\n\nclass 
commit_deleted(object):\n def __call__(self, bug):\n return sum(commit['deleted'] for commit in bug['commits'])\n\n\nclass commit_types(object):\n def __call__(self, bug):\n return sum((commit['types'] for commit in bug['commits']), [])\n\n\nclass priority(object):\n def __call__(self, bug):\n return bug['priority']\n\n\ndef cleanup_url(text):\n text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\\S+', '__CODE_REFERENCE_URL__', text)\n return re.sub(r'http\\S+', '__URL__', text)\n\n\ndef cleanup_fileref(text):\n return re.sub(r'\\w+\\.py\\b|\\w+\\.json\\b|\\w+\\.js\\b|\\w+\\.jsm\\b|\\w+\\.html\\b|\\w+\\.css\\b|\\w+\\.c\\b|\\w+\\.cpp\\b|\\w+\\.h\\b', '__FILE_REFERENCE__', text)\n\n\ndef cleanup_responses(text):\n return re.sub('>[^\\n]+', ' ', text)\n\n\ndef cleanup_hex(text):\n return re.sub(r'\\b0[xX][0-9a-fA-F]+\\b', '__HEX_NUMBER__', text)\n\n\ndef cleanup_dll(text):\n return re.sub(r'\\w+(\\.dll|\\.so|\\.dylib)\\b', '__DLL_NAME__', text)\n\n\ndef cleanup_synonyms(text):\n synonyms = [\n ('safemode', ['safemode', 'safe mode']),\n ('str', ['str', 'steps to reproduce', 'repro steps']),\n ('uaf', ['uaf', 'use after free', 'use-after-free']),\n ('asan', ['asan', 'address sanitizer', 'addresssanitizer']),\n ('permafailure', ['permafailure', 'permafailing', 'permafail', 'perma failure', 'perma failing', 'perma fail', 'perma-failure', 'perma-failing', 'perma-fail']),\n ('spec', ['spec', 'specification']),\n ]\n\n for synonym_group, synonym_list in synonyms:\n text = re.sub('|'.join(fr'\\b{synonym}\\b' for synonym in synonym_list), synonym_group, text, flags=re.IGNORECASE)\n\n return text\n\n\ndef cleanup_crash(text):\n return re.sub(r'bp-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{6}[0-9]{6}\\b', '__CRASH_STATS_LINK__', text)\n\n\nclass BugExtractor(BaseEstimator, TransformerMixin):\n def __init__(self, feature_extractors, cleanup_functions, rollback=False, rollback_when=None, commit_data=False):\n self.feature_extractors = feature_extractors\n self.cleanup_functions = cleanup_functions\n self.rollback = rollback\n self.rollback_when = rollback_when\n self.commit_map = repository.get_commit_map() if commit_data else None\n\n def fit(self, x, y=None):\n return self\n\n def transform(self, bugs):\n results = []\n\n for bug in bugs:\n bug_id = bug['id']\n\n if self.rollback:\n bug = bug_snapshot.rollback(bug, self.rollback_when)\n\n data = {}\n\n if self.commit_map is not None:\n if bug_id in self.commit_map:\n bug['commits'] = self.commit_map[bug_id]\n else:\n bug['commits'] = []\n\n for f in self.feature_extractors:\n res = f(bug)\n\n if res is None:\n continue\n\n if isinstance(res, list):\n for item in res:\n data[f.__class__.__name__ + '-' + item] = 'True'\n continue\n\n if isinstance(res, bool):\n res = str(res)\n\n data[f.__class__.__name__] = res\n\n # TODO: Try simply using all possible fields instead of extracting features manually.\n\n for cleanup_function in self.cleanup_functions:\n bug['summary'] = cleanup_function(bug['summary'])\n for c in bug['comments']:\n c['text'] = cleanup_function(c['text'])\n\n result = {\n 'data': data,\n 'title': bug['summary'],\n 'first_comment': bug['comments'][0]['text'],\n 'comments': ' '.join([c['text'] for c in bug['comments']]),\n }\n\n results.append(result)\n\n return pd.DataFrame(results)\n", "path": "bugbug/bug_features.py"}]} | 3,646 | 228 |
gh_patches_debug_19773 | rasdani/github-patches | git_diff | akvo__akvo-rsr-4044 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ensure UP app only lists projects that the user has access to
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/views/account.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please
7 see < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 import re
11 import json
12
13 from lxml import etree
14 from tastypie.models import ApiKey
15
16 from akvo.rsr.forms import RegisterForm, InvitedUserForm, PasswordResetForm
17 from akvo.rsr.models import Employment
18 from akvo.utils import rsr_send_mail
19
20 from django.conf import settings
21 from django.contrib.auth import login, logout, authenticate, get_user_model
22 from django.contrib.auth.forms import AuthenticationForm
23 from django.core.exceptions import ObjectDoesNotExist, ValidationError
24 from django.core.signing import TimestampSigner, BadSignature
25 from django.http import (HttpResponse, HttpResponseRedirect,
26 HttpResponseForbidden)
27 from django.shortcuts import redirect, render
28
29 from registration.models import RegistrationProfile
30
31 from django.views.decorators.csrf import csrf_exempt
32 from django.views.decorators.http import require_POST
33
34
35 def register(request):
36 """Register form."""
37 if request.method == 'POST':
38 form = RegisterForm(data=request.POST, files=request.FILES)
39 if form.is_valid():
40 # Honeypot field filled in? If so don't register and redirect to home page
41 if request.POST.get('hp_title'):
42 return redirect('index')
43 user = form.save(request)
44 return render(
45 request,
46 'registration/register_complete.html',
47 {'new_user': user},
48 )
49 else:
50 form = RegisterForm()
51 return render(
52 request,
53 'registration/register.html',
54 {'form': form, 'password_length': settings.PASSWORD_MINIMUM_LENGTH}
55 )
56
57
58 def activate(request, activation_key, extra_context=None):
59 """Activate resouce.
60
61 Activate a User's account, if their key is valid and hasn't expired.
62 Any values passed in the keyword argument "extra_context"
63 (which must be a dictionary) will be added to the context.
64 Any values in "extra_context" which are callable will be called prior to
65 being added to the context.
66 """
67 sha = re.compile('^[a-f0-9]{40}$')
68 activation_key = activation_key.lower()
69
70 if sha.search(activation_key):
71 try:
72 registration_profile = RegistrationProfile.objects.get(
73 activation_key=activation_key)
74 except RegistrationProfile.DoesNotExist:
75 user = False
76 else:
77 if not registration_profile.activation_key_expired():
78 registration_profile.activation_key = RegistrationProfile.ACTIVATED
79 registration_profile.save()
80 user = registration_profile.user
81 user.is_active = True
82 user.save()
83
84 # Log in user without password, using custom backend
85 user = authenticate(username=user.username, no_password=True)
86 login(request, user)
87 if extra_context is None:
88 extra_context = {}
89 context = dict()
90 for key, value in extra_context.items():
91 context[key] = callable(value) and value() or value
92 return render(
93 request,
94 'registration/activate.html',
95 context
96 )
97
98
99 def invite_activate(request, inviting_pk, user_pk, employment_pk, token_date, token):
100 """
101 Activate a user that has been invited to use RSR.
102
103 :param request: the request
104 :param inviting_pk: the invitee user's primary key
105 :param user_pk: the invited user's primary key
106 :param employment_pk: the employment's primary key
107 :param token_date: the first part of the token
108 :param token: the second part of the token
109 """
110
111 def approve_employment(invitee, invited, empl):
112 """
113 Approves the employment and sends a mail to the user that has invited the new user.
114
115 :param invitee: the invitee user's instance
116 :param invited: the invited user's instance
117 :param empl: the employment's instance
118 """
119 empl.approve(invitee)
120
121 if invitee:
122 # Send notification email to inviting user
123 rsr_send_mail(
124 [invitee.email],
125 subject='registration/inviting_user_notification_subject.txt',
126 message='registration/inviting_user_notification_message.txt',
127 html_message='registration/inviting_user_notification_message.html',
128 subject_context={
129 'user': invited,
130 },
131 msg_context={
132 'invited_user': invited,
133 'inviting_user': invitee,
134 'organisation': empl.organisation,
135 }
136 )
137
138 def login_and_redirect(req, invited):
139 """
140 Log the invited user in and redirect to the My projects page in MyRSR.
141
142 :param req: the request
143 :param invited: the invited user's instance
144 """
145 invited = authenticate(username=invited.username, no_password=True)
146 login(request, invited)
147 return redirect('my_projects')
148
149 bad_link, user, inviting_user, employment = False, None, None, None
150
151 try:
152 user = get_user_model().objects.get(pk=user_pk)
153 inviting_user = get_user_model().objects.get(pk=inviting_pk)
154 employment = Employment.objects.get(pk=employment_pk) if int(employment_pk) != 0 else None
155 except ObjectDoesNotExist:
156 bad_link = True
157
158 try:
159 TimestampSigner().unsign(':'.join([user.email, token_date, token]))
160 except BadSignature:
161 bad_link = True
162
163 if user and user.is_active:
164 if employment and employment.is_approved:
165 # User is active and employment is approved, so nothing to do here
166 return login_and_redirect(request, user)
167 elif employment and not bad_link:
168 # Employment is not yet approved, and link is ok.
169 # Approve employment and log user in.
170 approve_employment(inviting_user, user, employment)
171 return login_and_redirect(request, user)
172 elif employment is None and not bad_link and request.GET.get('project_invite') is not None:
173 return login_and_redirect(request, user)
174
175 if request.method == 'POST':
176 form = InvitedUserForm(user=user, data=request.POST)
177 if form.is_valid():
178 # Approve employment and save new user details
179 form.save(request)
180 if employment is not None:
181 approve_employment(inviting_user, user, employment)
182 return login_and_redirect(request, user)
183 else:
184 form = InvitedUserForm(user=user)
185
186 context = {
187 'form': form,
188 'bad_link': bad_link,
189 'password_length': settings.PASSWORD_MINIMUM_LENGTH
190 }
191 return render(request, 'registration/invite_activate.html', context)
192
193
194 def sign_in(request):
195 """Sign in.
196
197 POST have two variants with username & email:
198 - username > normal sign in
199 - email > password reset workflow
200 """
201 form = AuthenticationForm()
202 reset_form = PasswordResetForm()
203 if request.method == "POST" and 'username' in request.POST:
204 form = AuthenticationForm(data=request.POST)
205 if form.is_valid():
206 login(request, form.get_user())
207 next_page = request.GET.get('next')
208 return HttpResponseRedirect(next_page) if next_page else redirect('my_projects')
209 # Password reset on sign in page
210 elif request.method == "POST" and 'email' in request.POST:
211 reset_form = PasswordResetForm(data=request.POST)
212 if reset_form.is_valid():
213 reset_form.save(domain_override=settings.RSR_DOMAIN)
214 return HttpResponse()
215 return render(request, 'sign_in.html', {'form': form, 'reset_form': reset_form})
216
217
218 def sign_out(request):
219 """Log out resouce."""
220 logout(request)
221 return redirect('index')
222
223
224 def api_key_xml_response(user, orgs):
225 """Build the XML response.
226
227 This is used by the Up app - so make sure they match on change.
228 """
229 xml_root = etree.Element("credentials")
230
231 # User
232 user_id_element = etree.SubElement(xml_root, "user_id")
233 user_id_element.text = str(user.id)
234 user_username_element = etree.SubElement(xml_root, "username")
235 user_username_element.text = user.username
236
237 # Organisations
238 for org in orgs:
239 org_id_element = etree.SubElement(xml_root, "org_id")
240 org_id_element.text = str(org.id)
241
242 # API key
243 api_key_element = etree.SubElement(xml_root, "api_key")
244 api_key_element.text = ApiKey.objects.get_or_create(user=user)[0].key
245
246 # Published and editable projects
247 projects = orgs.all_projects().published()
248 pub_projs_element = etree.SubElement(xml_root, "published_projects")
249 edit_projs_element = etree.SubElement(xml_root, "allow_edit_projects")
250 for project in projects:
251 project_id_element = etree.SubElement(pub_projs_element, "id")
252 project_id_element.text = str(project.id)
253 if user.has_perm('rsr.change_project', project):
254 project_id_element = etree.SubElement(edit_projs_element, "id")
255 project_id_element.text = str(project.id)
256
257 return etree.tostring(etree.ElementTree(xml_root))
258
259
260 def api_key_json_response(user, orgs):
261 """
262 Build the JSON response. This is used by the Up app - so make sure they match on change.
263 """
264 response_data = dict()
265
266 # User
267 response_data["user_id"] = user.id
268 response_data["username"] = user.username
269
270 # Organisations
271 response_data["organisations"] = [org.id for org in orgs]
272
273 # API key
274 response_data["api_key"] = ApiKey.objects.get_or_create(user=user)[0].key
275
276 # Published projects
277 projects = orgs.all_projects().published()
278 response_data["published_projects"] = [p.id for p in projects]
279
280 # Editable projects
281 perm = 'rsr.change_project'
282 perm_filter = user.get_permission_filter(perm, '')
283 response_data["allow_edit_projects"] = list(projects.filter(perm_filter).values_list('id', flat=True))
284
285 return json.dumps(response_data)
286
287
288 @require_POST
289 @csrf_exempt
290 def api_key(request):
291 """On successful user credentials returns an auth token for API usage.
292
293 Since RSR changed in v3 to allow users without an organisation we need to
294 introduce a way to make old Up apps work as before but new ones support
295 users without any connected organisations.
296 """
297 request_format = request.GET.get('format', 'xml')
298 username = request.POST.get('username', False)
299 password = request.POST.get('password', False)
300 handles_unemployed = bool(request.POST.get("handles_unemployed", False))
301
302 if username and password:
303 try:
304 user = authenticate(username=username, password=password)
305 except ValidationError:
306 user = None
307 if user is not None and user.is_active:
308 orgs = user.approved_organisations()
309 if orgs or handles_unemployed:
310 if request_format == 'xml':
311 return HttpResponse(api_key_xml_response(user, orgs),
312 content_type="text/xml")
313 elif request_format == 'json':
314 return HttpResponse(api_key_json_response(user, orgs),
315 content_type="application/json")
316 return HttpResponseForbidden()
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rsr/views/account.py b/akvo/rsr/views/account.py
--- a/akvo/rsr/views/account.py
+++ b/akvo/rsr/views/account.py
@@ -244,7 +244,7 @@
api_key_element.text = ApiKey.objects.get_or_create(user=user)[0].key
# Published and editable projects
- projects = orgs.all_projects().published()
+ projects = user.my_projects.published()
pub_projs_element = etree.SubElement(xml_root, "published_projects")
edit_projs_element = etree.SubElement(xml_root, "allow_edit_projects")
for project in projects:
@@ -274,7 +274,7 @@
response_data["api_key"] = ApiKey.objects.get_or_create(user=user)[0].key
# Published projects
- projects = orgs.all_projects().published()
+ projects = user.my_projects().published()
response_data["published_projects"] = [p.id for p in projects]
# Editable projects
| {"golden_diff": "diff --git a/akvo/rsr/views/account.py b/akvo/rsr/views/account.py\n--- a/akvo/rsr/views/account.py\n+++ b/akvo/rsr/views/account.py\n@@ -244,7 +244,7 @@\n api_key_element.text = ApiKey.objects.get_or_create(user=user)[0].key\n \n # Published and editable projects\n- projects = orgs.all_projects().published()\n+ projects = user.my_projects.published()\n pub_projs_element = etree.SubElement(xml_root, \"published_projects\")\n edit_projs_element = etree.SubElement(xml_root, \"allow_edit_projects\")\n for project in projects:\n@@ -274,7 +274,7 @@\n response_data[\"api_key\"] = ApiKey.objects.get_or_create(user=user)[0].key\n \n # Published projects\n- projects = orgs.all_projects().published()\n+ projects = user.my_projects().published()\n response_data[\"published_projects\"] = [p.id for p in projects]\n \n # Editable projects\n", "issue": "Ensure UP app only lists projects that user has access\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport re\nimport json\n\nfrom lxml import etree\nfrom tastypie.models import ApiKey\n\nfrom akvo.rsr.forms import RegisterForm, InvitedUserForm, PasswordResetForm\nfrom akvo.rsr.models import Employment\nfrom akvo.utils import rsr_send_mail\n\nfrom django.conf import settings\nfrom django.contrib.auth import login, logout, authenticate, get_user_model\nfrom django.contrib.auth.forms import AuthenticationForm\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.core.signing import TimestampSigner, BadSignature\nfrom django.http import (HttpResponse, HttpResponseRedirect,\n HttpResponseForbidden)\nfrom django.shortcuts import redirect, render\n\nfrom registration.models import RegistrationProfile\n\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\n\n\ndef register(request):\n \"\"\"Register form.\"\"\"\n if request.method == 'POST':\n form = RegisterForm(data=request.POST, files=request.FILES)\n if form.is_valid():\n # Honeypot field filled in? 
If so don't register and redirect to home page\n if request.POST.get('hp_title'):\n return redirect('index')\n user = form.save(request)\n return render(\n request,\n 'registration/register_complete.html',\n {'new_user': user},\n )\n else:\n form = RegisterForm()\n return render(\n request,\n 'registration/register.html',\n {'form': form, 'password_length': settings.PASSWORD_MINIMUM_LENGTH}\n )\n\n\ndef activate(request, activation_key, extra_context=None):\n \"\"\"Activate resouce.\n\n Activate a User's account, if their key is valid and hasn't expired.\n Any values passed in the keyword argument \"extra_context\"\n (which must be a dictionary) will be added to the context.\n Any values in \"extra_context\" which are callable will be called prior to\n being added to the context.\n \"\"\"\n sha = re.compile('^[a-f0-9]{40}$')\n activation_key = activation_key.lower()\n\n if sha.search(activation_key):\n try:\n registration_profile = RegistrationProfile.objects.get(\n activation_key=activation_key)\n except RegistrationProfile.DoesNotExist:\n user = False\n else:\n if not registration_profile.activation_key_expired():\n registration_profile.activation_key = RegistrationProfile.ACTIVATED\n registration_profile.save()\n user = registration_profile.user\n user.is_active = True\n user.save()\n\n # Log in user without password, using custom backend\n user = authenticate(username=user.username, no_password=True)\n login(request, user)\n if extra_context is None:\n extra_context = {}\n context = dict()\n for key, value in extra_context.items():\n context[key] = callable(value) and value() or value\n return render(\n request,\n 'registration/activate.html',\n context\n )\n\n\ndef invite_activate(request, inviting_pk, user_pk, employment_pk, token_date, token):\n \"\"\"\n Activate a user that has been invited to use RSR.\n\n :param request: the request\n :param inviting_pk: the invitee user's primary key\n :param user_pk: the invited user's primary key\n :param employment_pk: the employment's primary key\n :param token_date: the first part of the token\n :param token: the second part of the token\n \"\"\"\n\n def approve_employment(invitee, invited, empl):\n \"\"\"\n Approves the employment and sends a mail to the user that has invited the new user.\n\n :param invitee: the invitee user's instance\n :param invited: the invited user's instance\n :param empl: the employment's instance\n \"\"\"\n empl.approve(invitee)\n\n if invitee:\n # Send notification email to inviting user\n rsr_send_mail(\n [invitee.email],\n subject='registration/inviting_user_notification_subject.txt',\n message='registration/inviting_user_notification_message.txt',\n html_message='registration/inviting_user_notification_message.html',\n subject_context={\n 'user': invited,\n },\n msg_context={\n 'invited_user': invited,\n 'inviting_user': invitee,\n 'organisation': empl.organisation,\n }\n )\n\n def login_and_redirect(req, invited):\n \"\"\"\n Log the invited user in and redirect to the My projects page in MyRSR.\n\n :param req: the request\n :param invited: the invited user's instance\n \"\"\"\n invited = authenticate(username=invited.username, no_password=True)\n login(request, invited)\n return redirect('my_projects')\n\n bad_link, user, inviting_user, employment = False, None, None, None\n\n try:\n user = get_user_model().objects.get(pk=user_pk)\n inviting_user = get_user_model().objects.get(pk=inviting_pk)\n employment = Employment.objects.get(pk=employment_pk) if int(employment_pk) != 0 else None\n except 
ObjectDoesNotExist:\n bad_link = True\n\n try:\n TimestampSigner().unsign(':'.join([user.email, token_date, token]))\n except BadSignature:\n bad_link = True\n\n if user and user.is_active:\n if employment and employment.is_approved:\n # User is active and employment is approved, so nothing to do here\n return login_and_redirect(request, user)\n elif employment and not bad_link:\n # Employment is not yet approved, and link is ok.\n # Approve employment and log user in.\n approve_employment(inviting_user, user, employment)\n return login_and_redirect(request, user)\n elif employment is None and not bad_link and request.GET.get('project_invite') is not None:\n return login_and_redirect(request, user)\n\n if request.method == 'POST':\n form = InvitedUserForm(user=user, data=request.POST)\n if form.is_valid():\n # Approve employment and save new user details\n form.save(request)\n if employment is not None:\n approve_employment(inviting_user, user, employment)\n return login_and_redirect(request, user)\n else:\n form = InvitedUserForm(user=user)\n\n context = {\n 'form': form,\n 'bad_link': bad_link,\n 'password_length': settings.PASSWORD_MINIMUM_LENGTH\n }\n return render(request, 'registration/invite_activate.html', context)\n\n\ndef sign_in(request):\n \"\"\"Sign in.\n\n POST have two variants with username & email:\n - username > normal sign in\n - email > password reset workflow\n \"\"\"\n form = AuthenticationForm()\n reset_form = PasswordResetForm()\n if request.method == \"POST\" and 'username' in request.POST:\n form = AuthenticationForm(data=request.POST)\n if form.is_valid():\n login(request, form.get_user())\n next_page = request.GET.get('next')\n return HttpResponseRedirect(next_page) if next_page else redirect('my_projects')\n # Password reset on sign in page\n elif request.method == \"POST\" and 'email' in request.POST:\n reset_form = PasswordResetForm(data=request.POST)\n if reset_form.is_valid():\n reset_form.save(domain_override=settings.RSR_DOMAIN)\n return HttpResponse()\n return render(request, 'sign_in.html', {'form': form, 'reset_form': reset_form})\n\n\ndef sign_out(request):\n \"\"\"Log out resouce.\"\"\"\n logout(request)\n return redirect('index')\n\n\ndef api_key_xml_response(user, orgs):\n \"\"\"Build the XML response.\n\n This is used by the Up app - so make sure they match on change.\n \"\"\"\n xml_root = etree.Element(\"credentials\")\n\n # User\n user_id_element = etree.SubElement(xml_root, \"user_id\")\n user_id_element.text = str(user.id)\n user_username_element = etree.SubElement(xml_root, \"username\")\n user_username_element.text = user.username\n\n # Organisations\n for org in orgs:\n org_id_element = etree.SubElement(xml_root, \"org_id\")\n org_id_element.text = str(org.id)\n\n # API key\n api_key_element = etree.SubElement(xml_root, \"api_key\")\n api_key_element.text = ApiKey.objects.get_or_create(user=user)[0].key\n\n # Published and editable projects\n projects = orgs.all_projects().published()\n pub_projs_element = etree.SubElement(xml_root, \"published_projects\")\n edit_projs_element = etree.SubElement(xml_root, \"allow_edit_projects\")\n for project in projects:\n project_id_element = etree.SubElement(pub_projs_element, \"id\")\n project_id_element.text = str(project.id)\n if user.has_perm('rsr.change_project', project):\n project_id_element = etree.SubElement(edit_projs_element, \"id\")\n project_id_element.text = str(project.id)\n\n return etree.tostring(etree.ElementTree(xml_root))\n\n\ndef api_key_json_response(user, orgs):\n \"\"\"\n Build 
the JSON response. This is used by the Up app - so make sure they match on change.\n \"\"\"\n response_data = dict()\n\n # User\n response_data[\"user_id\"] = user.id\n response_data[\"username\"] = user.username\n\n # Organisations\n response_data[\"organisations\"] = [org.id for org in orgs]\n\n # API key\n response_data[\"api_key\"] = ApiKey.objects.get_or_create(user=user)[0].key\n\n # Published projects\n projects = orgs.all_projects().published()\n response_data[\"published_projects\"] = [p.id for p in projects]\n\n # Editable projects\n perm = 'rsr.change_project'\n perm_filter = user.get_permission_filter(perm, '')\n response_data[\"allow_edit_projects\"] = list(projects.filter(perm_filter).values_list('id', flat=True))\n\n return json.dumps(response_data)\n\n\n@require_POST\n@csrf_exempt\ndef api_key(request):\n \"\"\"On successful user credentials returns an auth token for API usage.\n\n Since RSR changed in v3 to allow users without an organisation we need to\n introduce a way to make old Up apps work as before but new ones support\n users without any connected organisations.\n \"\"\"\n request_format = request.GET.get('format', 'xml')\n username = request.POST.get('username', False)\n password = request.POST.get('password', False)\n handles_unemployed = bool(request.POST.get(\"handles_unemployed\", False))\n\n if username and password:\n try:\n user = authenticate(username=username, password=password)\n except ValidationError:\n user = None\n if user is not None and user.is_active:\n orgs = user.approved_organisations()\n if orgs or handles_unemployed:\n if request_format == 'xml':\n return HttpResponse(api_key_xml_response(user, orgs),\n content_type=\"text/xml\")\n elif request_format == 'json':\n return HttpResponse(api_key_json_response(user, orgs),\n content_type=\"application/json\")\n return HttpResponseForbidden()\n", "path": "akvo/rsr/views/account.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nimport re\nimport json\n\nfrom lxml import etree\nfrom tastypie.models import ApiKey\n\nfrom akvo.rsr.forms import RegisterForm, InvitedUserForm, PasswordResetForm\nfrom akvo.rsr.models import Employment\nfrom akvo.utils import rsr_send_mail\n\nfrom django.conf import settings\nfrom django.contrib.auth import login, logout, authenticate, get_user_model\nfrom django.contrib.auth.forms import AuthenticationForm\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.core.signing import TimestampSigner, BadSignature\nfrom django.http import (HttpResponse, HttpResponseRedirect,\n HttpResponseForbidden)\nfrom django.shortcuts import redirect, render\n\nfrom registration.models import RegistrationProfile\n\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_POST\n\n\ndef register(request):\n \"\"\"Register form.\"\"\"\n if request.method == 'POST':\n form = RegisterForm(data=request.POST, files=request.FILES)\n if form.is_valid():\n # Honeypot field filled in? 
If so don't register and redirect to home page\n if request.POST.get('hp_title'):\n return redirect('index')\n user = form.save(request)\n return render(\n request,\n 'registration/register_complete.html',\n {'new_user': user},\n )\n else:\n form = RegisterForm()\n return render(\n request,\n 'registration/register.html',\n {'form': form, 'password_length': settings.PASSWORD_MINIMUM_LENGTH}\n )\n\n\ndef activate(request, activation_key, extra_context=None):\n \"\"\"Activate resouce.\n\n Activate a User's account, if their key is valid and hasn't expired.\n Any values passed in the keyword argument \"extra_context\"\n (which must be a dictionary) will be added to the context.\n Any values in \"extra_context\" which are callable will be called prior to\n being added to the context.\n \"\"\"\n sha = re.compile('^[a-f0-9]{40}$')\n activation_key = activation_key.lower()\n\n if sha.search(activation_key):\n try:\n registration_profile = RegistrationProfile.objects.get(\n activation_key=activation_key)\n except RegistrationProfile.DoesNotExist:\n user = False\n else:\n if not registration_profile.activation_key_expired():\n registration_profile.activation_key = RegistrationProfile.ACTIVATED\n registration_profile.save()\n user = registration_profile.user\n user.is_active = True\n user.save()\n\n # Log in user without password, using custom backend\n user = authenticate(username=user.username, no_password=True)\n login(request, user)\n if extra_context is None:\n extra_context = {}\n context = dict()\n for key, value in extra_context.items():\n context[key] = callable(value) and value() or value\n return render(\n request,\n 'registration/activate.html',\n context\n )\n\n\ndef invite_activate(request, inviting_pk, user_pk, employment_pk, token_date, token):\n \"\"\"\n Activate a user that has been invited to use RSR.\n\n :param request: the request\n :param inviting_pk: the invitee user's primary key\n :param user_pk: the invited user's primary key\n :param employment_pk: the employment's primary key\n :param token_date: the first part of the token\n :param token: the second part of the token\n \"\"\"\n\n def approve_employment(invitee, invited, empl):\n \"\"\"\n Approves the employment and sends a mail to the user that has invited the new user.\n\n :param invitee: the invitee user's instance\n :param invited: the invited user's instance\n :param empl: the employment's instance\n \"\"\"\n empl.approve(invitee)\n\n if invitee:\n # Send notification email to inviting user\n rsr_send_mail(\n [invitee.email],\n subject='registration/inviting_user_notification_subject.txt',\n message='registration/inviting_user_notification_message.txt',\n html_message='registration/inviting_user_notification_message.html',\n subject_context={\n 'user': invited,\n },\n msg_context={\n 'invited_user': invited,\n 'inviting_user': invitee,\n 'organisation': empl.organisation,\n }\n )\n\n def login_and_redirect(req, invited):\n \"\"\"\n Log the invited user in and redirect to the My projects page in MyRSR.\n\n :param req: the request\n :param invited: the invited user's instance\n \"\"\"\n invited = authenticate(username=invited.username, no_password=True)\n login(request, invited)\n return redirect('my_projects')\n\n bad_link, user, inviting_user, employment = False, None, None, None\n\n try:\n user = get_user_model().objects.get(pk=user_pk)\n inviting_user = get_user_model().objects.get(pk=inviting_pk)\n employment = Employment.objects.get(pk=employment_pk) if int(employment_pk) != 0 else None\n except 
ObjectDoesNotExist:\n bad_link = True\n\n try:\n TimestampSigner().unsign(':'.join([user.email, token_date, token]))\n except BadSignature:\n bad_link = True\n\n if user and user.is_active:\n if employment and employment.is_approved:\n # User is active and employment is approved, so nothing to do here\n return login_and_redirect(request, user)\n elif employment and not bad_link:\n # Employment is not yet approved, and link is ok.\n # Approve employment and log user in.\n approve_employment(inviting_user, user, employment)\n return login_and_redirect(request, user)\n elif employment is None and not bad_link and request.GET.get('project_invite') is not None:\n return login_and_redirect(request, user)\n\n if request.method == 'POST':\n form = InvitedUserForm(user=user, data=request.POST)\n if form.is_valid():\n # Approve employment and save new user details\n form.save(request)\n if employment is not None:\n approve_employment(inviting_user, user, employment)\n return login_and_redirect(request, user)\n else:\n form = InvitedUserForm(user=user)\n\n context = {\n 'form': form,\n 'bad_link': bad_link,\n 'password_length': settings.PASSWORD_MINIMUM_LENGTH\n }\n return render(request, 'registration/invite_activate.html', context)\n\n\ndef sign_in(request):\n \"\"\"Sign in.\n\n POST have two variants with username & email:\n - username > normal sign in\n - email > password reset workflow\n \"\"\"\n form = AuthenticationForm()\n reset_form = PasswordResetForm()\n if request.method == \"POST\" and 'username' in request.POST:\n form = AuthenticationForm(data=request.POST)\n if form.is_valid():\n login(request, form.get_user())\n next_page = request.GET.get('next')\n return HttpResponseRedirect(next_page) if next_page else redirect('my_projects')\n # Password reset on sign in page\n elif request.method == \"POST\" and 'email' in request.POST:\n reset_form = PasswordResetForm(data=request.POST)\n if reset_form.is_valid():\n reset_form.save(domain_override=settings.RSR_DOMAIN)\n return HttpResponse()\n return render(request, 'sign_in.html', {'form': form, 'reset_form': reset_form})\n\n\ndef sign_out(request):\n \"\"\"Log out resouce.\"\"\"\n logout(request)\n return redirect('index')\n\n\ndef api_key_xml_response(user, orgs):\n \"\"\"Build the XML response.\n\n This is used by the Up app - so make sure they match on change.\n \"\"\"\n xml_root = etree.Element(\"credentials\")\n\n # User\n user_id_element = etree.SubElement(xml_root, \"user_id\")\n user_id_element.text = str(user.id)\n user_username_element = etree.SubElement(xml_root, \"username\")\n user_username_element.text = user.username\n\n # Organisations\n for org in orgs:\n org_id_element = etree.SubElement(xml_root, \"org_id\")\n org_id_element.text = str(org.id)\n\n # API key\n api_key_element = etree.SubElement(xml_root, \"api_key\")\n api_key_element.text = ApiKey.objects.get_or_create(user=user)[0].key\n\n # Published and editable projects\n projects = user.my_projects.published()\n pub_projs_element = etree.SubElement(xml_root, \"published_projects\")\n edit_projs_element = etree.SubElement(xml_root, \"allow_edit_projects\")\n for project in projects:\n project_id_element = etree.SubElement(pub_projs_element, \"id\")\n project_id_element.text = str(project.id)\n if user.has_perm('rsr.change_project', project):\n project_id_element = etree.SubElement(edit_projs_element, \"id\")\n project_id_element.text = str(project.id)\n\n return etree.tostring(etree.ElementTree(xml_root))\n\n\ndef api_key_json_response(user, orgs):\n \"\"\"\n Build the 
JSON response. This is used by the Up app - so make sure they match on change.\n \"\"\"\n response_data = dict()\n\n # User\n response_data[\"user_id\"] = user.id\n response_data[\"username\"] = user.username\n\n # Organisations\n response_data[\"organisations\"] = [org.id for org in orgs]\n\n # API key\n response_data[\"api_key\"] = ApiKey.objects.get_or_create(user=user)[0].key\n\n # Published projects\n projects = user.my_projects().published()\n response_data[\"published_projects\"] = [p.id for p in projects]\n\n # Editable projects\n perm = 'rsr.change_project'\n perm_filter = user.get_permission_filter(perm, '')\n response_data[\"allow_edit_projects\"] = list(projects.filter(perm_filter).values_list('id', flat=True))\n\n return json.dumps(response_data)\n\n\n@require_POST\n@csrf_exempt\ndef api_key(request):\n \"\"\"On successful user credentials returns an auth token for API usage.\n\n Since RSR changed in v3 to allow users without an organisation we need to\n introduce a way to make old Up apps work as before but new ones support\n users without any connected organisations.\n \"\"\"\n request_format = request.GET.get('format', 'xml')\n username = request.POST.get('username', False)\n password = request.POST.get('password', False)\n handles_unemployed = bool(request.POST.get(\"handles_unemployed\", False))\n\n if username and password:\n try:\n user = authenticate(username=username, password=password)\n except ValidationError:\n user = None\n if user is not None and user.is_active:\n orgs = user.approved_organisations()\n if orgs or handles_unemployed:\n if request_format == 'xml':\n return HttpResponse(api_key_xml_response(user, orgs),\n content_type=\"text/xml\")\n elif request_format == 'json':\n return HttpResponse(api_key_json_response(user, orgs),\n content_type=\"application/json\")\n return HttpResponseForbidden()\n", "path": "akvo/rsr/views/account.py"}]} | 3,536 | 225 |
gh_patches_debug_12100 | rasdani/github-patches | git_diff | piskvorky__gensim-681 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError: Popping a non-existing value from `_keywords`
OS X 10.11.1
Python 2.7.10
gensim 0.12.4
To reproduce this in Python shell:
```
>>> import gensim.summarization
>>> t = "Victor S. Sage Compare Sage 50c Editions Find accounting software that's right for your business Every product comes with anytime, anywhere online access; automatic updates; access to unlimited support; access to built-in credit card processing and payroll; and advanced reporting. Three solutions for your business 1 user From $249/year Buy now Free Trial 1-5 users From $299/year Buy now Free Trial 3-40 users From $1,199/year Buy now Free Trial Essential Accounting Accounts payable, accounts receivable, cash management check check check open check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check Advanced Accounting Automated tasks, audit trail, budgeting, change order processing check check open check check check check check check check check check check check check check check check check check check check check check check check check check check check check In-depth Accounting Fast processing, industry-specific features, workflow management check open check check check check check check check check check check check Disclaimers open * This product is backed by a no-risk guarantee for first-time Sage 50 customers. If, within 60 days of purchase, you are not convinced that Sage 50 is the best accounting program for your business, we will refund your money (less and rebate you have received for this purchase). Dated proof of purchase and return of product is required. For details, call 877-481-0341."
>>> import gensim.summarization
>>> keywords = gensim.summarization.keywords(t, pos_filter=[], ratio=0.2, lemmatize=True, scores=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/gensim/summarization/keywords.py", line 229, in keywords
combined_keywords = _get_combined_keywords(keywords, text.split())
File "/usr/local/lib/python2.7/site-packages/gensim/summarization/keywords.py", line 171, in _get_combined_keywords
_keywords.pop(keyword)
KeyError: u'check'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gensim/summarization/keywords.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
5
6 from gensim.summarization.pagerank_weighted import pagerank_weighted as _pagerank
7 from gensim.summarization.textcleaner import clean_text_by_word as _clean_text_by_word
8 from gensim.summarization.textcleaner import tokenize_by_word as _tokenize_by_word
9 from gensim.summarization.commons import build_graph as _build_graph
10 from gensim.summarization.commons import remove_unreachable_nodes as _remove_unreachable_nodes
11 from gensim.utils import to_unicode
12 from itertools import combinations as _combinations
13 from six.moves.queue import Queue as _Queue
14 from six.moves import xrange
15 from six import iteritems
16
17
18 WINDOW_SIZE = 2
19
20 """
21 Check tags in http://www.clips.ua.ac.be/pages/mbsp-tags and use only first two letters
22 Example: filter for nouns and adjectives:
23 INCLUDING_FILTER = ['NN', 'JJ']
24 """
25 INCLUDING_FILTER = ['NN', 'JJ']
26 EXCLUDING_FILTER = []
27
28
29 def _get_pos_filters():
30 return frozenset(INCLUDING_FILTER), frozenset(EXCLUDING_FILTER)
31
32
33 def _get_words_for_graph(tokens, pos_filter):
34 if pos_filter is None:
35 include_filters, exclude_filters = _get_pos_filters()
36 else:
37 include_filters = set(pos_filter)
38 exclude_filters = frozenset([])
39 if include_filters and exclude_filters:
40 raise ValueError("Can't use both include and exclude filters, should use only one")
41
42 result = []
43 for word, unit in iteritems(tokens):
44 if exclude_filters and unit.tag in exclude_filters:
45 continue
46 if (include_filters and unit.tag in include_filters) or not include_filters or not unit.tag:
47 result.append(unit.token)
48 return result
49
50
51 def _get_first_window(split_text):
52 return split_text[:WINDOW_SIZE]
53
54
55 def _set_graph_edge(graph, tokens, word_a, word_b):
56 if word_a in tokens and word_b in tokens:
57 lemma_a = tokens[word_a].token
58 lemma_b = tokens[word_b].token
59 edge = (lemma_a, lemma_b)
60
61 if graph.has_node(lemma_a) and graph.has_node(lemma_b) and not graph.has_edge(edge):
62 graph.add_edge(edge)
63
64
65 def _process_first_window(graph, tokens, split_text):
66 first_window = _get_first_window(split_text)
67 for word_a, word_b in _combinations(first_window, 2):
68 _set_graph_edge(graph, tokens, word_a, word_b)
69
70
71 def _init_queue(split_text):
72 queue = _Queue()
73 first_window = _get_first_window(split_text)
74 for word in first_window[1:]:
75 queue.put(word)
76 return queue
77
78
79 def _process_word(graph, tokens, queue, word):
80 for word_to_compare in _queue_iterator(queue):
81 _set_graph_edge(graph, tokens, word, word_to_compare)
82
83
84 def _update_queue(queue, word):
85 queue.get()
86 queue.put(word)
87 assert queue.qsize() == (WINDOW_SIZE - 1)
88
89
90 def _process_text(graph, tokens, split_text):
91 queue = _init_queue(split_text)
92 for i in xrange(WINDOW_SIZE, len(split_text)):
93 word = split_text[i]
94 _process_word(graph, tokens, queue, word)
95 _update_queue(queue, word)
96
97
98 def _queue_iterator(queue):
99 iterations = queue.qsize()
100 for i in xrange(iterations):
101 var = queue.get()
102 yield var
103 queue.put(var)
104
105
106 def _set_graph_edges(graph, tokens, split_text):
107 _process_first_window(graph, tokens, split_text)
108 _process_text(graph, tokens, split_text)
109
110
111 def _extract_tokens(lemmas, scores, ratio, words):
112 lemmas.sort(key=lambda s: scores[s], reverse=True)
113
114 # If no "words" option is selected, the number of sentences is
115 # reduced by the provided ratio, else, the ratio is ignored.
116 length = len(lemmas) * ratio if words is None else words
117 return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]
118
119
120 def _lemmas_to_words(tokens):
121 lemma_to_word = {}
122 for word, unit in iteritems(tokens):
123 lemma = unit.token
124 if lemma in lemma_to_word:
125 lemma_to_word[lemma].append(word)
126 else:
127 lemma_to_word[lemma] = [word]
128 return lemma_to_word
129
130
131 def _get_keywords_with_score(extracted_lemmas, lemma_to_word):
132 """
133 :param extracted_lemmas:list of tuples
134 :param lemma_to_word: dict of {lemma:list of words}
135 :return: dict of {keyword:score}
136 """
137 keywords = {}
138 for score, lemma in extracted_lemmas:
139 keyword_list = lemma_to_word[lemma]
140 for keyword in keyword_list:
141 keywords[keyword] = score
142 return keywords
143
144
145 def _strip_word(word):
146 stripped_word_list = list(_tokenize_by_word(word))
147 return stripped_word_list[0] if stripped_word_list else ""
148
149
150 def _get_combined_keywords(_keywords, split_text):
151 """
152 :param keywords:dict of keywords:scores
153 :param split_text: list of strings
154 :return: combined_keywords:list
155 """
156 result = []
157 _keywords = _keywords.copy()
158 len_text = len(split_text)
159 for i in xrange(len_text):
160 word = _strip_word(split_text[i])
161 if word in _keywords:
162 combined_word = [word]
163 if i + 1 == len_text:
164 result.append(word) # appends last word if keyword and doesn't iterate
165 for j in xrange(i + 1, len_text):
166 other_word = _strip_word(split_text[j])
167 if other_word in _keywords and other_word == split_text[j]:
168 combined_word.append(other_word)
169 else:
170 for keyword in combined_word:
171 _keywords.pop(keyword)
172 result.append(" ".join(combined_word))
173 break
174 return result
175
176
177 def _get_average_score(concept, _keywords):
178 word_list = concept.split()
179 word_counter = 0
180 total = 0
181 for word in word_list:
182 total += _keywords[word]
183 word_counter += 1
184 return total / word_counter
185
186
187 def _format_results(_keywords, combined_keywords, split, scores):
188 """
189 :param keywords:dict of keywords:scores
190 :param combined_keywords:list of word/s
191 """
192 combined_keywords.sort(key=lambda w: _get_average_score(w, _keywords), reverse=True)
193 if scores:
194 return [(word, _get_average_score(word, _keywords)) for word in combined_keywords]
195 if split:
196 return combined_keywords
197 return "\n".join(combined_keywords)
198
199
200 def keywords(text, ratio=0.2, words=None, split=False, scores=False, pos_filter=['NN', 'JJ'], lemmatize=False):
201 # Gets a dict of word -> lemma
202 text = to_unicode(text)
203 tokens = _clean_text_by_word(text)
204 split_text = list(_tokenize_by_word(text))
205
206 # Creates the graph and adds the edges
207 graph = _build_graph(_get_words_for_graph(tokens, pos_filter))
208 _set_graph_edges(graph, tokens, split_text)
209 del split_text # It's no longer used
210
211 _remove_unreachable_nodes(graph)
212
213 # Ranks the tokens using the PageRank algorithm. Returns dict of lemma -> score
214 pagerank_scores = _pagerank(graph)
215
216 extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)
217
218 # The results can be polluted by many variations of the same word
219 if lemmatize:
220 lemmas_to_word = {}
221 for word, unit in iteritems(tokens):
222 lemmas_to_word[unit.token] = [word]
223 else:
224 lemmas_to_word = _lemmas_to_words(tokens)
225
226 keywords = _get_keywords_with_score(extracted_lemmas, lemmas_to_word)
227
228 # text.split() to keep numbers and punctuation marks, so separeted concepts are not combined
229 combined_keywords = _get_combined_keywords(keywords, text.split())
230
231 return _format_results(keywords, combined_keywords, split, scores)
232
233
234 def get_graph(text):
235 tokens = _clean_text_by_word(text)
236 split_text = list(_tokenize_by_word(text))
237
238 graph = _build_graph(_get_words_for_graph(tokens))
239 _set_graph_edges(graph, tokens, split_text)
240
241 return graph
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gensim/summarization/keywords.py b/gensim/summarization/keywords.py
--- a/gensim/summarization/keywords.py
+++ b/gensim/summarization/keywords.py
@@ -164,7 +164,7 @@
result.append(word) # appends last word if keyword and doesn't iterate
for j in xrange(i + 1, len_text):
other_word = _strip_word(split_text[j])
- if other_word in _keywords and other_word == split_text[j]:
+ if other_word in _keywords and other_word == split_text[j] and not other_word in combined_word:
combined_word.append(other_word)
else:
for keyword in combined_word:
| {"golden_diff": "diff --git a/gensim/summarization/keywords.py b/gensim/summarization/keywords.py\n--- a/gensim/summarization/keywords.py\n+++ b/gensim/summarization/keywords.py\n@@ -164,7 +164,7 @@\n result.append(word) # appends last word if keyword and doesn't iterate\n for j in xrange(i + 1, len_text):\n other_word = _strip_word(split_text[j])\n- if other_word in _keywords and other_word == split_text[j]:\n+ if other_word in _keywords and other_word == split_text[j] and not other_word in combined_word:\n combined_word.append(other_word)\n else:\n for keyword in combined_word:\n", "issue": "KeyError: Popping a non-existing value from `_keywords`\nOS X 10.11.1\nPython 2.7.10\ngensim 0.12.4\n\nTo reproduce this in Python shell:\n\n```\n>>> import gensim.summarization\n>>> t = \"Victor S. Sage Compare Sage 50c Editions Find accounting software that's right for your business Every product comes with anytime, anywhere online access; automatic updates; access to unlimited support; access to built-in credit card processing and payroll; and advanced reporting. Three solutions for your business 1 user From $249/year Buy now Free Trial 1-5 users From $299/year Buy now Free Trial 3-40 users From $1,199/year Buy now Free Trial Essential Accounting Accounts payable, accounts receivable, cash management check check check open check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check check Advanced Accounting Automated tasks, audit trail, budgeting, change order processing check check open check check check check check check check check check check check check check check check check check check check check check check check check check check check check In-depth Accounting Fast processing, industry-specific features, workflow management check open check check check check check check check check check check check Disclaimers open * This product is backed by a no-risk guarantee for first-time Sage 50 customers. If, within 60 days of purchase, you are not convinced that Sage 50 is the best accounting program for your business, we will refund your money (less and rebate you have received for this purchase). Dated proof of purchase and return of product is required. 
For details, call 877-481-0341.\"\n>>> import gensim.summarization\n>>> keywords = gensim.summarization.keywords(t, pos_filter=[], ratio=0.2, lemmatize=True, scores=True)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python2.7/site-packages/gensim/summarization/keywords.py\", line 229, in keywords\n combined_keywords = _get_combined_keywords(keywords, text.split())\n File \"/usr/local/lib/python2.7/site-packages/gensim/summarization/keywords.py\", line 171, in _get_combined_keywords\n _keywords.pop(keyword)\nKeyError: u'check'\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\nfrom gensim.summarization.pagerank_weighted import pagerank_weighted as _pagerank\nfrom gensim.summarization.textcleaner import clean_text_by_word as _clean_text_by_word\nfrom gensim.summarization.textcleaner import tokenize_by_word as _tokenize_by_word\nfrom gensim.summarization.commons import build_graph as _build_graph\nfrom gensim.summarization.commons import remove_unreachable_nodes as _remove_unreachable_nodes\nfrom gensim.utils import to_unicode\nfrom itertools import combinations as _combinations\nfrom six.moves.queue import Queue as _Queue\nfrom six.moves import xrange\nfrom six import iteritems\n\n\nWINDOW_SIZE = 2\n\n\"\"\"\nCheck tags in http://www.clips.ua.ac.be/pages/mbsp-tags and use only first two letters\nExample: filter for nouns and adjectives:\nINCLUDING_FILTER = ['NN', 'JJ']\n\"\"\"\nINCLUDING_FILTER = ['NN', 'JJ']\nEXCLUDING_FILTER = []\n\n\ndef _get_pos_filters():\n return frozenset(INCLUDING_FILTER), frozenset(EXCLUDING_FILTER)\n\n\ndef _get_words_for_graph(tokens, pos_filter):\n if pos_filter is None:\n include_filters, exclude_filters = _get_pos_filters()\n else:\n include_filters = set(pos_filter)\n exclude_filters = frozenset([])\n if include_filters and exclude_filters:\n raise ValueError(\"Can't use both include and exclude filters, should use only one\")\n\n result = []\n for word, unit in iteritems(tokens):\n if exclude_filters and unit.tag in exclude_filters:\n continue\n if (include_filters and unit.tag in include_filters) or not include_filters or not unit.tag:\n result.append(unit.token)\n return result\n\n\ndef _get_first_window(split_text):\n return split_text[:WINDOW_SIZE]\n\n\ndef _set_graph_edge(graph, tokens, word_a, word_b):\n if word_a in tokens and word_b in tokens:\n lemma_a = tokens[word_a].token\n lemma_b = tokens[word_b].token\n edge = (lemma_a, lemma_b)\n\n if graph.has_node(lemma_a) and graph.has_node(lemma_b) and not graph.has_edge(edge):\n graph.add_edge(edge)\n\n\ndef _process_first_window(graph, tokens, split_text):\n first_window = _get_first_window(split_text)\n for word_a, word_b in _combinations(first_window, 2):\n _set_graph_edge(graph, tokens, word_a, word_b)\n\n\ndef _init_queue(split_text):\n queue = _Queue()\n first_window = _get_first_window(split_text)\n for word in first_window[1:]:\n queue.put(word)\n return queue\n\n\ndef _process_word(graph, tokens, queue, word):\n for word_to_compare in _queue_iterator(queue):\n _set_graph_edge(graph, tokens, word, word_to_compare)\n\n\ndef _update_queue(queue, word):\n queue.get()\n queue.put(word)\n assert queue.qsize() == (WINDOW_SIZE - 1)\n\n\ndef _process_text(graph, tokens, split_text):\n queue = _init_queue(split_text)\n for i in xrange(WINDOW_SIZE, len(split_text)):\n word = split_text[i]\n _process_word(graph, tokens, queue, word)\n 
_update_queue(queue, word)\n\n\ndef _queue_iterator(queue):\n iterations = queue.qsize()\n for i in xrange(iterations):\n var = queue.get()\n yield var\n queue.put(var)\n\n\ndef _set_graph_edges(graph, tokens, split_text):\n _process_first_window(graph, tokens, split_text)\n _process_text(graph, tokens, split_text)\n\n\ndef _extract_tokens(lemmas, scores, ratio, words):\n lemmas.sort(key=lambda s: scores[s], reverse=True)\n\n # If no \"words\" option is selected, the number of sentences is\n # reduced by the provided ratio, else, the ratio is ignored.\n length = len(lemmas) * ratio if words is None else words\n return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]\n\n\ndef _lemmas_to_words(tokens):\n lemma_to_word = {}\n for word, unit in iteritems(tokens):\n lemma = unit.token\n if lemma in lemma_to_word:\n lemma_to_word[lemma].append(word)\n else:\n lemma_to_word[lemma] = [word]\n return lemma_to_word\n\n\ndef _get_keywords_with_score(extracted_lemmas, lemma_to_word):\n \"\"\"\n :param extracted_lemmas:list of tuples\n :param lemma_to_word: dict of {lemma:list of words}\n :return: dict of {keyword:score}\n \"\"\"\n keywords = {}\n for score, lemma in extracted_lemmas:\n keyword_list = lemma_to_word[lemma]\n for keyword in keyword_list:\n keywords[keyword] = score\n return keywords\n\n\ndef _strip_word(word):\n stripped_word_list = list(_tokenize_by_word(word))\n return stripped_word_list[0] if stripped_word_list else \"\"\n\n\ndef _get_combined_keywords(_keywords, split_text):\n \"\"\"\n :param keywords:dict of keywords:scores\n :param split_text: list of strings\n :return: combined_keywords:list\n \"\"\"\n result = []\n _keywords = _keywords.copy()\n len_text = len(split_text)\n for i in xrange(len_text):\n word = _strip_word(split_text[i])\n if word in _keywords:\n combined_word = [word]\n if i + 1 == len_text:\n result.append(word) # appends last word if keyword and doesn't iterate\n for j in xrange(i + 1, len_text):\n other_word = _strip_word(split_text[j])\n if other_word in _keywords and other_word == split_text[j]:\n combined_word.append(other_word)\n else:\n for keyword in combined_word:\n _keywords.pop(keyword)\n result.append(\" \".join(combined_word))\n break\n return result\n\n\ndef _get_average_score(concept, _keywords):\n word_list = concept.split()\n word_counter = 0\n total = 0\n for word in word_list:\n total += _keywords[word]\n word_counter += 1\n return total / word_counter\n\n\ndef _format_results(_keywords, combined_keywords, split, scores):\n \"\"\"\n :param keywords:dict of keywords:scores\n :param combined_keywords:list of word/s\n \"\"\"\n combined_keywords.sort(key=lambda w: _get_average_score(w, _keywords), reverse=True)\n if scores:\n return [(word, _get_average_score(word, _keywords)) for word in combined_keywords]\n if split:\n return combined_keywords\n return \"\\n\".join(combined_keywords)\n\n\ndef keywords(text, ratio=0.2, words=None, split=False, scores=False, pos_filter=['NN', 'JJ'], lemmatize=False):\n # Gets a dict of word -> lemma\n text = to_unicode(text)\n tokens = _clean_text_by_word(text)\n split_text = list(_tokenize_by_word(text))\n\n # Creates the graph and adds the edges\n graph = _build_graph(_get_words_for_graph(tokens, pos_filter))\n _set_graph_edges(graph, tokens, split_text)\n del split_text # It's no longer used\n\n _remove_unreachable_nodes(graph)\n\n # Ranks the tokens using the PageRank algorithm. 
Returns dict of lemma -> score\n pagerank_scores = _pagerank(graph)\n\n extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)\n\n # The results can be polluted by many variations of the same word\n if lemmatize:\n lemmas_to_word = {}\n for word, unit in iteritems(tokens):\n lemmas_to_word[unit.token] = [word]\n else:\n lemmas_to_word = _lemmas_to_words(tokens)\n\n keywords = _get_keywords_with_score(extracted_lemmas, lemmas_to_word)\n\n # text.split() to keep numbers and punctuation marks, so separeted concepts are not combined\n combined_keywords = _get_combined_keywords(keywords, text.split())\n\n return _format_results(keywords, combined_keywords, split, scores)\n\n\ndef get_graph(text):\n tokens = _clean_text_by_word(text)\n split_text = list(_tokenize_by_word(text))\n\n graph = _build_graph(_get_words_for_graph(tokens))\n _set_graph_edges(graph, tokens, split_text)\n\n return graph\n", "path": "gensim/summarization/keywords.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\nfrom gensim.summarization.pagerank_weighted import pagerank_weighted as _pagerank\nfrom gensim.summarization.textcleaner import clean_text_by_word as _clean_text_by_word\nfrom gensim.summarization.textcleaner import tokenize_by_word as _tokenize_by_word\nfrom gensim.summarization.commons import build_graph as _build_graph\nfrom gensim.summarization.commons import remove_unreachable_nodes as _remove_unreachable_nodes\nfrom gensim.utils import to_unicode\nfrom itertools import combinations as _combinations\nfrom six.moves.queue import Queue as _Queue\nfrom six.moves import xrange\nfrom six import iteritems\n\n\nWINDOW_SIZE = 2\n\n\"\"\"\nCheck tags in http://www.clips.ua.ac.be/pages/mbsp-tags and use only first two letters\nExample: filter for nouns and adjectives:\nINCLUDING_FILTER = ['NN', 'JJ']\n\"\"\"\nINCLUDING_FILTER = ['NN', 'JJ']\nEXCLUDING_FILTER = []\n\n\ndef _get_pos_filters():\n return frozenset(INCLUDING_FILTER), frozenset(EXCLUDING_FILTER)\n\n\ndef _get_words_for_graph(tokens, pos_filter):\n if pos_filter is None:\n include_filters, exclude_filters = _get_pos_filters()\n else:\n include_filters = set(pos_filter)\n exclude_filters = frozenset([])\n if include_filters and exclude_filters:\n raise ValueError(\"Can't use both include and exclude filters, should use only one\")\n\n result = []\n for word, unit in iteritems(tokens):\n if exclude_filters and unit.tag in exclude_filters:\n continue\n if (include_filters and unit.tag in include_filters) or not include_filters or not unit.tag:\n result.append(unit.token)\n return result\n\n\ndef _get_first_window(split_text):\n return split_text[:WINDOW_SIZE]\n\n\ndef _set_graph_edge(graph, tokens, word_a, word_b):\n if word_a in tokens and word_b in tokens:\n lemma_a = tokens[word_a].token\n lemma_b = tokens[word_b].token\n edge = (lemma_a, lemma_b)\n\n if graph.has_node(lemma_a) and graph.has_node(lemma_b) and not graph.has_edge(edge):\n graph.add_edge(edge)\n\n\ndef _process_first_window(graph, tokens, split_text):\n first_window = _get_first_window(split_text)\n for word_a, word_b in _combinations(first_window, 2):\n _set_graph_edge(graph, tokens, word_a, word_b)\n\n\ndef _init_queue(split_text):\n queue = _Queue()\n first_window = _get_first_window(split_text)\n for word in first_window[1:]:\n queue.put(word)\n return queue\n\n\ndef _process_word(graph, tokens, queue, word):\n for word_to_compare in 
_queue_iterator(queue):\n _set_graph_edge(graph, tokens, word, word_to_compare)\n\n\ndef _update_queue(queue, word):\n queue.get()\n queue.put(word)\n assert queue.qsize() == (WINDOW_SIZE - 1)\n\n\ndef _process_text(graph, tokens, split_text):\n queue = _init_queue(split_text)\n for i in xrange(WINDOW_SIZE, len(split_text)):\n word = split_text[i]\n _process_word(graph, tokens, queue, word)\n _update_queue(queue, word)\n\n\ndef _queue_iterator(queue):\n iterations = queue.qsize()\n for i in xrange(iterations):\n var = queue.get()\n yield var\n queue.put(var)\n\n\ndef _set_graph_edges(graph, tokens, split_text):\n _process_first_window(graph, tokens, split_text)\n _process_text(graph, tokens, split_text)\n\n\ndef _extract_tokens(lemmas, scores, ratio, words):\n lemmas.sort(key=lambda s: scores[s], reverse=True)\n\n # If no \"words\" option is selected, the number of sentences is\n # reduced by the provided ratio, else, the ratio is ignored.\n length = len(lemmas) * ratio if words is None else words\n return [(scores[lemmas[i]], lemmas[i],) for i in range(int(length))]\n\n\ndef _lemmas_to_words(tokens):\n lemma_to_word = {}\n for word, unit in iteritems(tokens):\n lemma = unit.token\n if lemma in lemma_to_word:\n lemma_to_word[lemma].append(word)\n else:\n lemma_to_word[lemma] = [word]\n return lemma_to_word\n\n\ndef _get_keywords_with_score(extracted_lemmas, lemma_to_word):\n \"\"\"\n :param extracted_lemmas:list of tuples\n :param lemma_to_word: dict of {lemma:list of words}\n :return: dict of {keyword:score}\n \"\"\"\n keywords = {}\n for score, lemma in extracted_lemmas:\n keyword_list = lemma_to_word[lemma]\n for keyword in keyword_list:\n keywords[keyword] = score\n return keywords\n\n\ndef _strip_word(word):\n stripped_word_list = list(_tokenize_by_word(word))\n return stripped_word_list[0] if stripped_word_list else \"\"\n\n\ndef _get_combined_keywords(_keywords, split_text):\n \"\"\"\n :param keywords:dict of keywords:scores\n :param split_text: list of strings\n :return: combined_keywords:list\n \"\"\"\n result = []\n _keywords = _keywords.copy()\n len_text = len(split_text)\n for i in xrange(len_text):\n word = _strip_word(split_text[i])\n if word in _keywords:\n combined_word = [word]\n if i + 1 == len_text:\n result.append(word) # appends last word if keyword and doesn't iterate\n for j in xrange(i + 1, len_text):\n other_word = _strip_word(split_text[j])\n if other_word in _keywords and other_word == split_text[j] and not other_word in combined_word:\n combined_word.append(other_word)\n else:\n for keyword in combined_word:\n _keywords.pop(keyword)\n result.append(\" \".join(combined_word))\n break\n return result\n\n\ndef _get_average_score(concept, _keywords):\n word_list = concept.split()\n word_counter = 0\n total = 0\n for word in word_list:\n total += _keywords[word]\n word_counter += 1\n return total / word_counter\n\n\ndef _format_results(_keywords, combined_keywords, split, scores):\n \"\"\"\n :param keywords:dict of keywords:scores\n :param combined_keywords:list of word/s\n \"\"\"\n combined_keywords.sort(key=lambda w: _get_average_score(w, _keywords), reverse=True)\n if scores:\n return [(word, _get_average_score(word, _keywords)) for word in combined_keywords]\n if split:\n return combined_keywords\n return \"\\n\".join(combined_keywords)\n\n\ndef keywords(text, ratio=0.2, words=None, split=False, scores=False, pos_filter=['NN', 'JJ'], lemmatize=False):\n # Gets a dict of word -> lemma\n text = to_unicode(text)\n tokens = _clean_text_by_word(text)\n split_text = 
list(_tokenize_by_word(text))\n\n # Creates the graph and adds the edges\n graph = _build_graph(_get_words_for_graph(tokens, pos_filter))\n _set_graph_edges(graph, tokens, split_text)\n del split_text # It's no longer used\n\n _remove_unreachable_nodes(graph)\n\n # Ranks the tokens using the PageRank algorithm. Returns dict of lemma -> score\n pagerank_scores = _pagerank(graph)\n\n extracted_lemmas = _extract_tokens(graph.nodes(), pagerank_scores, ratio, words)\n\n # The results can be polluted by many variations of the same word\n if lemmatize:\n lemmas_to_word = {}\n for word, unit in iteritems(tokens):\n lemmas_to_word[unit.token] = [word]\n else:\n lemmas_to_word = _lemmas_to_words(tokens)\n\n keywords = _get_keywords_with_score(extracted_lemmas, lemmas_to_word)\n\n # text.split() to keep numbers and punctuation marks, so separeted concepts are not combined\n combined_keywords = _get_combined_keywords(keywords, text.split())\n\n return _format_results(keywords, combined_keywords, split, scores)\n\n\ndef get_graph(text):\n tokens = _clean_text_by_word(text)\n split_text = list(_tokenize_by_word(text))\n\n graph = _build_graph(_get_words_for_graph(tokens))\n _set_graph_edges(graph, tokens, split_text)\n\n return graph\n", "path": "gensim/summarization/keywords.py"}]} | 3,314 | 165 |
gh_patches_debug_14978 | rasdani/github-patches | git_diff | archlinux__archinstall-2241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Solved] broken initramfs, black screen
Broken initramfs after clean install with encryption, KDE desktop and proprietary Nvidia drivers.
1. Loads grub
2. Decrypt drive
3. Loads initramfs
4. Black screen with no output
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/hardware.py`
Content:
```
1 import os
2 from enum import Enum
3 from functools import cached_property
4 from pathlib import Path
5 from typing import Optional, Dict, List
6
7 from .exceptions import SysCallError
8 from .general import SysCommand
9 from .networking import list_interfaces, enrich_iface_types
10 from .output import debug
11
12
13 class CpuVendor(Enum):
14 AuthenticAMD = 'amd'
15 GenuineIntel = 'intel'
16 _Unknown = 'unknown'
17
18 @classmethod
19 def get_vendor(cls, name: str) -> 'CpuVendor':
20 if vendor := getattr(cls, name, None):
21 return vendor
22 else:
23 debug(f"Unknown CPU vendor '{name}' detected.")
24 return cls._Unknown
25
26 def _has_microcode(self) -> bool:
27 match self:
28 case CpuVendor.AuthenticAMD | CpuVendor.GenuineIntel:
29 return True
30 case _:
31 return False
32
33 def get_ucode(self) -> Optional[Path]:
34 if self._has_microcode():
35 return Path(self.value + '-ucode.img')
36 return None
37
38
39 class GfxPackage(Enum):
40 IntelMediaDriver = 'intel-media-driver'
41 LibvaIntelDriver = 'libva-intel-driver'
42 LibvaMesaDriver = 'libva-mesa-driver'
43 Mesa = "mesa"
44 Nvidia = 'nvidia'
45 NvidiaOpen = 'nvidia-open'
46 VulkanIntel = 'vulkan-intel'
47 VulkanRadeon = 'vulkan-radeon'
48 Xf86VideoAmdgpu = "xf86-video-amdgpu"
49 Xf86VideoAti = "xf86-video-ati"
50 Xf86VideoNouveau = 'xf86-video-nouveau'
51 Xf86VideoVmware = 'xf86-video-vmware'
52
53
54 class GfxDriver(Enum):
55 AllOpenSource = 'All open-source'
56 AmdOpenSource = 'AMD / ATI (open-source)'
57 IntelOpenSource = 'Intel (open-source)'
58 NvidiaOpenKernel = 'Nvidia (open kernel module for newer GPUs, Turing+)'
59 NvidiaOpenSource = 'Nvidia (open-source nouveau driver)'
60 NvidiaProprietary = 'Nvidia (proprietary)'
61 VMOpenSource = 'VMware / VirtualBox (open-source)'
62
63 def is_nvidia(self) -> bool:
64 match self:
65 case GfxDriver.NvidiaProprietary | \
66 GfxDriver.NvidiaOpenSource | \
67 GfxDriver.NvidiaOpenKernel:
68 return True
69 case _:
70 return False
71
72 def packages(self) -> List[GfxPackage]:
73 match self:
74 case GfxDriver.AllOpenSource:
75 return [
76 GfxPackage.Mesa,
77 GfxPackage.Xf86VideoAmdgpu,
78 GfxPackage.Xf86VideoAti,
79 GfxPackage.Xf86VideoNouveau,
80 GfxPackage.Xf86VideoVmware,
81 GfxPackage.LibvaMesaDriver,
82 GfxPackage.LibvaIntelDriver,
83 GfxPackage.IntelMediaDriver,
84 GfxPackage.VulkanRadeon,
85 GfxPackage.VulkanIntel
86 ]
87 case GfxDriver.AmdOpenSource:
88 return [
89 GfxPackage.Mesa,
90 GfxPackage.Xf86VideoAmdgpu,
91 GfxPackage.Xf86VideoAti,
92 GfxPackage.LibvaMesaDriver,
93 GfxPackage.VulkanRadeon
94 ]
95 case GfxDriver.IntelOpenSource:
96 return [
97 GfxPackage.Mesa,
98 GfxPackage.LibvaIntelDriver,
99 GfxPackage.IntelMediaDriver,
100 GfxPackage.VulkanIntel
101 ]
102 case GfxDriver.NvidiaOpenKernel:
103 return [GfxPackage.NvidiaOpen]
104 case GfxDriver.NvidiaOpenSource:
105 return [
106 GfxPackage.Mesa,
107 GfxPackage.Xf86VideoNouveau,
108 GfxPackage.LibvaMesaDriver
109 ]
110 case GfxDriver.NvidiaProprietary:
111 return [GfxPackage.Nvidia]
112 case GfxDriver.VMOpenSource:
113 return [
114 GfxPackage.Mesa,
115 GfxPackage.Xf86VideoVmware
116 ]
117
118
119 class _SysInfo:
120 def __init__(self):
121 pass
122
123 @cached_property
124 def cpu_info(self) -> Dict[str, str]:
125 """
126 Returns system cpu information
127 """
128 cpu_info_path = Path("/proc/cpuinfo")
129 cpu: Dict[str, str] = {}
130
131 with cpu_info_path.open() as file:
132 for line in file:
133 if line := line.strip():
134 key, value = line.split(":", maxsplit=1)
135 cpu[key.strip()] = value.strip()
136
137 return cpu
138
139 @cached_property
140 def mem_info(self) -> Dict[str, int]:
141 """
142 Returns system memory information
143 """
144 mem_info_path = Path("/proc/meminfo")
145 mem_info: Dict[str, int] = {}
146
147 with mem_info_path.open() as file:
148 for line in file:
149 key, value = line.strip().split(':')
150 num = value.split()[0]
151 mem_info[key] = int(num)
152
153 return mem_info
154
155 def mem_info_by_key(self, key: str) -> int:
156 return self.mem_info[key]
157
158 @cached_property
159 def loaded_modules(self) -> List[str]:
160 """
161 Returns loaded kernel modules
162 """
163 modules_path = Path('/proc/modules')
164 modules: List[str] = []
165
166 with modules_path.open() as file:
167 for line in file:
168 module = line.split(maxsplit=1)[0]
169 modules.append(module)
170
171 return modules
172
173
174 _sys_info = _SysInfo()
175
176
177 class SysInfo:
178 @staticmethod
179 def has_wifi() -> bool:
180 ifaces = list(list_interfaces().values())
181 return 'WIRELESS' in enrich_iface_types(ifaces).values()
182
183 @staticmethod
184 def has_uefi() -> bool:
185 return os.path.isdir('/sys/firmware/efi')
186
187 @staticmethod
188 def _graphics_devices() -> Dict[str, str]:
189 cards: Dict[str, str] = {}
190 for line in SysCommand("lspci"):
191 if b' VGA ' in line or b' 3D ' in line:
192 _, identifier = line.split(b': ', 1)
193 cards[identifier.strip().decode('UTF-8')] = str(line)
194 return cards
195
196 @staticmethod
197 def has_nvidia_graphics() -> bool:
198 return any('nvidia' in x.lower() for x in SysInfo._graphics_devices())
199
200 @staticmethod
201 def has_amd_graphics() -> bool:
202 return any('amd' in x.lower() for x in SysInfo._graphics_devices())
203
204 @staticmethod
205 def has_intel_graphics() -> bool:
206 return any('intel' in x.lower() for x in SysInfo._graphics_devices())
207
208 @staticmethod
209 def cpu_vendor() -> Optional[CpuVendor]:
210 if vendor := _sys_info.cpu_info.get('vendor_id'):
211 return CpuVendor.get_vendor(vendor)
212 return None
213
214 @staticmethod
215 def cpu_model() -> Optional[str]:
216 return _sys_info.cpu_info.get('model name', None)
217
218 @staticmethod
219 def sys_vendor() -> str:
220 with open(f"/sys/devices/virtual/dmi/id/sys_vendor") as vendor:
221 return vendor.read().strip()
222
223 @staticmethod
224 def product_name() -> str:
225 with open(f"/sys/devices/virtual/dmi/id/product_name") as product:
226 return product.read().strip()
227
228 @staticmethod
229 def mem_available() -> int:
230 return _sys_info.mem_info_by_key('MemAvailable')
231
232 @staticmethod
233 def mem_free() -> int:
234 return _sys_info.mem_info_by_key('MemFree')
235
236 @staticmethod
237 def mem_total() -> int:
238 return _sys_info.mem_info_by_key('MemTotal')
239
240 @staticmethod
241 def virtualization() -> Optional[str]:
242 try:
243 return str(SysCommand("systemd-detect-virt")).strip('\r\n')
244 except SysCallError as err:
245 debug(f"Could not detect virtual system: {err}")
246
247 return None
248
249 @staticmethod
250 def is_vm() -> bool:
251 try:
252 result = SysCommand("systemd-detect-virt")
253 return b"none" not in b"".join(result).lower()
254 except SysCallError as err:
255 debug(f"System is not running in a VM: {err}")
256
257 return False
258
259 @staticmethod
260 def requires_sof_fw() -> bool:
261 return 'snd_sof' in _sys_info.loaded_modules
262
263 @staticmethod
264 def requires_alsa_fw() -> bool:
265 modules = (
266 'snd_asihpi',
267 'snd_cs46xx',
268 'snd_darla20',
269 'snd_darla24',
270 'snd_echo3g',
271 'snd_emu10k1',
272 'snd_gina20',
273 'snd_gina24',
274 'snd_hda_codec_ca0132',
275 'snd_hdsp',
276 'snd_indigo',
277 'snd_indigodj',
278 'snd_indigodjx',
279 'snd_indigoio',
280 'snd_indigoiox',
281 'snd_layla20',
282 'snd_layla24',
283 'snd_mia',
284 'snd_mixart',
285 'snd_mona',
286 'snd_pcxhr',
287 'snd_vx_lib'
288 )
289
290 for loaded_module in _sys_info.loaded_modules:
291 if loaded_module in modules:
292 return True
293
294 return False
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/archinstall/lib/hardware.py b/archinstall/lib/hardware.py
--- a/archinstall/lib/hardware.py
+++ b/archinstall/lib/hardware.py
@@ -42,6 +42,7 @@
LibvaMesaDriver = 'libva-mesa-driver'
Mesa = "mesa"
Nvidia = 'nvidia'
+ NvidiaDKMS = 'nvidia-dkms'
NvidiaOpen = 'nvidia-open'
VulkanIntel = 'vulkan-intel'
VulkanRadeon = 'vulkan-radeon'
@@ -108,7 +109,10 @@
GfxPackage.LibvaMesaDriver
]
case GfxDriver.NvidiaProprietary:
- return [GfxPackage.Nvidia]
+ return [
+ GfxPackage.Nvidia,
+ GfxPackage.NvidiaDKMS
+ ]
case GfxDriver.VMOpenSource:
return [
GfxPackage.Mesa,
| {"golden_diff": "diff --git a/archinstall/lib/hardware.py b/archinstall/lib/hardware.py\n--- a/archinstall/lib/hardware.py\n+++ b/archinstall/lib/hardware.py\n@@ -42,6 +42,7 @@\n \tLibvaMesaDriver = 'libva-mesa-driver'\n \tMesa = \"mesa\"\n \tNvidia = 'nvidia'\n+\tNvidiaDKMS = 'nvidia-dkms'\n \tNvidiaOpen = 'nvidia-open'\n \tVulkanIntel = 'vulkan-intel'\n \tVulkanRadeon = 'vulkan-radeon'\n@@ -108,7 +109,10 @@\n \t\t\t\t\tGfxPackage.LibvaMesaDriver\n \t\t\t\t]\n \t\t\tcase GfxDriver.NvidiaProprietary:\n-\t\t\t\treturn [GfxPackage.Nvidia]\n+\t\t\t\treturn [\n+\t\t\t\t\tGfxPackage.Nvidia,\n+\t\t\t\t\tGfxPackage.NvidiaDKMS\n+\t\t\t\t]\n \t\t\tcase GfxDriver.VMOpenSource:\n \t\t\t\treturn [\n \t\t\t\t\tGfxPackage.Mesa,\n", "issue": "[Solved] broken intramfs, black screen\nBroken initramfs after clean install with encryption, kde desktop and propretary nvidia drivers.\r\n1. Loads grub\r\n2. Decrypt drive\r\n3. Loads initramfs\r\n4. Black screen with no output\n", "before_files": [{"content": "import os\nfrom enum import Enum\nfrom functools import cached_property\nfrom pathlib import Path\nfrom typing import Optional, Dict, List\n\nfrom .exceptions import SysCallError\nfrom .general import SysCommand\nfrom .networking import list_interfaces, enrich_iface_types\nfrom .output import debug\n\n\nclass CpuVendor(Enum):\n\tAuthenticAMD = 'amd'\n\tGenuineIntel = 'intel'\n\t_Unknown = 'unknown'\n\n\t@classmethod\n\tdef get_vendor(cls, name: str) -> 'CpuVendor':\n\t\tif vendor := getattr(cls, name, None):\n\t\t\treturn vendor\n\t\telse:\n\t\t\tdebug(f\"Unknown CPU vendor '{name}' detected.\")\n\t\t\treturn cls._Unknown\n\n\tdef _has_microcode(self) -> bool:\n\t\tmatch self:\n\t\t\tcase CpuVendor.AuthenticAMD | CpuVendor.GenuineIntel:\n\t\t\t\treturn True\n\t\t\tcase _:\n\t\t\t\treturn False\n\n\tdef get_ucode(self) -> Optional[Path]:\n\t\tif self._has_microcode():\n\t\t\treturn Path(self.value + '-ucode.img')\n\t\treturn None\n\n\nclass GfxPackage(Enum):\n\tIntelMediaDriver = 'intel-media-driver'\n\tLibvaIntelDriver = 'libva-intel-driver'\n\tLibvaMesaDriver = 'libva-mesa-driver'\n\tMesa = \"mesa\"\n\tNvidia = 'nvidia'\n\tNvidiaOpen = 'nvidia-open'\n\tVulkanIntel = 'vulkan-intel'\n\tVulkanRadeon = 'vulkan-radeon'\n\tXf86VideoAmdgpu = \"xf86-video-amdgpu\"\n\tXf86VideoAti = \"xf86-video-ati\"\n\tXf86VideoNouveau = 'xf86-video-nouveau'\n\tXf86VideoVmware = 'xf86-video-vmware'\n\n\nclass GfxDriver(Enum):\n\tAllOpenSource = 'All open-source'\n\tAmdOpenSource = 'AMD / ATI (open-source)'\n\tIntelOpenSource = 'Intel (open-source)'\n\tNvidiaOpenKernel = 'Nvidia (open kernel module for newer GPUs, Turing+)'\n\tNvidiaOpenSource = 'Nvidia (open-source nouveau driver)'\n\tNvidiaProprietary = 'Nvidia (proprietary)'\n\tVMOpenSource = 'VMware / VirtualBox (open-source)'\n\n\tdef is_nvidia(self) -> bool:\n\t\tmatch self:\n\t\t\tcase GfxDriver.NvidiaProprietary | \\\n\t\t\t\tGfxDriver.NvidiaOpenSource | \\\n\t\t\t\tGfxDriver.NvidiaOpenKernel:\n\t\t\t\treturn True\n\t\t\tcase _:\n\t\t\t\treturn False\n\n\tdef packages(self) -> List[GfxPackage]:\n\t\tmatch self:\n\t\t\tcase GfxDriver.AllOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoAmdgpu,\n\t\t\t\t\tGfxPackage.Xf86VideoAti,\n\t\t\t\t\tGfxPackage.Xf86VideoNouveau,\n\t\t\t\t\tGfxPackage.Xf86VideoVmware,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver,\n\t\t\t\t\tGfxPackage.LibvaIntelDriver,\n\t\t\t\t\tGfxPackage.IntelMediaDriver,\n\t\t\t\t\tGfxPackage.VulkanRadeon,\n\t\t\t\t\tGfxPackage.VulkanIntel\n\t\t\t\t]\n\t\t\tcase 
GfxDriver.AmdOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoAmdgpu,\n\t\t\t\t\tGfxPackage.Xf86VideoAti,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver,\n\t\t\t\t\tGfxPackage.VulkanRadeon\n\t\t\t\t]\n\t\t\tcase GfxDriver.IntelOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.LibvaIntelDriver,\n\t\t\t\t\tGfxPackage.IntelMediaDriver,\n\t\t\t\t\tGfxPackage.VulkanIntel\n\t\t\t\t]\n\t\t\tcase GfxDriver.NvidiaOpenKernel:\n\t\t\t\treturn [GfxPackage.NvidiaOpen]\n\t\t\tcase GfxDriver.NvidiaOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoNouveau,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver\n\t\t\t\t]\n\t\t\tcase GfxDriver.NvidiaProprietary:\n\t\t\t\treturn [GfxPackage.Nvidia]\n\t\t\tcase GfxDriver.VMOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoVmware\n\t\t\t\t]\n\n\nclass _SysInfo:\n\tdef __init__(self):\n\t\tpass\n\n\t@cached_property\n\tdef cpu_info(self) -> Dict[str, str]:\n\t\t\"\"\"\n\t\tReturns system cpu information\n\t\t\"\"\"\n\t\tcpu_info_path = Path(\"/proc/cpuinfo\")\n\t\tcpu: Dict[str, str] = {}\n\n\t\twith cpu_info_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tif line := line.strip():\n\t\t\t\t\tkey, value = line.split(\":\", maxsplit=1)\n\t\t\t\t\tcpu[key.strip()] = value.strip()\n\n\t\treturn cpu\n\n\t@cached_property\n\tdef mem_info(self) -> Dict[str, int]:\n\t\t\"\"\"\n\t\tReturns system memory information\n\t\t\"\"\"\n\t\tmem_info_path = Path(\"/proc/meminfo\")\n\t\tmem_info: Dict[str, int] = {}\n\n\t\twith mem_info_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tkey, value = line.strip().split(':')\n\t\t\t\tnum = value.split()[0]\n\t\t\t\tmem_info[key] = int(num)\n\n\t\treturn mem_info\n\n\tdef mem_info_by_key(self, key: str) -> int:\n\t\treturn self.mem_info[key]\n\n\t@cached_property\n\tdef loaded_modules(self) -> List[str]:\n\t\t\"\"\"\n\t\tReturns loaded kernel modules\n\t\t\"\"\"\n\t\tmodules_path = Path('/proc/modules')\n\t\tmodules: List[str] = []\n\n\t\twith modules_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tmodule = line.split(maxsplit=1)[0]\n\t\t\t\tmodules.append(module)\n\n\t\treturn modules\n\n\n_sys_info = _SysInfo()\n\n\nclass SysInfo:\n\t@staticmethod\n\tdef has_wifi() -> bool:\n\t\tifaces = list(list_interfaces().values())\n\t\treturn 'WIRELESS' in enrich_iface_types(ifaces).values()\n\n\t@staticmethod\n\tdef has_uefi() -> bool:\n\t\treturn os.path.isdir('/sys/firmware/efi')\n\n\t@staticmethod\n\tdef _graphics_devices() -> Dict[str, str]:\n\t\tcards: Dict[str, str] = {}\n\t\tfor line in SysCommand(\"lspci\"):\n\t\t\tif b' VGA ' in line or b' 3D ' in line:\n\t\t\t\t_, identifier = line.split(b': ', 1)\n\t\t\t\tcards[identifier.strip().decode('UTF-8')] = str(line)\n\t\treturn cards\n\n\t@staticmethod\n\tdef has_nvidia_graphics() -> bool:\n\t\treturn any('nvidia' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef has_amd_graphics() -> bool:\n\t\treturn any('amd' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef has_intel_graphics() -> bool:\n\t\treturn any('intel' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef cpu_vendor() -> Optional[CpuVendor]:\n\t\tif vendor := _sys_info.cpu_info.get('vendor_id'):\n\t\t\treturn CpuVendor.get_vendor(vendor)\n\t\treturn None\n\n\t@staticmethod\n\tdef cpu_model() -> Optional[str]:\n\t\treturn _sys_info.cpu_info.get('model name', None)\n\n\t@staticmethod\n\tdef sys_vendor() -> 
str:\n\t\twith open(f\"/sys/devices/virtual/dmi/id/sys_vendor\") as vendor:\n\t\t\treturn vendor.read().strip()\n\n\t@staticmethod\n\tdef product_name() -> str:\n\t\twith open(f\"/sys/devices/virtual/dmi/id/product_name\") as product:\n\t\t\treturn product.read().strip()\n\n\t@staticmethod\n\tdef mem_available() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemAvailable')\n\n\t@staticmethod\n\tdef mem_free() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemFree')\n\n\t@staticmethod\n\tdef mem_total() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemTotal')\n\n\t@staticmethod\n\tdef virtualization() -> Optional[str]:\n\t\ttry:\n\t\t\treturn str(SysCommand(\"systemd-detect-virt\")).strip('\\r\\n')\n\t\texcept SysCallError as err:\n\t\t\tdebug(f\"Could not detect virtual system: {err}\")\n\n\t\treturn None\n\n\t@staticmethod\n\tdef is_vm() -> bool:\n\t\ttry:\n\t\t\tresult = SysCommand(\"systemd-detect-virt\")\n\t\t\treturn b\"none\" not in b\"\".join(result).lower()\n\t\texcept SysCallError as err:\n\t\t\tdebug(f\"System is not running in a VM: {err}\")\n\n\t\treturn False\n\n\t@staticmethod\n\tdef requires_sof_fw() -> bool:\n\t\treturn 'snd_sof' in _sys_info.loaded_modules\n\n\t@staticmethod\n\tdef requires_alsa_fw() -> bool:\n\t\tmodules = (\n\t\t\t'snd_asihpi',\n\t\t\t'snd_cs46xx',\n\t\t\t'snd_darla20',\n\t\t\t'snd_darla24',\n\t\t\t'snd_echo3g',\n\t\t\t'snd_emu10k1',\n\t\t\t'snd_gina20',\n\t\t\t'snd_gina24',\n\t\t\t'snd_hda_codec_ca0132',\n\t\t\t'snd_hdsp',\n\t\t\t'snd_indigo',\n\t\t\t'snd_indigodj',\n\t\t\t'snd_indigodjx',\n\t\t\t'snd_indigoio',\n\t\t\t'snd_indigoiox',\n\t\t\t'snd_layla20',\n\t\t\t'snd_layla24',\n\t\t\t'snd_mia',\n\t\t\t'snd_mixart',\n\t\t\t'snd_mona',\n\t\t\t'snd_pcxhr',\n\t\t\t'snd_vx_lib'\n\t\t)\n\n\t\tfor loaded_module in _sys_info.loaded_modules:\n\t\t\tif loaded_module in modules:\n\t\t\t\treturn True\n\n\t\treturn False\n", "path": "archinstall/lib/hardware.py"}], "after_files": [{"content": "import os\nfrom enum import Enum\nfrom functools import cached_property\nfrom pathlib import Path\nfrom typing import Optional, Dict, List\n\nfrom .exceptions import SysCallError\nfrom .general import SysCommand\nfrom .networking import list_interfaces, enrich_iface_types\nfrom .output import debug\n\n\nclass CpuVendor(Enum):\n\tAuthenticAMD = 'amd'\n\tGenuineIntel = 'intel'\n\t_Unknown = 'unknown'\n\n\t@classmethod\n\tdef get_vendor(cls, name: str) -> 'CpuVendor':\n\t\tif vendor := getattr(cls, name, None):\n\t\t\treturn vendor\n\t\telse:\n\t\t\tdebug(f\"Unknown CPU vendor '{name}' detected.\")\n\t\t\treturn cls._Unknown\n\n\tdef _has_microcode(self) -> bool:\n\t\tmatch self:\n\t\t\tcase CpuVendor.AuthenticAMD | CpuVendor.GenuineIntel:\n\t\t\t\treturn True\n\t\t\tcase _:\n\t\t\t\treturn False\n\n\tdef get_ucode(self) -> Optional[Path]:\n\t\tif self._has_microcode():\n\t\t\treturn Path(self.value + '-ucode.img')\n\t\treturn None\n\n\nclass GfxPackage(Enum):\n\tIntelMediaDriver = 'intel-media-driver'\n\tLibvaIntelDriver = 'libva-intel-driver'\n\tLibvaMesaDriver = 'libva-mesa-driver'\n\tMesa = \"mesa\"\n\tNvidia = 'nvidia'\n\tNvidiaDKMS = 'nvidia-dkms'\n\tNvidiaOpen = 'nvidia-open'\n\tVulkanIntel = 'vulkan-intel'\n\tVulkanRadeon = 'vulkan-radeon'\n\tXf86VideoAmdgpu = \"xf86-video-amdgpu\"\n\tXf86VideoAti = \"xf86-video-ati\"\n\tXf86VideoNouveau = 'xf86-video-nouveau'\n\tXf86VideoVmware = 'xf86-video-vmware'\n\n\nclass GfxDriver(Enum):\n\tAllOpenSource = 'All open-source'\n\tAmdOpenSource = 'AMD / ATI (open-source)'\n\tIntelOpenSource = 'Intel 
(open-source)'\n\tNvidiaOpenKernel = 'Nvidia (open kernel module for newer GPUs, Turing+)'\n\tNvidiaOpenSource = 'Nvidia (open-source nouveau driver)'\n\tNvidiaProprietary = 'Nvidia (proprietary)'\n\tVMOpenSource = 'VMware / VirtualBox (open-source)'\n\n\tdef is_nvidia(self) -> bool:\n\t\tmatch self:\n\t\t\tcase GfxDriver.NvidiaProprietary | \\\n\t\t\t\tGfxDriver.NvidiaOpenSource | \\\n\t\t\t\tGfxDriver.NvidiaOpenKernel:\n\t\t\t\treturn True\n\t\t\tcase _:\n\t\t\t\treturn False\n\n\tdef packages(self) -> List[GfxPackage]:\n\t\tmatch self:\n\t\t\tcase GfxDriver.AllOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoAmdgpu,\n\t\t\t\t\tGfxPackage.Xf86VideoAti,\n\t\t\t\t\tGfxPackage.Xf86VideoNouveau,\n\t\t\t\t\tGfxPackage.Xf86VideoVmware,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver,\n\t\t\t\t\tGfxPackage.LibvaIntelDriver,\n\t\t\t\t\tGfxPackage.IntelMediaDriver,\n\t\t\t\t\tGfxPackage.VulkanRadeon,\n\t\t\t\t\tGfxPackage.VulkanIntel\n\t\t\t\t]\n\t\t\tcase GfxDriver.AmdOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoAmdgpu,\n\t\t\t\t\tGfxPackage.Xf86VideoAti,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver,\n\t\t\t\t\tGfxPackage.VulkanRadeon\n\t\t\t\t]\n\t\t\tcase GfxDriver.IntelOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.LibvaIntelDriver,\n\t\t\t\t\tGfxPackage.IntelMediaDriver,\n\t\t\t\t\tGfxPackage.VulkanIntel\n\t\t\t\t]\n\t\t\tcase GfxDriver.NvidiaOpenKernel:\n\t\t\t\treturn [GfxPackage.NvidiaOpen]\n\t\t\tcase GfxDriver.NvidiaOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoNouveau,\n\t\t\t\t\tGfxPackage.LibvaMesaDriver\n\t\t\t\t]\n\t\t\tcase GfxDriver.NvidiaProprietary:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Nvidia,\n\t\t\t\t\tGfxPackage.NvidiaDKMS\n\t\t\t\t]\n\t\t\tcase GfxDriver.VMOpenSource:\n\t\t\t\treturn [\n\t\t\t\t\tGfxPackage.Mesa,\n\t\t\t\t\tGfxPackage.Xf86VideoVmware\n\t\t\t\t]\n\n\nclass _SysInfo:\n\tdef __init__(self):\n\t\tpass\n\n\t@cached_property\n\tdef cpu_info(self) -> Dict[str, str]:\n\t\t\"\"\"\n\t\tReturns system cpu information\n\t\t\"\"\"\n\t\tcpu_info_path = Path(\"/proc/cpuinfo\")\n\t\tcpu: Dict[str, str] = {}\n\n\t\twith cpu_info_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tif line := line.strip():\n\t\t\t\t\tkey, value = line.split(\":\", maxsplit=1)\n\t\t\t\t\tcpu[key.strip()] = value.strip()\n\n\t\treturn cpu\n\n\t@cached_property\n\tdef mem_info(self) -> Dict[str, int]:\n\t\t\"\"\"\n\t\tReturns system memory information\n\t\t\"\"\"\n\t\tmem_info_path = Path(\"/proc/meminfo\")\n\t\tmem_info: Dict[str, int] = {}\n\n\t\twith mem_info_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tkey, value = line.strip().split(':')\n\t\t\t\tnum = value.split()[0]\n\t\t\t\tmem_info[key] = int(num)\n\n\t\treturn mem_info\n\n\tdef mem_info_by_key(self, key: str) -> int:\n\t\treturn self.mem_info[key]\n\n\t@cached_property\n\tdef loaded_modules(self) -> List[str]:\n\t\t\"\"\"\n\t\tReturns loaded kernel modules\n\t\t\"\"\"\n\t\tmodules_path = Path('/proc/modules')\n\t\tmodules: List[str] = []\n\n\t\twith modules_path.open() as file:\n\t\t\tfor line in file:\n\t\t\t\tmodule = line.split(maxsplit=1)[0]\n\t\t\t\tmodules.append(module)\n\n\t\treturn modules\n\n\n_sys_info = _SysInfo()\n\n\nclass SysInfo:\n\t@staticmethod\n\tdef has_wifi() -> bool:\n\t\tifaces = list(list_interfaces().values())\n\t\treturn 'WIRELESS' in enrich_iface_types(ifaces).values()\n\n\t@staticmethod\n\tdef has_uefi() -> bool:\n\t\treturn 
os.path.isdir('/sys/firmware/efi')\n\n\t@staticmethod\n\tdef _graphics_devices() -> Dict[str, str]:\n\t\tcards: Dict[str, str] = {}\n\t\tfor line in SysCommand(\"lspci\"):\n\t\t\tif b' VGA ' in line or b' 3D ' in line:\n\t\t\t\t_, identifier = line.split(b': ', 1)\n\t\t\t\tcards[identifier.strip().decode('UTF-8')] = str(line)\n\t\treturn cards\n\n\t@staticmethod\n\tdef has_nvidia_graphics() -> bool:\n\t\treturn any('nvidia' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef has_amd_graphics() -> bool:\n\t\treturn any('amd' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef has_intel_graphics() -> bool:\n\t\treturn any('intel' in x.lower() for x in SysInfo._graphics_devices())\n\n\t@staticmethod\n\tdef cpu_vendor() -> Optional[CpuVendor]:\n\t\tif vendor := _sys_info.cpu_info.get('vendor_id'):\n\t\t\treturn CpuVendor.get_vendor(vendor)\n\t\treturn None\n\n\t@staticmethod\n\tdef cpu_model() -> Optional[str]:\n\t\treturn _sys_info.cpu_info.get('model name', None)\n\n\t@staticmethod\n\tdef sys_vendor() -> str:\n\t\twith open(f\"/sys/devices/virtual/dmi/id/sys_vendor\") as vendor:\n\t\t\treturn vendor.read().strip()\n\n\t@staticmethod\n\tdef product_name() -> str:\n\t\twith open(f\"/sys/devices/virtual/dmi/id/product_name\") as product:\n\t\t\treturn product.read().strip()\n\n\t@staticmethod\n\tdef mem_available() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemAvailable')\n\n\t@staticmethod\n\tdef mem_free() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemFree')\n\n\t@staticmethod\n\tdef mem_total() -> int:\n\t\treturn _sys_info.mem_info_by_key('MemTotal')\n\n\t@staticmethod\n\tdef virtualization() -> Optional[str]:\n\t\ttry:\n\t\t\treturn str(SysCommand(\"systemd-detect-virt\")).strip('\\r\\n')\n\t\texcept SysCallError as err:\n\t\t\tdebug(f\"Could not detect virtual system: {err}\")\n\n\t\treturn None\n\n\t@staticmethod\n\tdef is_vm() -> bool:\n\t\ttry:\n\t\t\tresult = SysCommand(\"systemd-detect-virt\")\n\t\t\treturn b\"none\" not in b\"\".join(result).lower()\n\t\texcept SysCallError as err:\n\t\t\tdebug(f\"System is not running in a VM: {err}\")\n\n\t\treturn False\n\n\t@staticmethod\n\tdef requires_sof_fw() -> bool:\n\t\treturn 'snd_sof' in _sys_info.loaded_modules\n\n\t@staticmethod\n\tdef requires_alsa_fw() -> bool:\n\t\tmodules = (\n\t\t\t'snd_asihpi',\n\t\t\t'snd_cs46xx',\n\t\t\t'snd_darla20',\n\t\t\t'snd_darla24',\n\t\t\t'snd_echo3g',\n\t\t\t'snd_emu10k1',\n\t\t\t'snd_gina20',\n\t\t\t'snd_gina24',\n\t\t\t'snd_hda_codec_ca0132',\n\t\t\t'snd_hdsp',\n\t\t\t'snd_indigo',\n\t\t\t'snd_indigodj',\n\t\t\t'snd_indigodjx',\n\t\t\t'snd_indigoio',\n\t\t\t'snd_indigoiox',\n\t\t\t'snd_layla20',\n\t\t\t'snd_layla24',\n\t\t\t'snd_mia',\n\t\t\t'snd_mixart',\n\t\t\t'snd_mona',\n\t\t\t'snd_pcxhr',\n\t\t\t'snd_vx_lib'\n\t\t)\n\n\t\tfor loaded_module in _sys_info.loaded_modules:\n\t\t\tif loaded_module in modules:\n\t\t\t\treturn True\n\n\t\treturn False\n", "path": "archinstall/lib/hardware.py"}]} | 3,296 | 220 |
gh_patches_debug_39439 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-474 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Development server not starting when GVZ API not reachable
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
When the [GVZ API](http://gvz.integreat-app.de/api/search/expect_empty_json) is reachable, but does not answer, the development server hangs during start, because there is no timeout defined for the api request [here](https://github.com/Integreat/cms-django/blob/f4f4c698575e975655b1333f630b5f0d6e4e034f/src/gvz_api/apps.py#L27).
### Steps to Reproduce
1. Execute `./dev-tools/run.sh` when GVZ API is reachable but does not answer (tested this morning)
2. Wait
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
The server should start and print a log message that the gvz api is not available
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
The server does not start
### Additional Information
<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/gvz_api/apps.py`
Content:
```
1 """
2 Configuration of GVZ API app
3 """
4 import logging
5 import json
6 import requests
7 from django.apps import AppConfig
8 from django.conf import settings
9
10 logger = logging.getLogger(__name__)
11
12
13 class GvzApiConfig(AppConfig):
14 """
15 GVZ API config inheriting the django AppConfig
16 """
17
18 name = "gvz_api"
19 api_available = False
20
21 def ready(self):
22 """
23 Checking if API is available
24 """
25 if settings.GVZ_API_ENABLED:
26 try:
27 response = requests.get(
28 f"{settings.GVZ_API_URL}/search/expect_empty_json"
29 )
30 json.loads(response.text)
31 except json.decoder.JSONDecodeError:
32 self.api_available = False
33 except requests.exceptions.RequestException:
34 self.api_available = False
35 else:
36 self.api_available = True
37 else:
38 self.api_available = False
39 if not self.api_available:
40 logger.info(
41 "GVZ API is not available. You won't be able to "
42 "automatically import coordinates and region aliases."
43 )
44 else:
45 self.api_available = True
46 logger.info("GVZ API is available.")
47
```
Path: `src/backend/settings.py`
Content:
```
1 """
2 Django settings for backend project.
3
4 For more information on this file, see :doc:`topics/settings`.
5 For the full list of settings and their values, see :doc:`ref/settings`.
6 """
7 import os
8
9 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
10 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
11
12
13 # Version number
14 VERSION = "0.0.14"
15
16 # Quick-start development settings - unsuitable for production
17 # See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
18
19 # SECURITY WARNING: keep the secret key used in production secret!
20 SECRET_KEY = "-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_"
21
22 # SECURITY WARNING: don't run with debug turned on in production!
23 DEBUG = True
24
25 ALLOWED_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0"]
26
27 # Needed for webauthn (this is a setting in case the application runs behind a proxy)
28 HOSTNAME = "localhost"
29 BASE_URL = "http://localhost:8000"
30
31 # Application definition
32
33 INSTALLED_APPS = [
34 "cms.apps.CmsConfig",
35 "gvz_api.apps.GvzApiConfig",
36 "django.contrib.admin",
37 "django.contrib.auth",
38 "django.contrib.contenttypes",
39 "django.contrib.messages",
40 "django.contrib.sessions",
41 "django.contrib.staticfiles",
42 "compressor",
43 "compressor_toolkit",
44 "corsheaders",
45 "widget_tweaks",
46 "easy_thumbnails",
47 "filer",
48 "mptt",
49 "rules.apps.AutodiscoverRulesConfig",
50 ]
51
52 MIDDLEWARE = [
53 "corsheaders.middleware.CorsMiddleware",
54 "django.middleware.security.SecurityMiddleware",
55 "django.contrib.sessions.middleware.SessionMiddleware",
56 "django.middleware.locale.LocaleMiddleware",
57 "django.middleware.common.CommonMiddleware",
58 "django.middleware.csrf.CsrfViewMiddleware",
59 "django.contrib.auth.middleware.AuthenticationMiddleware",
60 "django.contrib.messages.middleware.MessageMiddleware",
61 "django.middleware.clickjacking.XFrameOptionsMiddleware",
62 ]
63
64 ROOT_URLCONF = "backend.urls"
65 THUMBNAIL_HIGH_RESOLUTION = True
66
67 TEMPLATES = [
68 {
69 "BACKEND": "django.template.backends.django.DjangoTemplates",
70 "DIRS": [],
71 "APP_DIRS": True,
72 "OPTIONS": {
73 "context_processors": [
74 "django.template.context_processors.debug",
75 "django.template.context_processors.request",
76 "django.contrib.auth.context_processors.auth",
77 "django.contrib.messages.context_processors.messages",
78 "backend.context_processors.region_slug_processor",
79 ],
80 "debug": DEBUG,
81 },
82 },
83 ]
84
85 WSGI_APPLICATION = "backend.wsgi.application"
86
87
88 # Database
89 # https://docs.djangoproject.com/en/2.2/ref/settings/#databases
90
91 DATABASES = {
92 "default": {
93 "ENGINE": "django.db.backends.postgresql_psycopg2",
94 "NAME": "integreat",
95 "USER": "integreat",
96 "PASSWORD": "password",
97 "HOST": "localhost",
98 "PORT": "5432",
99 }
100 }
101
102 # Directory for initial database contents
103
104 FIXTURE_DIRS = (os.path.join(BASE_DIR, "cms/fixtures/"),)
105
106 # Authentication backends
107
108 AUTHENTICATION_BACKENDS = (
109 "rules.permissions.ObjectPermissionBackend",
110 "django.contrib.auth.backends.ModelBackend", # this is default
111 )
112
113
114 # Password validation
115 # https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
116
117 AUTH_PASSWORD_VALIDATORS = [
118 {
119 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
120 },
121 {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",},
122 {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",},
123 {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",},
124 ]
125
126
127 # Internationalization
128 # https://docs.djangoproject.com/en/2.2/topics/i18n/
129
130 LANGUAGES = (
131 ("en-us", "English"),
132 ("de-de", "Deutsch"),
133 )
134
135 LOCALE_PATHS = (os.path.join(BASE_DIR, "locale"),)
136
137 LANGUAGE_CODE = "de-de"
138
139 TIME_ZONE = "UTC"
140
141 USE_I18N = True
142
143 USE_L10N = True
144
145 USE_TZ = True
146
147
148 # Static files (CSS, JavaScript, Images)
149 # https://docs.djangoproject.com/en/2.2/howto/static-files/
150
151 STATICFILES_DIRS = [
152 os.path.join(BASE_DIR, "../node_modules"),
153 ]
154 STATIC_URL = "/static/"
155 STATIC_ROOT = os.path.join(BASE_DIR, "cms/static/")
156
157 # Login
158 LOGIN_URL = "/login"
159 LOGIN_REDIRECT_URL = "/"
160 LOGOUT_REDIRECT_URL = "/login"
161
162 # Miscellaneous
163 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
164 CSRF_FAILURE_VIEW = "cms.views.error_handler.csrf_failure"
165
166 MEDIA_URL = "/media/"
167 MEDIA_ROOT = os.path.join(BASE_DIR, "media")
168 FILER_CANONICAL_URL = "media/"
169
170 LOGGING = {
171 "version": 1,
172 "disable_existing_loggers": False,
173 "formatters": {
174 "default": {"format": "INTEGREAT CMS - %(levelname)s: %(message)s",},
175 "console": {
176 "format": "%(asctime)s INTEGREAT CMS - %(levelname)s: %(message)s",
177 "datefmt": "%b %d %H:%M:%S",
178 },
179 },
180 "handlers": {
181 "console": {"class": "logging.StreamHandler", "formatter": "console"},
182 "authlog": {
183 "level": "INFO",
184 "class": "logging.handlers.SysLogHandler",
185 "address": "/dev/log",
186 "facility": "auth",
187 "formatter": "default",
188 },
189 "syslog": {
190 "level": "INFO",
191 "class": "logging.handlers.SysLogHandler",
192 "address": "/dev/log",
193 "facility": "syslog",
194 "formatter": "default",
195 },
196 },
197 "loggers": {
198 "django": {"handlers": ["console"], "level": "WARN", "propagate": True,},
199 "api": {"handlers": ["console"], "level": "INFO", "propagate": True,},
200 "cms": {"handlers": ["console"], "level": "INFO", "propagate": True,},
201 "rules": {"handlers": ["console"], "level": "DEBUG", "propagate": True,},
202 "auth": {"handlers": ["console", "authlog", "syslog"], "level": "INFO",},
203 },
204 }
205
206 STATICFILES_FINDERS = (
207 "django.contrib.staticfiles.finders.FileSystemFinder",
208 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
209 "compressor.finders.CompressorFinder",
210 )
211
212 COMPRESS_CSS_FILTERS = [
213 "compressor.filters.css_default.CssAbsoluteFilter",
214 "compressor.filters.cssmin.CSSMinFilter",
215 "compressor.filters.template.TemplateFilter",
216 ]
217 COMPRESS_JS_FILTERS = [
218 "compressor.filters.jsmin.JSMinFilter",
219 ]
220 COMPRESS_PRECOMPILERS = (
221 ("module", "compressor_toolkit.precompilers.ES6Compiler"),
222 ("css", "compressor_toolkit.precompilers.SCSSCompiler"),
223 )
224 COMPRESS_ENABLED = False
225 COMPRESS_OFFLINE = True
226
227 # GVZ (Gemeindeverzeichnis) API URL
228 GVZ_API_URL = "http://gvz.integreat-app.de/api/"
229 GVZ_API_ENABLED = True
230
231 # Allow access to all domains by setting the following variable to TRUE
232 CORS_ORIGIN_ALLOW_ALL = True
233
234 # Extend default headers with development header to differenciate dev traffic in statistics
235 CORS_ALLOW_HEADERS = [
236 "accept",
237 "accept-encoding",
238 "authorization",
239 "content-type",
240 "dnt",
241 "origin",
242 "user-agent",
243 "x-csrftoken",
244 "x-requested-with",
245 "x-integreat-development",
246 ]
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/backend/settings.py b/src/backend/settings.py
--- a/src/backend/settings.py
+++ b/src/backend/settings.py
@@ -197,7 +197,9 @@
"loggers": {
"django": {"handlers": ["console"], "level": "WARN", "propagate": True,},
"api": {"handlers": ["console"], "level": "INFO", "propagate": True,},
+ "backend": {"handlers": ["console"], "level": "INFO", "propagate": True,},
"cms": {"handlers": ["console"], "level": "INFO", "propagate": True,},
+ "gvz_api": {"handlers": ["console"], "level": "INFO", "propagate": True,},
"rules": {"handlers": ["console"], "level": "DEBUG", "propagate": True,},
"auth": {"handlers": ["console", "authlog", "syslog"], "level": "INFO",},
},
@@ -225,7 +227,7 @@
COMPRESS_OFFLINE = True
# GVZ (Gemeindeverzeichnis) API URL
-GVZ_API_URL = "http://gvz.integreat-app.de/api/"
+GVZ_API_URL = "https://gvz.integreat-app.de/api/"
GVZ_API_ENABLED = True
# Allow access to all domains by setting the following variable to TRUE
diff --git a/src/gvz_api/apps.py b/src/gvz_api/apps.py
--- a/src/gvz_api/apps.py
+++ b/src/gvz_api/apps.py
@@ -1,6 +1,7 @@
"""
Configuration of GVZ API app
"""
+import sys
import logging
import json
import requests
@@ -22,25 +23,25 @@
"""
Checking if API is available
"""
- if settings.GVZ_API_ENABLED:
- try:
- response = requests.get(
- f"{settings.GVZ_API_URL}/search/expect_empty_json"
- )
- json.loads(response.text)
- except json.decoder.JSONDecodeError:
- self.api_available = False
- except requests.exceptions.RequestException:
- self.api_available = False
+ # Only check availability if current command is "runserver"
+ if sys.argv[1] == "runserver":
+ if settings.GVZ_API_ENABLED:
+ try:
+ response = requests.get(
+ f"{settings.GVZ_API_URL}/search/expect_empty_json", timeout=3
+ )
+ json.loads(response.text)
+ except (
+ json.decoder.JSONDecodeError,
+ requests.exceptions.RequestException,
+ requests.exceptions.Timeout,
+ ):
+ logger.info(
+ "GVZ API is not available. You won't be able to "
+ "automatically import coordinates and region aliases."
+ )
+ else:
+ self.api_available = True
+ logger.debug("GVZ API is available.")
else:
- self.api_available = True
- else:
- self.api_available = False
- if not self.api_available:
- logger.info(
- "GVZ API is not available. You won't be able to "
- "automatically import coordinates and region aliases."
- )
- else:
- self.api_available = True
- logger.info("GVZ API is available.")
+ logger.debug("GVZ API is not enabled.")
| {"golden_diff": "diff --git a/src/backend/settings.py b/src/backend/settings.py\n--- a/src/backend/settings.py\n+++ b/src/backend/settings.py\n@@ -197,7 +197,9 @@\n \"loggers\": {\n \"django\": {\"handlers\": [\"console\"], \"level\": \"WARN\", \"propagate\": True,},\n \"api\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n+ \"backend\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"cms\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n+ \"gvz_api\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"rules\": {\"handlers\": [\"console\"], \"level\": \"DEBUG\", \"propagate\": True,},\n \"auth\": {\"handlers\": [\"console\", \"authlog\", \"syslog\"], \"level\": \"INFO\",},\n },\n@@ -225,7 +227,7 @@\n COMPRESS_OFFLINE = True\n \n # GVZ (Gemeindeverzeichnis) API URL\n-GVZ_API_URL = \"http://gvz.integreat-app.de/api/\"\n+GVZ_API_URL = \"https://gvz.integreat-app.de/api/\"\n GVZ_API_ENABLED = True\n \n # Allow access to all domains by setting the following variable to TRUE\ndiff --git a/src/gvz_api/apps.py b/src/gvz_api/apps.py\n--- a/src/gvz_api/apps.py\n+++ b/src/gvz_api/apps.py\n@@ -1,6 +1,7 @@\n \"\"\"\n Configuration of GVZ API app\n \"\"\"\n+import sys\n import logging\n import json\n import requests\n@@ -22,25 +23,25 @@\n \"\"\"\n Checking if API is available\n \"\"\"\n- if settings.GVZ_API_ENABLED:\n- try:\n- response = requests.get(\n- f\"{settings.GVZ_API_URL}/search/expect_empty_json\"\n- )\n- json.loads(response.text)\n- except json.decoder.JSONDecodeError:\n- self.api_available = False\n- except requests.exceptions.RequestException:\n- self.api_available = False\n+ # Only check availability if current command is \"runserver\"\n+ if sys.argv[1] == \"runserver\":\n+ if settings.GVZ_API_ENABLED:\n+ try:\n+ response = requests.get(\n+ f\"{settings.GVZ_API_URL}/search/expect_empty_json\", timeout=3\n+ )\n+ json.loads(response.text)\n+ except (\n+ json.decoder.JSONDecodeError,\n+ requests.exceptions.RequestException,\n+ requests.exceptions.Timeout,\n+ ):\n+ logger.info(\n+ \"GVZ API is not available. You won't be able to \"\n+ \"automatically import coordinates and region aliases.\"\n+ )\n+ else:\n+ self.api_available = True\n+ logger.debug(\"GVZ API is available.\")\n else:\n- self.api_available = True\n- else:\n- self.api_available = False\n- if not self.api_available:\n- logger.info(\n- \"GVZ API is not available. You won't be able to \"\n- \"automatically import coordinates and region aliases.\"\n- )\n- else:\n- self.api_available = True\n- logger.info(\"GVZ API is available.\")\n+ logger.debug(\"GVZ API is not enabled.\")\n", "issue": "Development server not starting when GVZ API not reachable\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen the [GVZ API](http://gvz.integreat-app.de/api/search/expect_empty_json) is reachable, but does not answer, the development server hangs during start, because there is no timeout defined for the api request [here](https://github.com/Integreat/cms-django/blob/f4f4c698575e975655b1333f630b5f0d6e4e034f/src/gvz_api/apps.py#L27).\r\n\r\n### Steps to Reproduce\r\n\r\n1. Execute `./dev-tools/run.sh` when GVZ API is reachable but does not answer (tested this morning)\r\n2. Wait\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\nThe server should start and print a log message that the gvz api is not available\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe server does not start\r\n\r\n### Additional Information\r\n<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->\r\n\r\n\nDevelopment server not starting when GVZ API not reachable\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen the [GVZ API](http://gvz.integreat-app.de/api/search/expect_empty_json) is reachable, but does not answer, the development server hangs during start, because there is no timeout defined for the api request [here](https://github.com/Integreat/cms-django/blob/f4f4c698575e975655b1333f630b5f0d6e4e034f/src/gvz_api/apps.py#L27).\r\n\r\n### Steps to Reproduce\r\n\r\n1. Execute `./dev-tools/run.sh` when GVZ API is reachable but does not answer (tested this morning)\r\n2. Wait\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe server should start and print a log message that the gvz api is not available\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe server does not start\r\n\r\n### Additional Information\r\n<!-- Add any other context (e.g. logs, screenshots, etc.) about the problem here. -->\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nConfiguration of GVZ API app\n\"\"\"\nimport logging\nimport json\nimport requests\nfrom django.apps import AppConfig\nfrom django.conf import settings\n\nlogger = logging.getLogger(__name__)\n\n\nclass GvzApiConfig(AppConfig):\n \"\"\"\n GVZ API config inheriting the django AppConfig\n \"\"\"\n\n name = \"gvz_api\"\n api_available = False\n\n def ready(self):\n \"\"\"\n Checking if API is available\n \"\"\"\n if settings.GVZ_API_ENABLED:\n try:\n response = requests.get(\n f\"{settings.GVZ_API_URL}/search/expect_empty_json\"\n )\n json.loads(response.text)\n except json.decoder.JSONDecodeError:\n self.api_available = False\n except requests.exceptions.RequestException:\n self.api_available = False\n else:\n self.api_available = True\n else:\n self.api_available = False\n if not self.api_available:\n logger.info(\n \"GVZ API is not available. 
You won't be able to \"\n \"automatically import coordinates and region aliases.\"\n )\n else:\n self.api_available = True\n logger.info(\"GVZ API is available.\")\n", "path": "src/gvz_api/apps.py"}, {"content": "\"\"\"\nDjango settings for backend project.\n\nFor more information on this file, see :doc:`topics/settings`.\nFor the full list of settings and their values, see :doc:`ref/settings`.\n\"\"\"\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Version number\nVERSION = \"0.0.14\"\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = \"-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_\"\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = [\"localhost\", \"127.0.0.1\", \"0.0.0.0\"]\n\n# Needed for webauthn (this is a setting in case the application runs behind a proxy)\nHOSTNAME = \"localhost\"\nBASE_URL = \"http://localhost:8000\"\n\n# Application definition\n\nINSTALLED_APPS = [\n \"cms.apps.CmsConfig\",\n \"gvz_api.apps.GvzApiConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.messages\",\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"compressor\",\n \"compressor_toolkit\",\n \"corsheaders\",\n \"widget_tweaks\",\n \"easy_thumbnails\",\n \"filer\",\n \"mptt\",\n \"rules.apps.AutodiscoverRulesConfig\",\n]\n\nMIDDLEWARE = [\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"backend.urls\"\nTHUMBNAIL_HIGH_RESOLUTION = True\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"backend.context_processors.region_slug_processor\",\n ],\n \"debug\": DEBUG,\n },\n },\n]\n\nWSGI_APPLICATION = \"backend.wsgi.application\"\n\n\n# Database\n# https://docs.djangoproject.com/en/2.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": \"integreat\",\n \"USER\": \"integreat\",\n \"PASSWORD\": \"password\",\n \"HOST\": \"localhost\",\n \"PORT\": \"5432\",\n }\n}\n\n# Directory for initial database contents\n\nFIXTURE_DIRS = (os.path.join(BASE_DIR, \"cms/fixtures/\"),)\n\n# Authentication backends\n\nAUTHENTICATION_BACKENDS = (\n \"rules.permissions.ObjectPermissionBackend\",\n \"django.contrib.auth.backends.ModelBackend\", # this is default\n)\n\n\n# Password validation\n# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": 
\"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",},\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.2/topics/i18n/\n\nLANGUAGES = (\n (\"en-us\", \"English\"),\n (\"de-de\", \"Deutsch\"),\n)\n\nLOCALE_PATHS = (os.path.join(BASE_DIR, \"locale\"),)\n\nLANGUAGE_CODE = \"de-de\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.2/howto/static-files/\n\nSTATICFILES_DIRS = [\n os.path.join(BASE_DIR, \"../node_modules\"),\n]\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(BASE_DIR, \"cms/static/\")\n\n# Login\nLOGIN_URL = \"/login\"\nLOGIN_REDIRECT_URL = \"/\"\nLOGOUT_REDIRECT_URL = \"/login\"\n\n# Miscellaneous\nEMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\nCSRF_FAILURE_VIEW = \"cms.views.error_handler.csrf_failure\"\n\nMEDIA_URL = \"/media/\"\nMEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nFILER_CANONICAL_URL = \"media/\"\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\"format\": \"INTEGREAT CMS - %(levelname)s: %(message)s\",},\n \"console\": {\n \"format\": \"%(asctime)s INTEGREAT CMS - %(levelname)s: %(message)s\",\n \"datefmt\": \"%b %d %H:%M:%S\",\n },\n },\n \"handlers\": {\n \"console\": {\"class\": \"logging.StreamHandler\", \"formatter\": \"console\"},\n \"authlog\": {\n \"level\": \"INFO\",\n \"class\": \"logging.handlers.SysLogHandler\",\n \"address\": \"/dev/log\",\n \"facility\": \"auth\",\n \"formatter\": \"default\",\n },\n \"syslog\": {\n \"level\": \"INFO\",\n \"class\": \"logging.handlers.SysLogHandler\",\n \"address\": \"/dev/log\",\n \"facility\": \"syslog\",\n \"formatter\": \"default\",\n },\n },\n \"loggers\": {\n \"django\": {\"handlers\": [\"console\"], \"level\": \"WARN\", \"propagate\": True,},\n \"api\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"cms\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"rules\": {\"handlers\": [\"console\"], \"level\": \"DEBUG\", \"propagate\": True,},\n \"auth\": {\"handlers\": [\"console\", \"authlog\", \"syslog\"], \"level\": \"INFO\",},\n },\n}\n\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n)\n\nCOMPRESS_CSS_FILTERS = [\n \"compressor.filters.css_default.CssAbsoluteFilter\",\n \"compressor.filters.cssmin.CSSMinFilter\",\n \"compressor.filters.template.TemplateFilter\",\n]\nCOMPRESS_JS_FILTERS = [\n \"compressor.filters.jsmin.JSMinFilter\",\n]\nCOMPRESS_PRECOMPILERS = (\n (\"module\", \"compressor_toolkit.precompilers.ES6Compiler\"),\n (\"css\", \"compressor_toolkit.precompilers.SCSSCompiler\"),\n)\nCOMPRESS_ENABLED = False\nCOMPRESS_OFFLINE = True\n\n# GVZ (Gemeindeverzeichnis) API URL\nGVZ_API_URL = \"http://gvz.integreat-app.de/api/\"\nGVZ_API_ENABLED = True\n\n# Allow access to all domains by setting the following variable to TRUE\nCORS_ORIGIN_ALLOW_ALL = True\n\n# Extend default headers with development header to differenciate dev traffic in statistics\nCORS_ALLOW_HEADERS = [\n \"accept\",\n \"accept-encoding\",\n 
\"authorization\",\n \"content-type\",\n \"dnt\",\n \"origin\",\n \"user-agent\",\n \"x-csrftoken\",\n \"x-requested-with\",\n \"x-integreat-development\",\n]\n", "path": "src/backend/settings.py"}], "after_files": [{"content": "\"\"\"\nConfiguration of GVZ API app\n\"\"\"\nimport sys\nimport logging\nimport json\nimport requests\nfrom django.apps import AppConfig\nfrom django.conf import settings\n\nlogger = logging.getLogger(__name__)\n\n\nclass GvzApiConfig(AppConfig):\n \"\"\"\n GVZ API config inheriting the django AppConfig\n \"\"\"\n\n name = \"gvz_api\"\n api_available = False\n\n def ready(self):\n \"\"\"\n Checking if API is available\n \"\"\"\n # Only check availability if current command is \"runserver\"\n if sys.argv[1] == \"runserver\":\n if settings.GVZ_API_ENABLED:\n try:\n response = requests.get(\n f\"{settings.GVZ_API_URL}/search/expect_empty_json\", timeout=3\n )\n json.loads(response.text)\n except (\n json.decoder.JSONDecodeError,\n requests.exceptions.RequestException,\n requests.exceptions.Timeout,\n ):\n logger.info(\n \"GVZ API is not available. You won't be able to \"\n \"automatically import coordinates and region aliases.\"\n )\n else:\n self.api_available = True\n logger.debug(\"GVZ API is available.\")\n else:\n logger.debug(\"GVZ API is not enabled.\")\n", "path": "src/gvz_api/apps.py"}, {"content": "\"\"\"\nDjango settings for backend project.\n\nFor more information on this file, see :doc:`topics/settings`.\nFor the full list of settings and their values, see :doc:`ref/settings`.\n\"\"\"\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Version number\nVERSION = \"0.0.14\"\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = \"-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_\"\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = [\"localhost\", \"127.0.0.1\", \"0.0.0.0\"]\n\n# Needed for webauthn (this is a setting in case the application runs behind a proxy)\nHOSTNAME = \"localhost\"\nBASE_URL = \"http://localhost:8000\"\n\n# Application definition\n\nINSTALLED_APPS = [\n \"cms.apps.CmsConfig\",\n \"gvz_api.apps.GvzApiConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.messages\",\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"compressor\",\n \"compressor_toolkit\",\n \"corsheaders\",\n \"widget_tweaks\",\n \"easy_thumbnails\",\n \"filer\",\n \"mptt\",\n \"rules.apps.AutodiscoverRulesConfig\",\n]\n\nMIDDLEWARE = [\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"backend.urls\"\nTHUMBNAIL_HIGH_RESOLUTION = True\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n 
\"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"backend.context_processors.region_slug_processor\",\n ],\n \"debug\": DEBUG,\n },\n },\n]\n\nWSGI_APPLICATION = \"backend.wsgi.application\"\n\n\n# Database\n# https://docs.djangoproject.com/en/2.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": \"integreat\",\n \"USER\": \"integreat\",\n \"PASSWORD\": \"password\",\n \"HOST\": \"localhost\",\n \"PORT\": \"5432\",\n }\n}\n\n# Directory for initial database contents\n\nFIXTURE_DIRS = (os.path.join(BASE_DIR, \"cms/fixtures/\"),)\n\n# Authentication backends\n\nAUTHENTICATION_BACKENDS = (\n \"rules.permissions.ObjectPermissionBackend\",\n \"django.contrib.auth.backends.ModelBackend\", # this is default\n)\n\n\n# Password validation\n# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",},\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.2/topics/i18n/\n\nLANGUAGES = (\n (\"en-us\", \"English\"),\n (\"de-de\", \"Deutsch\"),\n)\n\nLOCALE_PATHS = (os.path.join(BASE_DIR, \"locale\"),)\n\nLANGUAGE_CODE = \"de-de\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.2/howto/static-files/\n\nSTATICFILES_DIRS = [\n os.path.join(BASE_DIR, \"../node_modules\"),\n]\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(BASE_DIR, \"cms/static/\")\n\n# Login\nLOGIN_URL = \"/login\"\nLOGIN_REDIRECT_URL = \"/\"\nLOGOUT_REDIRECT_URL = \"/login\"\n\n# Miscellaneous\nEMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\nCSRF_FAILURE_VIEW = \"cms.views.error_handler.csrf_failure\"\n\nMEDIA_URL = \"/media/\"\nMEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nFILER_CANONICAL_URL = \"media/\"\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\"format\": \"INTEGREAT CMS - %(levelname)s: %(message)s\",},\n \"console\": {\n \"format\": \"%(asctime)s INTEGREAT CMS - %(levelname)s: %(message)s\",\n \"datefmt\": \"%b %d %H:%M:%S\",\n },\n },\n \"handlers\": {\n \"console\": {\"class\": \"logging.StreamHandler\", \"formatter\": \"console\"},\n \"authlog\": {\n \"level\": \"INFO\",\n \"class\": \"logging.handlers.SysLogHandler\",\n \"address\": \"/dev/log\",\n \"facility\": \"auth\",\n \"formatter\": \"default\",\n },\n \"syslog\": {\n \"level\": \"INFO\",\n \"class\": \"logging.handlers.SysLogHandler\",\n \"address\": \"/dev/log\",\n \"facility\": \"syslog\",\n \"formatter\": \"default\",\n },\n },\n \"loggers\": {\n \"django\": {\"handlers\": [\"console\"], \"level\": \"WARN\", \"propagate\": True,},\n \"api\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"backend\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"cms\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"gvz_api\": {\"handlers\": 
[\"console\"], \"level\": \"INFO\", \"propagate\": True,},\n \"rules\": {\"handlers\": [\"console\"], \"level\": \"DEBUG\", \"propagate\": True,},\n \"auth\": {\"handlers\": [\"console\", \"authlog\", \"syslog\"], \"level\": \"INFO\",},\n },\n}\n\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n)\n\nCOMPRESS_CSS_FILTERS = [\n \"compressor.filters.css_default.CssAbsoluteFilter\",\n \"compressor.filters.cssmin.CSSMinFilter\",\n \"compressor.filters.template.TemplateFilter\",\n]\nCOMPRESS_JS_FILTERS = [\n \"compressor.filters.jsmin.JSMinFilter\",\n]\nCOMPRESS_PRECOMPILERS = (\n (\"module\", \"compressor_toolkit.precompilers.ES6Compiler\"),\n (\"css\", \"compressor_toolkit.precompilers.SCSSCompiler\"),\n)\nCOMPRESS_ENABLED = False\nCOMPRESS_OFFLINE = True\n\n# GVZ (Gemeindeverzeichnis) API URL\nGVZ_API_URL = \"https://gvz.integreat-app.de/api/\"\nGVZ_API_ENABLED = True\n\n# Allow access to all domains by setting the following variable to TRUE\nCORS_ORIGIN_ALLOW_ALL = True\n\n# Extend default headers with development header to differenciate dev traffic in statistics\nCORS_ALLOW_HEADERS = [\n \"accept\",\n \"accept-encoding\",\n \"authorization\",\n \"content-type\",\n \"dnt\",\n \"origin\",\n \"user-agent\",\n \"x-csrftoken\",\n \"x-requested-with\",\n \"x-integreat-development\",\n]\n", "path": "src/backend/settings.py"}]} | 3,521 | 749 |
gh_patches_debug_12356 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2514 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UX and frontend implementation for upgrades
### Notes
- [Current Figma design](https://www.figma.com/file/xHb5oIqye3fnXtb2heRH34/Styling?node-id=3804%3A28864&t=HyNupYmgZ9PqjEGr-0)
- [Issue with user flow](https://github.com/centerofci/mathesar/issues/227)
### Tasks
- Finalize the user flow
- Request Figma UX changes if needed
- Implement the frontend (create additional GitHub issues if needed)
### Feasibility
Watchtower, our docker image upgrade backend, doesn't report progress. The only progress reporting available to the frontend will be periodically calling some HTTP endpoint on the service container to check whether it is online or not.
The final UX should take this into account.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/urls.py`
Content:
```
1 from django.contrib.auth.views import LoginView
2 from django.urls import include, path, re_path
3 from rest_framework_nested import routers
4
5 from mathesar import views
6 from mathesar.api.db import viewsets as db_viewsets
7 from mathesar.api.ui import viewsets as ui_viewsets
8 from mathesar.users.password_reset import MathesarPasswordResetConfirmView
9
10 db_router = routers.DefaultRouter()
11 db_router.register(r'tables', db_viewsets.TableViewSet, basename='table')
12 db_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')
13 db_router.register(r'links', db_viewsets.LinkViewSet, basename='links')
14 db_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')
15 db_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')
16 db_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')
17
18 db_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
19 db_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')
20 db_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')
21 db_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')
22 db_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')
23
24 ui_router = routers.DefaultRouter()
25 ui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')
26 ui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')
27 ui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')
28 ui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')
29 ui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')
30
31 urlpatterns = [
32 path('api/db/v0/', include(db_router.urls)),
33 path('api/db/v0/', include(db_table_router.urls)),
34 path('api/ui/v0/', include(ui_router.urls)),
35 path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),
36 path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),
37 path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),
38 path('auth/', include('django.contrib.auth.urls')),
39 path('', views.home, name='home'),
40 path('profile/', views.profile, name='profile'),
41 path('administration/', views.admin_home, name='admin_home'),
42 path('administration/users/', views.admin_home, name='admin_users_home'),
43 path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
44 path('administration/general/', views.admin_home, name='admin_general'),
45 path('<db_name>/', views.schemas, name='schemas'),
46 re_path(
47 r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
48 views.schema_home,
49 name='schema_home'
50 ),
51 ]
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -41,7 +41,7 @@
path('administration/', views.admin_home, name='admin_home'),
path('administration/users/', views.admin_home, name='admin_users_home'),
path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
- path('administration/general/', views.admin_home, name='admin_general'),
+ path('administration/update/', views.admin_home, name='admin_update'),
path('<db_name>/', views.schemas, name='schemas'),
re_path(
r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
| {"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -41,7 +41,7 @@\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n- path('administration/general/', views.admin_home, name='admin_general'),\n+ path('administration/update/', views.admin_home, name='admin_update'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n", "issue": "UX and frontend implementation for upgrades\n### Notes\r\n- [Current Figma design](https://www.figma.com/file/xHb5oIqye3fnXtb2heRH34/Styling?node-id=3804%3A28864&t=HyNupYmgZ9PqjEGr-0)\r\n- [Issue with user flow](https://github.com/centerofci/mathesar/issues/227)\r\n\r\n### Tasks\r\n- Finalize the user flow\r\n- Request Figma UX changes if needed\r\n- Implement the frontend (create additional GitHub issues if needed)\r\n\r\n### Feasibility \r\nWatchtower, our docker image upgrade backend, doesn't report progress. The only progress reporting available to the frontend will be periodically calling some HTTP endpoint on the service container to check whether it is online or not.\r\n\r\nThe final UX should take this into account.\n", "before_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),\n 
path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('profile/', views.profile, name='profile'),\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/general/', views.admin_home, name='admin_general'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}], "after_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('profile/', views.profile, name='profile'),\n path('administration/', views.admin_home, name='admin_home'),\n path('administration/users/', views.admin_home, name='admin_users_home'),\n path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n path('administration/update/', views.admin_home, name='admin_update'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n 
views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}]} | 1,186 | 165 |
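The patch above only renames the admin route; the constraint called out under "Feasibility" (Watchtower reports no progress, so the client can only poll an HTTP endpoint until the service container answers again) is left to the frontend work. A minimal sketch of that polling loop, written in Python with the `requests` library purely for illustration; the `/api/ui/v0/version/` path, the interval, and the timeout are assumptions based on the `version` route registered in `urls.py`, not anything the diff adds:

```python
# Sketch only: poll a health/version endpoint until the upgraded service
# container answers again. Endpoint path and timing values are assumptions.
import time

import requests


def wait_until_back_online(base_url, path="/api/ui/v0/version/",
                           interval=5, timeout=600):
    """Return True once the service responds with HTTP 200, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(base_url + path, timeout=2).status_code == 200:
                return True
        except requests.RequestException:
            pass  # container still restarting; keep polling
        time.sleep(interval)
    return False
```

On the real page this would be a frontend fetch loop rather than Python; the point is only the shape: retry on connection errors, succeed on the first 200, give up after a deadline.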
gh_patches_debug_19852 | rasdani/github-patches | git_diff | open-mmlab__mmcv-256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Deadlock 'fix' introduced deadlock.
Since https://github.com/open-mmlab/mmcv/pull/252 was merged, my mmdetection code hangs after evaluation. After reverting the specific commit `git revert c203419f57c2e25ab4307420b9a3688f99e01dea`, my code runs again as expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcv/runner/hooks/logger/text.py`
Content:
```
1 # Copyright (c) Open-MMLab. All rights reserved.
2 import datetime
3 import os.path as osp
4 from collections import OrderedDict
5
6 import torch
7 import torch.distributed as dist
8
9 import mmcv
10 from ..hook import HOOKS
11 from .base import LoggerHook
12
13
14 @HOOKS.register_module
15 class TextLoggerHook(LoggerHook):
16
17 def __init__(self, interval=10, ignore_last=True, reset_flag=False):
18 super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag)
19 self.time_sec_tot = 0
20
21 def before_run(self, runner):
22 super(TextLoggerHook, self).before_run(runner)
23 self.start_iter = runner.iter
24 self.json_log_path = osp.join(runner.work_dir,
25 f'{runner.timestamp}.log.json')
26 if runner.meta is not None:
27 self._dump_log(runner.meta, runner)
28
29 def _get_max_memory(self, runner):
30 mem = torch.cuda.max_memory_allocated()
31 mem_mb = torch.tensor([mem / (1024 * 1024)],
32 dtype=torch.int,
33 device=torch.device('cuda'))
34 if runner.world_size > 1:
35 dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)
36 return mem_mb.item()
37
38 def _log_info(self, log_dict, runner):
39 if runner.mode == 'train':
40 log_str = f'Epoch [{log_dict["epoch"]}]' \
41 f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' \
42 f'lr: {log_dict["lr"]:.5f}, '
43 if 'time' in log_dict.keys():
44 self.time_sec_tot += (log_dict['time'] * self.interval)
45 time_sec_avg = self.time_sec_tot / (
46 runner.iter - self.start_iter + 1)
47 eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1)
48 eta_str = str(datetime.timedelta(seconds=int(eta_sec)))
49 log_str += f'eta: {eta_str}, '
50 log_str += f'time: {log_dict["time"]:.3f}, ' \
51 f'data_time: {log_dict["data_time"]:.3f}, '
52 # statistic memory
53 if torch.cuda.is_available():
54 log_str += f'memory: {log_dict["memory"]}, '
55 else:
56 log_str = 'Epoch({log_dict["mode"]}) ' \
57 f'[{log_dict["epoch"] - 1}][{log_dict["iter"]}]\t'
58 log_items = []
59 for name, val in log_dict.items():
60 # TODO: resolve this hack
61 # these items have been in log_str
62 if name in [
63 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time',
64 'memory', 'epoch'
65 ]:
66 continue
67 if isinstance(val, float):
68 val = f'{val:.4f}'
69 log_items.append(f'{name}: {val}')
70 log_str += ', '.join(log_items)
71 runner.logger.info(log_str)
72
73 def _dump_log(self, log_dict, runner):
74 # dump log in json format
75 json_log = OrderedDict()
76 for k, v in log_dict.items():
77 json_log[k] = self._round_float(v)
78 # only append log at last line
79 if runner.rank == 0:
80 with open(self.json_log_path, 'a+') as f:
81 mmcv.dump(json_log, f, file_format='json')
82 f.write('\n')
83
84 def _round_float(self, items):
85 if isinstance(items, list):
86 return [self._round_float(item) for item in items]
87 elif isinstance(items, float):
88 return round(items, 5)
89 else:
90 return items
91
92 def log(self, runner):
93 log_dict = OrderedDict()
94 # training mode if the output contains the key "time"
95 mode = 'train' if 'time' in runner.log_buffer.output else 'val'
96 log_dict['mode'] = mode
97 log_dict['epoch'] = runner.epoch + 1
98 log_dict['iter'] = runner.inner_iter + 1
99 # only record lr of the first param group
100 log_dict['lr'] = runner.current_lr()[0]
101 memory = None
102 if torch.cuda.is_available():
103 memory = self._get_max_memory(runner)
104 if mode == 'train':
105 log_dict['time'] = runner.log_buffer.output['time']
106 log_dict['data_time'] = runner.log_buffer.output['data_time']
107
108 # statistic memory
109 if memory is not None:
110 log_dict['memory'] = memory
111 for name, val in runner.log_buffer.output.items():
112 if name in ['time', 'data_time']:
113 continue
114 log_dict[name] = val
115
116 self._log_info(log_dict, runner)
117 self._dump_log(log_dict, runner)
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmcv/runner/hooks/logger/text.py b/mmcv/runner/hooks/logger/text.py
--- a/mmcv/runner/hooks/logger/text.py
+++ b/mmcv/runner/hooks/logger/text.py
@@ -98,16 +98,13 @@
log_dict['iter'] = runner.inner_iter + 1
# only record lr of the first param group
log_dict['lr'] = runner.current_lr()[0]
- memory = None
- if torch.cuda.is_available():
- memory = self._get_max_memory(runner)
if mode == 'train':
log_dict['time'] = runner.log_buffer.output['time']
log_dict['data_time'] = runner.log_buffer.output['data_time']
# statistic memory
- if memory is not None:
- log_dict['memory'] = memory
+ if torch.cuda.is_available():
+ log_dict['memory'] = self._get_max_memory(runner)
for name, val in runner.log_buffer.output.items():
if name in ['time', 'data_time']:
continue
| {"golden_diff": "diff --git a/mmcv/runner/hooks/logger/text.py b/mmcv/runner/hooks/logger/text.py\n--- a/mmcv/runner/hooks/logger/text.py\n+++ b/mmcv/runner/hooks/logger/text.py\n@@ -98,16 +98,13 @@\n log_dict['iter'] = runner.inner_iter + 1\n # only record lr of the first param group\n log_dict['lr'] = runner.current_lr()[0]\n- memory = None\n- if torch.cuda.is_available():\n- memory = self._get_max_memory(runner)\n if mode == 'train':\n log_dict['time'] = runner.log_buffer.output['time']\n log_dict['data_time'] = runner.log_buffer.output['data_time']\n \n # statistic memory\n- if memory is not None:\n- log_dict['memory'] = memory\n+ if torch.cuda.is_available():\n+ log_dict['memory'] = self._get_max_memory(runner)\n for name, val in runner.log_buffer.output.items():\n if name in ['time', 'data_time']:\n continue\n", "issue": "Bug: Deadlock 'fix' introduced deadlock.\nSince https://github.com/open-mmlab/mmcv/pull/252 is merged my mmdetection code hangs after evaluation. After reverting the specific commit `git revert c203419f57c2e25ab4307420b9a3688f99e01dea`, my code runs again as expected..\n", "before_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nimport datetime\nimport os.path as osp\nfrom collections import OrderedDict\n\nimport torch\nimport torch.distributed as dist\n\nimport mmcv\nfrom ..hook import HOOKS\nfrom .base import LoggerHook\n\n\[email protected]_module\nclass TextLoggerHook(LoggerHook):\n\n def __init__(self, interval=10, ignore_last=True, reset_flag=False):\n super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag)\n self.time_sec_tot = 0\n\n def before_run(self, runner):\n super(TextLoggerHook, self).before_run(runner)\n self.start_iter = runner.iter\n self.json_log_path = osp.join(runner.work_dir,\n f'{runner.timestamp}.log.json')\n if runner.meta is not None:\n self._dump_log(runner.meta, runner)\n\n def _get_max_memory(self, runner):\n mem = torch.cuda.max_memory_allocated()\n mem_mb = torch.tensor([mem / (1024 * 1024)],\n dtype=torch.int,\n device=torch.device('cuda'))\n if runner.world_size > 1:\n dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)\n return mem_mb.item()\n\n def _log_info(self, log_dict, runner):\n if runner.mode == 'train':\n log_str = f'Epoch [{log_dict[\"epoch\"]}]' \\\n f'[{log_dict[\"iter\"]}/{len(runner.data_loader)}]\\t' \\\n f'lr: {log_dict[\"lr\"]:.5f}, '\n if 'time' in log_dict.keys():\n self.time_sec_tot += (log_dict['time'] * self.interval)\n time_sec_avg = self.time_sec_tot / (\n runner.iter - self.start_iter + 1)\n eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1)\n eta_str = str(datetime.timedelta(seconds=int(eta_sec)))\n log_str += f'eta: {eta_str}, '\n log_str += f'time: {log_dict[\"time\"]:.3f}, ' \\\n f'data_time: {log_dict[\"data_time\"]:.3f}, '\n # statistic memory\n if torch.cuda.is_available():\n log_str += f'memory: {log_dict[\"memory\"]}, '\n else:\n log_str = 'Epoch({log_dict[\"mode\"]}) ' \\\n f'[{log_dict[\"epoch\"] - 1}][{log_dict[\"iter\"]}]\\t'\n log_items = []\n for name, val in log_dict.items():\n # TODO: resolve this hack\n # these items have been in log_str\n if name in [\n 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time',\n 'memory', 'epoch'\n ]:\n continue\n if isinstance(val, float):\n val = f'{val:.4f}'\n log_items.append(f'{name}: {val}')\n log_str += ', '.join(log_items)\n runner.logger.info(log_str)\n\n def _dump_log(self, log_dict, runner):\n # dump log in json format\n json_log = OrderedDict()\n for k, v in log_dict.items():\n json_log[k] = 
self._round_float(v)\n # only append log at last line\n if runner.rank == 0:\n with open(self.json_log_path, 'a+') as f:\n mmcv.dump(json_log, f, file_format='json')\n f.write('\\n')\n\n def _round_float(self, items):\n if isinstance(items, list):\n return [self._round_float(item) for item in items]\n elif isinstance(items, float):\n return round(items, 5)\n else:\n return items\n\n def log(self, runner):\n log_dict = OrderedDict()\n # training mode if the output contains the key \"time\"\n mode = 'train' if 'time' in runner.log_buffer.output else 'val'\n log_dict['mode'] = mode\n log_dict['epoch'] = runner.epoch + 1\n log_dict['iter'] = runner.inner_iter + 1\n # only record lr of the first param group\n log_dict['lr'] = runner.current_lr()[0]\n memory = None\n if torch.cuda.is_available():\n memory = self._get_max_memory(runner)\n if mode == 'train':\n log_dict['time'] = runner.log_buffer.output['time']\n log_dict['data_time'] = runner.log_buffer.output['data_time']\n\n # statistic memory\n if memory is not None:\n log_dict['memory'] = memory\n for name, val in runner.log_buffer.output.items():\n if name in ['time', 'data_time']:\n continue\n log_dict[name] = val\n\n self._log_info(log_dict, runner)\n self._dump_log(log_dict, runner)\n", "path": "mmcv/runner/hooks/logger/text.py"}], "after_files": [{"content": "# Copyright (c) Open-MMLab. All rights reserved.\nimport datetime\nimport os.path as osp\nfrom collections import OrderedDict\n\nimport torch\nimport torch.distributed as dist\n\nimport mmcv\nfrom ..hook import HOOKS\nfrom .base import LoggerHook\n\n\[email protected]_module\nclass TextLoggerHook(LoggerHook):\n\n def __init__(self, interval=10, ignore_last=True, reset_flag=False):\n super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag)\n self.time_sec_tot = 0\n\n def before_run(self, runner):\n super(TextLoggerHook, self).before_run(runner)\n self.start_iter = runner.iter\n self.json_log_path = osp.join(runner.work_dir,\n f'{runner.timestamp}.log.json')\n if runner.meta is not None:\n self._dump_log(runner.meta, runner)\n\n def _get_max_memory(self, runner):\n mem = torch.cuda.max_memory_allocated()\n mem_mb = torch.tensor([mem / (1024 * 1024)],\n dtype=torch.int,\n device=torch.device('cuda'))\n if runner.world_size > 1:\n dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)\n return mem_mb.item()\n\n def _log_info(self, log_dict, runner):\n if runner.mode == 'train':\n log_str = f'Epoch [{log_dict[\"epoch\"]}]' \\\n f'[{log_dict[\"iter\"]}/{len(runner.data_loader)}]\\t' \\\n f'lr: {log_dict[\"lr\"]:.5f}, '\n if 'time' in log_dict.keys():\n self.time_sec_tot += (log_dict['time'] * self.interval)\n time_sec_avg = self.time_sec_tot / (\n runner.iter - self.start_iter + 1)\n eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1)\n eta_str = str(datetime.timedelta(seconds=int(eta_sec)))\n log_str += f'eta: {eta_str}, '\n log_str += f'time: {log_dict[\"time\"]:.3f}, ' \\\n f'data_time: {log_dict[\"data_time\"]:.3f}, '\n # statistic memory\n if torch.cuda.is_available():\n log_str += f'memory: {log_dict[\"memory\"]}, '\n else:\n log_str = 'Epoch({log_dict[\"mode\"]}) ' \\\n f'[{log_dict[\"epoch\"] - 1}][{log_dict[\"iter\"]}]\\t'\n log_items = []\n for name, val in log_dict.items():\n # TODO: resolve this hack\n # these items have been in log_str\n if name in [\n 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time',\n 'memory', 'epoch'\n ]:\n continue\n if isinstance(val, float):\n val = f'{val:.4f}'\n log_items.append(f'{name}: {val}')\n log_str += ', '.join(log_items)\n 
runner.logger.info(log_str)\n\n def _dump_log(self, log_dict, runner):\n # dump log in json format\n json_log = OrderedDict()\n for k, v in log_dict.items():\n json_log[k] = self._round_float(v)\n # only append log at last line\n if runner.rank == 0:\n with open(self.json_log_path, 'a+') as f:\n mmcv.dump(json_log, f, file_format='json')\n f.write('\\n')\n\n def _round_float(self, items):\n if isinstance(items, list):\n return [self._round_float(item) for item in items]\n elif isinstance(items, float):\n return round(items, 5)\n else:\n return items\n\n def log(self, runner):\n log_dict = OrderedDict()\n # training mode if the output contains the key \"time\"\n mode = 'train' if 'time' in runner.log_buffer.output else 'val'\n log_dict['mode'] = mode\n log_dict['epoch'] = runner.epoch + 1\n log_dict['iter'] = runner.inner_iter + 1\n # only record lr of the first param group\n log_dict['lr'] = runner.current_lr()[0]\n if mode == 'train':\n log_dict['time'] = runner.log_buffer.output['time']\n log_dict['data_time'] = runner.log_buffer.output['data_time']\n\n # statistic memory\n if torch.cuda.is_available():\n log_dict['memory'] = self._get_max_memory(runner)\n for name, val in runner.log_buffer.output.items():\n if name in ['time', 'data_time']:\n continue\n log_dict[name] = val\n\n self._log_info(log_dict, runner)\n self._dump_log(log_dict, runner)\n", "path": "mmcv/runner/hooks/logger/text.py"}]} | 1,664 | 237 |
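One plausible reading of why the patch above cures the hang: `_get_max_memory()` ends in `dist.reduce()`, a collective call that blocks until every rank in the process group reaches it, and the pre-patch code ran it for `val`-mode logging as well, a path that after distributed evaluation is apparently only taken on rank 0, so rank 0 waits forever. Moving the call back under the `train` branch keeps the collective on a path all ranks execute. A small self-contained Python sketch of that rule of thumb; the function and dict names here are made up, and this is not a drop-in replacement for the hook:

```python
# Sketch: collective ops such as dist.reduce must be reached by every rank,
# so keep them on code paths that all ranks are guaranteed to execute.
import torch
import torch.distributed as dist


def gather_peak_memory_mb():
    """Reduce each rank's peak CUDA memory (MB) onto rank 0 and return it."""
    mem_mb = torch.tensor([int(torch.cuda.max_memory_allocated() / (1024 * 1024))],
                          dtype=torch.int, device="cuda")
    if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1:
        dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)  # blocks until all ranks call it
    return mem_mb.item()


def log_stats(mode):
    stats = {}
    if mode == "train" and torch.cuda.is_available():
        # Only the training path runs on every rank each logging interval,
        # so it is the safe place for the collective call.
        stats["memory"] = gather_peak_memory_mb()
    return stats
```

The guard on `dist.is_initialized()` and world size mirrors the `runner.world_size > 1` check in the hook; the design point is simply never to let a collective sit on a rank-dependent branch.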
gh_patches_debug_15503 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-16157 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ERROR: Unable to extract video data
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.03**
### Before submitting an *issue* make sure you have:
- [X] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [X] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [X] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'http://www.fxnetworks.com/video/1199474243732']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.04.03
[debug] Python version 2.7.13 (CPython) - Linux-4.4.114-42-default-x86_64-with-SuSE-42.3-x86_64
[debug] exe versions: ffmpeg 3.4.1, ffprobe 3.4.1, rtmpdump 2.4
[debug] Proxy map: {}
[FXNetworks] 1199474243732: Downloading webpage
ERROR: Unable to extract video data; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/home/user/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 785, in extract_info
ie_result = ie.extract(url)
File "/home/user/bin/youtube-dl/youtube_dl/extractor/common.py", line 440, in extract
ie_result = self._real_extract(url)
File "/home/user/bin/youtube-dl/youtube_dl/extractor/fxnetworks.py", line 44, in _real_extract
r'(<a.+?rel="http://link\.theplatform\.com/s/.+?</a>)', webpage, 'video data'))
File "/home/user/bin/youtube-dl/youtube_dl/extractor/common.py", line 808, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract video data; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/fxnetworks.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .adobepass import AdobePassIE
5 from ..utils import (
6 extract_attributes,
7 int_or_none,
8 parse_age_limit,
9 smuggle_url,
10 update_url_query,
11 )
12
13
14 class FXNetworksIE(AdobePassIE):
15 _VALID_URL = r'https?://(?:www\.)?(?:fxnetworks|simpsonsworld)\.com/video/(?P<id>\d+)'
16 _TESTS = [{
17 'url': 'http://www.fxnetworks.com/video/1032565827847',
18 'md5': '8d99b97b4aa7a202f55b6ed47ea7e703',
19 'info_dict': {
20 'id': 'dRzwHC_MMqIv',
21 'ext': 'mp4',
22 'title': 'First Look: Better Things - Season 2',
23 'description': 'Because real life is like a fart. Watch this FIRST LOOK to see what inspired the new season of Better Things.',
24 'age_limit': 14,
25 'uploader': 'NEWA-FNG-FX',
26 'upload_date': '20170825',
27 'timestamp': 1503686274,
28 'episode_number': 0,
29 'season_number': 2,
30 'series': 'Better Things',
31 },
32 'add_ie': ['ThePlatform'],
33 }, {
34 'url': 'http://www.simpsonsworld.com/video/716094019682',
35 'only_matching': True,
36 }]
37
38 def _real_extract(self, url):
39 video_id = self._match_id(url)
40 webpage = self._download_webpage(url, video_id)
41 if 'The content you are trying to access is not available in your region.' in webpage:
42 self.raise_geo_restricted()
43 video_data = extract_attributes(self._search_regex(
44 r'(<a.+?rel="http://link\.theplatform\.com/s/.+?</a>)', webpage, 'video data'))
45 player_type = self._search_regex(r'playerType\s*=\s*[\'"]([^\'"]+)', webpage, 'player type', default=None)
46 release_url = video_data['rel']
47 title = video_data['data-title']
48 rating = video_data.get('data-rating')
49 query = {
50 'mbr': 'true',
51 }
52 if player_type == 'movies':
53 query.update({
54 'manifest': 'm3u',
55 })
56 else:
57 query.update({
58 'switch': 'http',
59 })
60 if video_data.get('data-req-auth') == '1':
61 resource = self._get_mvpd_resource(
62 video_data['data-channel'], title,
63 video_data.get('data-guid'), rating)
64 query['auth'] = self._extract_mvpd_auth(url, video_id, 'fx', resource)
65
66 return {
67 '_type': 'url_transparent',
68 'id': video_id,
69 'title': title,
70 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),
71 'series': video_data.get('data-show-title'),
72 'episode_number': int_or_none(video_data.get('data-episode')),
73 'season_number': int_or_none(video_data.get('data-season')),
74 'thumbnail': video_data.get('data-large-thumb'),
75 'age_limit': parse_age_limit(rating),
76 'ie_key': 'ThePlatform',
77 }
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/fxnetworks.py b/youtube_dl/extractor/fxnetworks.py
--- a/youtube_dl/extractor/fxnetworks.py
+++ b/youtube_dl/extractor/fxnetworks.py
@@ -41,7 +41,7 @@
if 'The content you are trying to access is not available in your region.' in webpage:
self.raise_geo_restricted()
video_data = extract_attributes(self._search_regex(
- r'(<a.+?rel="http://link\.theplatform\.com/s/.+?</a>)', webpage, 'video data'))
+ r'(<a.+?rel="https?://link\.theplatform\.com/s/.+?</a>)', webpage, 'video data'))
player_type = self._search_regex(r'playerType\s*=\s*[\'"]([^\'"]+)', webpage, 'player type', default=None)
release_url = video_data['rel']
title = video_data['data-title']
| {"golden_diff": "diff --git a/youtube_dl/extractor/fxnetworks.py b/youtube_dl/extractor/fxnetworks.py\n--- a/youtube_dl/extractor/fxnetworks.py\n+++ b/youtube_dl/extractor/fxnetworks.py\n@@ -41,7 +41,7 @@\n if 'The content you are trying to access is not available in your region.' in webpage:\n self.raise_geo_restricted()\n video_data = extract_attributes(self._search_regex(\n- r'(<a.+?rel=\"http://link\\.theplatform\\.com/s/.+?</a>)', webpage, 'video data'))\n+ r'(<a.+?rel=\"https?://link\\.theplatform\\.com/s/.+?</a>)', webpage, 'video data'))\n player_type = self._search_regex(r'playerType\\s*=\\s*[\\'\"]([^\\'\"]+)', webpage, 'player type', default=None)\n release_url = video_data['rel']\n title = video_data['data-title']\n", "issue": "ERROR: Unable to extract video data\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [X] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.03**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [X] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [X] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [X] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'http://www.fxnetworks.com/video/1199474243732']\r\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.04.03\r\n[debug] Python version 2.7.13 (CPython) - Linux-4.4.114-42-default-x86_64-with-SuSE-42.3-x86_64\r\n[debug] exe versions: ffmpeg 3.4.1, ffprobe 3.4.1, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[FXNetworks] 1199474243732: Downloading webpage\r\nERROR: Unable to extract video data; please report this issue on https://yt-dl.org/bug . 
Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/home/user/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 785, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/user/bin/youtube-dl/youtube_dl/extractor/common.py\", line 440, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/user/bin/youtube-dl/youtube_dl/extractor/fxnetworks.py\", line 44, in _real_extract\r\n r'(<a.+?rel=\"http://link\\.theplatform\\.com/s/.+?</a>)', webpage, 'video data'))\r\n File \"/home/user/bin/youtube-dl/youtube_dl/extractor/common.py\", line 808, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\r\nRegexNotFoundError: Unable to extract video data; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://www.youtube.com/watch?v=BaW_jenozKc\r\n- Single video: https://youtu.be/BaW_jenozKc\r\n- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nExplanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.\r\nIf work on your *issue* requires account credentials please provide them or explain how one can obtain them.\r\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .adobepass import AdobePassIE\nfrom ..utils import (\n extract_attributes,\n int_or_none,\n parse_age_limit,\n smuggle_url,\n update_url_query,\n)\n\n\nclass FXNetworksIE(AdobePassIE):\n _VALID_URL = r'https?://(?:www\\.)?(?:fxnetworks|simpsonsworld)\\.com/video/(?P<id>\\d+)'\n _TESTS = [{\n 'url': 'http://www.fxnetworks.com/video/1032565827847',\n 'md5': '8d99b97b4aa7a202f55b6ed47ea7e703',\n 'info_dict': {\n 'id': 'dRzwHC_MMqIv',\n 'ext': 'mp4',\n 'title': 'First Look: Better Things - Season 2',\n 'description': 'Because real life is like a fart. Watch this FIRST LOOK to see what inspired the new season of Better Things.',\n 'age_limit': 14,\n 'uploader': 'NEWA-FNG-FX',\n 'upload_date': '20170825',\n 'timestamp': 1503686274,\n 'episode_number': 0,\n 'season_number': 2,\n 'series': 'Better Things',\n },\n 'add_ie': ['ThePlatform'],\n }, {\n 'url': 'http://www.simpsonsworld.com/video/716094019682',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n if 'The content you are trying to access is not available in your region.' 
in webpage:\n self.raise_geo_restricted()\n video_data = extract_attributes(self._search_regex(\n r'(<a.+?rel=\"http://link\\.theplatform\\.com/s/.+?</a>)', webpage, 'video data'))\n player_type = self._search_regex(r'playerType\\s*=\\s*[\\'\"]([^\\'\"]+)', webpage, 'player type', default=None)\n release_url = video_data['rel']\n title = video_data['data-title']\n rating = video_data.get('data-rating')\n query = {\n 'mbr': 'true',\n }\n if player_type == 'movies':\n query.update({\n 'manifest': 'm3u',\n })\n else:\n query.update({\n 'switch': 'http',\n })\n if video_data.get('data-req-auth') == '1':\n resource = self._get_mvpd_resource(\n video_data['data-channel'], title,\n video_data.get('data-guid'), rating)\n query['auth'] = self._extract_mvpd_auth(url, video_id, 'fx', resource)\n\n return {\n '_type': 'url_transparent',\n 'id': video_id,\n 'title': title,\n 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),\n 'series': video_data.get('data-show-title'),\n 'episode_number': int_or_none(video_data.get('data-episode')),\n 'season_number': int_or_none(video_data.get('data-season')),\n 'thumbnail': video_data.get('data-large-thumb'),\n 'age_limit': parse_age_limit(rating),\n 'ie_key': 'ThePlatform',\n }\n", "path": "youtube_dl/extractor/fxnetworks.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .adobepass import AdobePassIE\nfrom ..utils import (\n extract_attributes,\n int_or_none,\n parse_age_limit,\n smuggle_url,\n update_url_query,\n)\n\n\nclass FXNetworksIE(AdobePassIE):\n _VALID_URL = r'https?://(?:www\\.)?(?:fxnetworks|simpsonsworld)\\.com/video/(?P<id>\\d+)'\n _TESTS = [{\n 'url': 'http://www.fxnetworks.com/video/1032565827847',\n 'md5': '8d99b97b4aa7a202f55b6ed47ea7e703',\n 'info_dict': {\n 'id': 'dRzwHC_MMqIv',\n 'ext': 'mp4',\n 'title': 'First Look: Better Things - Season 2',\n 'description': 'Because real life is like a fart. Watch this FIRST LOOK to see what inspired the new season of Better Things.',\n 'age_limit': 14,\n 'uploader': 'NEWA-FNG-FX',\n 'upload_date': '20170825',\n 'timestamp': 1503686274,\n 'episode_number': 0,\n 'season_number': 2,\n 'series': 'Better Things',\n },\n 'add_ie': ['ThePlatform'],\n }, {\n 'url': 'http://www.simpsonsworld.com/video/716094019682',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n if 'The content you are trying to access is not available in your region.' 
in webpage:\n self.raise_geo_restricted()\n video_data = extract_attributes(self._search_regex(\n r'(<a.+?rel=\"https?://link\\.theplatform\\.com/s/.+?</a>)', webpage, 'video data'))\n player_type = self._search_regex(r'playerType\\s*=\\s*[\\'\"]([^\\'\"]+)', webpage, 'player type', default=None)\n release_url = video_data['rel']\n title = video_data['data-title']\n rating = video_data.get('data-rating')\n query = {\n 'mbr': 'true',\n }\n if player_type == 'movies':\n query.update({\n 'manifest': 'm3u',\n })\n else:\n query.update({\n 'switch': 'http',\n })\n if video_data.get('data-req-auth') == '1':\n resource = self._get_mvpd_resource(\n video_data['data-channel'], title,\n video_data.get('data-guid'), rating)\n query['auth'] = self._extract_mvpd_auth(url, video_id, 'fx', resource)\n\n return {\n '_type': 'url_transparent',\n 'id': video_id,\n 'title': title,\n 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),\n 'series': video_data.get('data-show-title'),\n 'episode_number': int_or_none(video_data.get('data-episode')),\n 'season_number': int_or_none(video_data.get('data-season')),\n 'thumbnail': video_data.get('data-large-thumb'),\n 'age_limit': parse_age_limit(rating),\n 'ie_key': 'ThePlatform',\n }\n", "path": "youtube_dl/extractor/fxnetworks.py"}]} | 2,551 | 220 |
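The fix above is effectively a one-character regex change, `http://` to `https?://`, in the pattern that locates the video anchor tag. A standalone Python check of what that buys, using made-up markup in the same general shape rather than real page HTML:

```python
# Sketch: the old pattern silently stops matching once the page links to
# theplatform.com over https, which is exactly the "Unable to extract
# video data" symptom in the report above.
import re

OLD = r'(<a.+?rel="http://link\.theplatform\.com/s/.+?</a>)'
NEW = r'(<a.+?rel="https?://link\.theplatform\.com/s/.+?</a>)'

# Illustrative markup only, not copied from fxnetworks.com.
sample = '<a data-title="Demo" rel="https://link.theplatform.com/s/fx/abc123">watch</a>'

print(bool(re.search(OLD, sample)))  # False -> RegexNotFoundError in the extractor
print(bool(re.search(NEW, sample)))  # True  -> extraction proceeds
```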
gh_patches_debug_4706 | rasdani/github-patches | git_diff | sherlock-project__sherlock-2109 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SHERLOCK// ERROR "string indices must be integers, not 'str' "
How can I fix this error that comes up when I try to use sherlock? Honestly, I don't know how to solve it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sherlock/sites.py`
Content:
```
1 """Sherlock Sites Information Module
2
3 This module supports storing information about websites.
4 This is the raw data that will be used to search for usernames.
5 """
6 import json
7 import requests
8 import secrets
9
10 class SiteInformation:
11 def __init__(self, name, url_home, url_username_format, username_claimed,
12 information, is_nsfw, username_unclaimed=secrets.token_urlsafe(10)):
13 """Create Site Information Object.
14
15 Contains information about a specific website.
16
17 Keyword Arguments:
18 self -- This object.
19 name -- String which identifies site.
20 url_home -- String containing URL for home of site.
21 url_username_format -- String containing URL for Username format
22 on site.
23 NOTE: The string should contain the
24 token "{}" where the username should
25 be substituted. For example, a string
26 of "https://somesite.com/users/{}"
27 indicates that the individual
28 usernames would show up under the
29 "https://somesite.com/users/" area of
30 the website.
31 username_claimed -- String containing username which is known
32 to be claimed on website.
33 username_unclaimed -- String containing username which is known
34 to be unclaimed on website.
35 information -- Dictionary containing all known information
36 about website.
37 NOTE: Custom information about how to
38 actually detect the existence of the
39 username will be included in this
40 dictionary. This information will
41 be needed by the detection method,
42 but it is only recorded in this
43 object for future use.
44 is_nsfw -- Boolean indicating if site is Not Safe For Work.
45
46 Return Value:
47 Nothing.
48 """
49
50 self.name = name
51 self.url_home = url_home
52 self.url_username_format = url_username_format
53
54 self.username_claimed = username_claimed
55 self.username_unclaimed = secrets.token_urlsafe(32)
56 self.information = information
57 self.is_nsfw = is_nsfw
58
59 return
60
61 def __str__(self):
62 """Convert Object To String.
63
64 Keyword Arguments:
65 self -- This object.
66
67 Return Value:
68 Nicely formatted string to get information about this object.
69 """
70
71 return f"{self.name} ({self.url_home})"
72
73
74 class SitesInformation:
75 def __init__(self, data_file_path=None):
76 """Create Sites Information Object.
77
78 Contains information about all supported websites.
79
80 Keyword Arguments:
81 self -- This object.
82 data_file_path -- String which indicates path to data file.
83 The file name must end in ".json".
84
85 There are 3 possible formats:
86 * Absolute File Format
87 For example, "c:/stuff/data.json".
88 * Relative File Format
89 The current working directory is used
90 as the context.
91 For example, "data.json".
92 * URL Format
93 For example,
94 "https://example.com/data.json", or
95 "http://example.com/data.json".
96
97 An exception will be thrown if the path
98 to the data file is not in the expected
99 format, or if there was any problem loading
100 the file.
101
102 If this option is not specified, then a
103 default site list will be used.
104
105 Return Value:
106 Nothing.
107 """
108
109 if not data_file_path:
110 # The default data file is the live data.json which is in the GitHub repo. The reason why we are using
111 # this instead of the local one is so that the user has the most up-to-date data. This prevents
112 # users from creating issue about false positives which has already been fixed or having outdated data
113 data_file_path = "https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json"
114
115 # Ensure that specified data file has correct extension.
116 if not data_file_path.lower().endswith(".json"):
117 raise FileNotFoundError(f"Incorrect JSON file extension for data file '{data_file_path}'.")
118
119 # if "http://" == data_file_path[:7].lower() or "https://" == data_file_path[:8].lower():
120 if data_file_path.lower().startswith("http"):
121 # Reference is to a URL.
122 try:
123 response = requests.get(url=data_file_path)
124 except Exception as error:
125 raise FileNotFoundError(
126 f"Problem while attempting to access data file URL '{data_file_path}': {error}"
127 )
128
129 if response.status_code != 200:
130 raise FileNotFoundError(f"Bad response while accessing "
131 f"data file URL '{data_file_path}'."
132 )
133 try:
134 site_data = response.json()
135 except Exception as error:
136 raise ValueError(
137 f"Problem parsing json contents at '{data_file_path}': {error}."
138 )
139
140 else:
141 # Reference is to a file.
142 try:
143 with open(data_file_path, "r", encoding="utf-8") as file:
144 try:
145 site_data = json.load(file)
146 except Exception as error:
147 raise ValueError(
148 f"Problem parsing json contents at '{data_file_path}': {error}."
149 )
150
151 except FileNotFoundError:
152 raise FileNotFoundError(f"Problem while attempting to access "
153 f"data file '{data_file_path}'."
154 )
155
156 try:
157 site_data.pop('$schema')
158 except:
159 pass
160
161 self.sites = {}
162
163 # Add all site information from the json file to internal site list.
164 for site_name in site_data:
165 try:
166
167 self.sites[site_name] = \
168 SiteInformation(site_name,
169 site_data[site_name]["urlMain"],
170 site_data[site_name]["url"],
171 site_data[site_name]["username_claimed"],
172 site_data[site_name],
173 site_data[site_name].get("isNSFW",False)
174
175 )
176 except KeyError as error:
177 raise ValueError(
178 f"Problem parsing json contents at '{data_file_path}': Missing attribute {error}."
179 )
180
181 return
182
183 def remove_nsfw_sites(self):
184 """
185 Remove NSFW sites from the sites, if isNSFW flag is true for site
186
187 Keyword Arguments:
188 self -- This object.
189
190 Return Value:
191 None
192 """
193 sites = {}
194 for site in self.sites:
195 if self.sites[site].is_nsfw:
196 continue
197 sites[site] = self.sites[site]
198 self.sites = sites
199
200 def site_name_list(self):
201 """Get Site Name List.
202
203 Keyword Arguments:
204 self -- This object.
205
206 Return Value:
207 List of strings containing names of sites.
208 """
209
210 return sorted([site.name for site in self], key=str.lower)
211
212 def __iter__(self):
213 """Iterator For Object.
214
215 Keyword Arguments:
216 self -- This object.
217
218 Return Value:
219 Iterator for sites object.
220 """
221
222 for site_name in self.sites:
223 yield self.sites[site_name]
224
225 def __len__(self):
226 """Length For Object.
227
228 Keyword Arguments:
229 self -- This object.
230
231 Return Value:
232 Length of sites object.
233 """
234 return len(self.sites)
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sherlock/sites.py b/sherlock/sites.py
--- a/sherlock/sites.py
+++ b/sherlock/sites.py
@@ -177,6 +177,8 @@
raise ValueError(
f"Problem parsing json contents at '{data_file_path}': Missing attribute {error}."
)
+ except TypeError as error:
+ print(f"Encountered TypeError parsing json contents for target '{site_name}' at {data_file_path}\nSkipping target.\n")
return
| {"golden_diff": "diff --git a/sherlock/sites.py b/sherlock/sites.py\n--- a/sherlock/sites.py\n+++ b/sherlock/sites.py\n@@ -177,6 +177,8 @@\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': Missing attribute {error}.\"\n )\n+ except TypeError as error:\n+ print(f\"Encountered TypeError parsing json contents for target '{site_name}' at {data_file_path}\\nSkipping target.\\n\")\n \n return\n", "issue": "SHERLOCK// ERROR \"string indices must be integers, not 'str' \"\ncomo puedo solucionar este error que me sale al querer usar sherlock, no se como solucionarlo la verdad\n", "before_files": [{"content": "\"\"\"Sherlock Sites Information Module\n\nThis module supports storing information about websites.\nThis is the raw data that will be used to search for usernames.\n\"\"\"\nimport json\nimport requests\nimport secrets\n\nclass SiteInformation:\n def __init__(self, name, url_home, url_username_format, username_claimed,\n information, is_nsfw, username_unclaimed=secrets.token_urlsafe(10)):\n \"\"\"Create Site Information Object.\n\n Contains information about a specific website.\n\n Keyword Arguments:\n self -- This object.\n name -- String which identifies site.\n url_home -- String containing URL for home of site.\n url_username_format -- String containing URL for Username format\n on site.\n NOTE: The string should contain the\n token \"{}\" where the username should\n be substituted. For example, a string\n of \"https://somesite.com/users/{}\"\n indicates that the individual\n usernames would show up under the\n \"https://somesite.com/users/\" area of\n the website.\n username_claimed -- String containing username which is known\n to be claimed on website.\n username_unclaimed -- String containing username which is known\n to be unclaimed on website.\n information -- Dictionary containing all known information\n about website.\n NOTE: Custom information about how to\n actually detect the existence of the\n username will be included in this\n dictionary. 
This information will\n be needed by the detection method,\n but it is only recorded in this\n object for future use.\n is_nsfw -- Boolean indicating if site is Not Safe For Work.\n\n Return Value:\n Nothing.\n \"\"\"\n\n self.name = name\n self.url_home = url_home\n self.url_username_format = url_username_format\n\n self.username_claimed = username_claimed\n self.username_unclaimed = secrets.token_urlsafe(32)\n self.information = information\n self.is_nsfw = is_nsfw\n\n return\n\n def __str__(self):\n \"\"\"Convert Object To String.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Nicely formatted string to get information about this object.\n \"\"\"\n \n return f\"{self.name} ({self.url_home})\"\n\n\nclass SitesInformation:\n def __init__(self, data_file_path=None):\n \"\"\"Create Sites Information Object.\n\n Contains information about all supported websites.\n\n Keyword Arguments:\n self -- This object.\n data_file_path -- String which indicates path to data file.\n The file name must end in \".json\".\n\n There are 3 possible formats:\n * Absolute File Format\n For example, \"c:/stuff/data.json\".\n * Relative File Format\n The current working directory is used\n as the context.\n For example, \"data.json\".\n * URL Format\n For example,\n \"https://example.com/data.json\", or\n \"http://example.com/data.json\".\n\n An exception will be thrown if the path\n to the data file is not in the expected\n format, or if there was any problem loading\n the file.\n\n If this option is not specified, then a\n default site list will be used.\n\n Return Value:\n Nothing.\n \"\"\"\n\n if not data_file_path:\n # The default data file is the live data.json which is in the GitHub repo. The reason why we are using\n # this instead of the local one is so that the user has the most up-to-date data. 
This prevents\n # users from creating issue about false positives which has already been fixed or having outdated data\n data_file_path = \"https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json\"\n\n # Ensure that specified data file has correct extension.\n if not data_file_path.lower().endswith(\".json\"):\n raise FileNotFoundError(f\"Incorrect JSON file extension for data file '{data_file_path}'.\")\n\n # if \"http://\" == data_file_path[:7].lower() or \"https://\" == data_file_path[:8].lower():\n if data_file_path.lower().startswith(\"http\"):\n # Reference is to a URL.\n try:\n response = requests.get(url=data_file_path)\n except Exception as error:\n raise FileNotFoundError(\n f\"Problem while attempting to access data file URL '{data_file_path}': {error}\"\n )\n\n if response.status_code != 200:\n raise FileNotFoundError(f\"Bad response while accessing \"\n f\"data file URL '{data_file_path}'.\"\n )\n try:\n site_data = response.json()\n except Exception as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': {error}.\"\n )\n\n else:\n # Reference is to a file.\n try:\n with open(data_file_path, \"r\", encoding=\"utf-8\") as file:\n try:\n site_data = json.load(file)\n except Exception as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': {error}.\"\n )\n\n except FileNotFoundError:\n raise FileNotFoundError(f\"Problem while attempting to access \"\n f\"data file '{data_file_path}'.\"\n )\n \n try:\n site_data.pop('$schema')\n except:\n pass\n\n self.sites = {}\n\n # Add all site information from the json file to internal site list.\n for site_name in site_data:\n try:\n\n self.sites[site_name] = \\\n SiteInformation(site_name,\n site_data[site_name][\"urlMain\"],\n site_data[site_name][\"url\"],\n site_data[site_name][\"username_claimed\"],\n site_data[site_name],\n site_data[site_name].get(\"isNSFW\",False)\n\n )\n except KeyError as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': Missing attribute {error}.\"\n )\n\n return\n\n def remove_nsfw_sites(self):\n \"\"\"\n Remove NSFW sites from the sites, if isNSFW flag is true for site\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n None\n \"\"\"\n sites = {}\n for site in self.sites:\n if self.sites[site].is_nsfw:\n continue\n sites[site] = self.sites[site] \n self.sites = sites\n\n def site_name_list(self):\n \"\"\"Get Site Name List.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n List of strings containing names of sites.\n \"\"\"\n\n return sorted([site.name for site in self], key=str.lower)\n\n def __iter__(self):\n \"\"\"Iterator For Object.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Iterator for sites object.\n \"\"\"\n\n for site_name in self.sites:\n yield self.sites[site_name]\n\n def __len__(self):\n \"\"\"Length For Object.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Length of sites object.\n \"\"\"\n return len(self.sites)\n", "path": "sherlock/sites.py"}], "after_files": [{"content": "\"\"\"Sherlock Sites Information Module\n\nThis module supports storing information about websites.\nThis is the raw data that will be used to search for usernames.\n\"\"\"\nimport json\nimport requests\nimport secrets\n\nclass SiteInformation:\n def __init__(self, name, url_home, url_username_format, username_claimed,\n information, is_nsfw, username_unclaimed=secrets.token_urlsafe(10)):\n \"\"\"Create Site 
Information Object.\n\n Contains information about a specific website.\n\n Keyword Arguments:\n self -- This object.\n name -- String which identifies site.\n url_home -- String containing URL for home of site.\n url_username_format -- String containing URL for Username format\n on site.\n NOTE: The string should contain the\n token \"{}\" where the username should\n be substituted. For example, a string\n of \"https://somesite.com/users/{}\"\n indicates that the individual\n usernames would show up under the\n \"https://somesite.com/users/\" area of\n the website.\n username_claimed -- String containing username which is known\n to be claimed on website.\n username_unclaimed -- String containing username which is known\n to be unclaimed on website.\n information -- Dictionary containing all known information\n about website.\n NOTE: Custom information about how to\n actually detect the existence of the\n username will be included in this\n dictionary. This information will\n be needed by the detection method,\n but it is only recorded in this\n object for future use.\n is_nsfw -- Boolean indicating if site is Not Safe For Work.\n\n Return Value:\n Nothing.\n \"\"\"\n\n self.name = name\n self.url_home = url_home\n self.url_username_format = url_username_format\n\n self.username_claimed = username_claimed\n self.username_unclaimed = secrets.token_urlsafe(32)\n self.information = information\n self.is_nsfw = is_nsfw\n\n return\n\n def __str__(self):\n \"\"\"Convert Object To String.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Nicely formatted string to get information about this object.\n \"\"\"\n \n return f\"{self.name} ({self.url_home})\"\n\n\nclass SitesInformation:\n def __init__(self, data_file_path=None):\n \"\"\"Create Sites Information Object.\n\n Contains information about all supported websites.\n\n Keyword Arguments:\n self -- This object.\n data_file_path -- String which indicates path to data file.\n The file name must end in \".json\".\n\n There are 3 possible formats:\n * Absolute File Format\n For example, \"c:/stuff/data.json\".\n * Relative File Format\n The current working directory is used\n as the context.\n For example, \"data.json\".\n * URL Format\n For example,\n \"https://example.com/data.json\", or\n \"http://example.com/data.json\".\n\n An exception will be thrown if the path\n to the data file is not in the expected\n format, or if there was any problem loading\n the file.\n\n If this option is not specified, then a\n default site list will be used.\n\n Return Value:\n Nothing.\n \"\"\"\n\n if not data_file_path:\n # The default data file is the live data.json which is in the GitHub repo. The reason why we are using\n # this instead of the local one is so that the user has the most up-to-date data. 
This prevents\n # users from creating issue about false positives which has already been fixed or having outdated data\n data_file_path = \"https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json\"\n\n # Ensure that specified data file has correct extension.\n if not data_file_path.lower().endswith(\".json\"):\n raise FileNotFoundError(f\"Incorrect JSON file extension for data file '{data_file_path}'.\")\n\n # if \"http://\" == data_file_path[:7].lower() or \"https://\" == data_file_path[:8].lower():\n if data_file_path.lower().startswith(\"http\"):\n # Reference is to a URL.\n try:\n response = requests.get(url=data_file_path)\n except Exception as error:\n raise FileNotFoundError(\n f\"Problem while attempting to access data file URL '{data_file_path}': {error}\"\n )\n\n if response.status_code != 200:\n raise FileNotFoundError(f\"Bad response while accessing \"\n f\"data file URL '{data_file_path}'.\"\n )\n try:\n site_data = response.json()\n except Exception as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': {error}.\"\n )\n\n else:\n # Reference is to a file.\n try:\n with open(data_file_path, \"r\", encoding=\"utf-8\") as file:\n try:\n site_data = json.load(file)\n except Exception as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': {error}.\"\n )\n\n except FileNotFoundError:\n raise FileNotFoundError(f\"Problem while attempting to access \"\n f\"data file '{data_file_path}'.\"\n )\n \n try:\n site_data.pop('$schema')\n except:\n pass\n\n self.sites = {}\n\n # Add all site information from the json file to internal site list.\n for site_name in site_data:\n try:\n\n self.sites[site_name] = \\\n SiteInformation(site_name,\n site_data[site_name][\"urlMain\"],\n site_data[site_name][\"url\"],\n site_data[site_name][\"username_claimed\"],\n site_data[site_name],\n site_data[site_name].get(\"isNSFW\",False)\n\n )\n except KeyError as error:\n raise ValueError(\n f\"Problem parsing json contents at '{data_file_path}': Missing attribute {error}.\"\n )\n except TypeError as error:\n print(f\"Encountered TypeError parsing json contents for target '{site_name}' at {data_file_path}\\nSkipping target.\\n\")\n\n return\n\n def remove_nsfw_sites(self):\n \"\"\"\n Remove NSFW sites from the sites, if isNSFW flag is true for site\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n None\n \"\"\"\n sites = {}\n for site in self.sites:\n if self.sites[site].is_nsfw:\n continue\n sites[site] = self.sites[site] \n self.sites = sites\n\n def site_name_list(self):\n \"\"\"Get Site Name List.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n List of strings containing names of sites.\n \"\"\"\n\n return sorted([site.name for site in self], key=str.lower)\n\n def __iter__(self):\n \"\"\"Iterator For Object.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Iterator for sites object.\n \"\"\"\n\n for site_name in self.sites:\n yield self.sites[site_name]\n\n def __len__(self):\n \"\"\"Length For Object.\n\n Keyword Arguments:\n self -- This object.\n\n Return Value:\n Length of sites object.\n \"\"\"\n return len(self.sites)\n", "path": "sherlock/sites.py"}]} | 2,462 | 114 |
gh_patches_debug_28686 | rasdani/github-patches | git_diff | fossasia__open-event-server-4019
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Method to insert session-speaker relationship missing
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
**Current behavior:**
<!-- Describe how the bug manifests. -->
**Expected behavior:**
The session must be updated with the related speakers, and the data must be inserted into the speakers-session table.
**Steps to reproduce:**
<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->
**Related code:**
```
insert any relevant code here else remove this section
```
**Other information:**
<!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. -->
**System information:**
--- END ISSUE ---
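Note: the behaviour the issue asks for is a standard many-to-many link-table insert. The sketch below is a self-contained SQLAlchemy illustration of that pattern only; the class and table names are assumptions for the example and are not taken from the project code that follows.
```python
# Illustrative only: a minimal SQLAlchemy sketch of a session<->speaker
# many-to-many insert. Names here are assumptions, not the project's models.
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import Session as DbSession, declarative_base, relationship

Base = declarative_base()

# Link table recording which speaker is attached to which session.
speakers_sessions = Table(
    "speakers_sessions",
    Base.metadata,
    Column("speaker_id", ForeignKey("speaker.id"), primary_key=True),
    Column("session_id", ForeignKey("session.id"), primary_key=True),
)


class Speaker(Base):
    __tablename__ = "speaker"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class TalkSession(Base):
    __tablename__ = "session"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    # Appending to this collection is what writes rows into the link table.
    speakers = relationship("Speaker", secondary=speakers_sessions, backref="sessions")


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with DbSession(engine) as db:
    talk = TalkSession(title="Opening Keynote")
    talk.speakers.append(Speaker(name="Ada Lovelace"))
    db.add(talk)
    db.commit()
    # One (speaker_id, session_id) row now exists in the link table.
    print(db.execute(speakers_sessions.select()).all())
```
In the project itself the append would presumably happen in the API layer that creates the speaker, right after the new row exists, so the speakers-session table is populated in the same request.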
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/speakers.py`
Content:
```
1 from marshmallow_jsonapi import fields
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
4
5 from app.api.helpers.utilities import dasherize
6 from app.api.helpers.permissions import jwt_required, current_identity
7 from app.models import db
8 from app.models.speaker import Speaker
9 from app.models.session import Session
10 from app.models.user import User
11 from app.models.event import Event
12 from app.api.helpers.db import safe_query
13 from app.api.bootstrap import api
14
15
16 class SpeakerSchema(Schema):
17 """
18 Speaker Schema based on Speaker Model
19 """
20 class Meta:
21 """
22 Meta class for speaker schema
23 """
24 type_ = 'speaker'
25 self_view = 'v1.speaker_detail'
26 self_view_kwargs = {'id': '<id>'}
27 inflect = dasherize
28
29 id = fields.Str(dump_only=True)
30 name = fields.Str(required=True)
31 email = fields.Str(required=True)
32 photo_url = fields.Url(allow_none=True)
33 thumbnail_image_url = fields.Url(allow_none=True)
34 small_image_url = fields.Url(allow_none=True)
35 icon_image_url = fields.Url(allow_none=True)
36 short_biography = fields.Str(allow_none=True)
37 long_biography = fields.Str(allow_none=True)
38 speaking_experience = fields.Str(allow_none=True)
39 mobile = fields.Str(allow_none=True)
40 website = fields.Url(allow_none=True)
41 twitter = fields.Url(allow_none=True)
42 facebook = fields.Url(allow_none=True)
43 github = fields.Url(allow_none=True)
44 linkedin = fields.Url(allow_none=True)
45 organisation = fields.Str(allow_none=True)
46 is_featured = fields.Boolean(default=False)
47 position = fields.Str(allow_none=True)
48 country = fields.Str(allow_none=True)
49 city = fields.Str(allow_none=True)
50 gender = fields.Str(allow_none=True)
51 heard_from = fields.Str(allow_none=True)
52 sponsorship_required = fields.Str(allow_none=True)
53 event = Relationship(attribute='event',
54 self_view='v1.speaker_event',
55 self_view_kwargs={'id': '<id>'},
56 related_view='v1.event_detail',
57 related_view_kwargs={'speaker_id': '<id>'},
58 schema='EventSchema',
59 type_='event')
60 user = Relationship(attribute='user',
61 self_view='v1.speaker_user',
62 self_view_kwargs={'id': '<id>'},
63 related_view='v1.user_detail',
64 related_view_kwargs={'speaker_id': '<id>'},
65 schema='UserSchema',
66 type_='user')
67 sessions = Relationship(attribute='sessions',
68 self_view='v1.speaker_session',
69 self_view_kwargs={'id': '<id>'},
70 related_view='v1.session_list',
71 related_view_kwargs={'speaker_id': '<id>'},
72 schema='SessionSchema',
73 type_='session')
74
75
76 class SpeakerList(ResourceList):
77 """
78 List and create speakers
79 """
80 def query(self, view_kwargs):
81 """
82 query method for speakers list class
83 :param view_kwargs:
84 :return:
85 """
86 query_ = self.session.query(Speaker)
87 if view_kwargs.get('event_id'):
88 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
89 query_ = query_.join(Event).filter(Event.id == event.id)
90 elif view_kwargs.get('event_identifier'):
91 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
92 query_ = query_.join(Event).filter(Event.id == event.id)
93 if view_kwargs.get('user_id'):
94 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
95 query_ = query_.join(User).filter(User.id == user.id)
96 if view_kwargs.get('session_id'):
97 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
98 # session-speaker :: many-to-many relationship
99 query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))
100 return query_
101
102 def before_post(self, args, kwargs, data):
103 """
104 method to add user_id to view_kwargs before post
105 :param args:
106 :param kwargs:
107 :param data:
108 :return:
109 """
110 kwargs['user_id'] = current_identity.id
111
112 def before_create_object(self, data, view_kwargs):
113 """
114 before create object method for speaker list class
115 :param data:
116 :param view_kwargs:
117 :return:
118 """
119 if view_kwargs.get('event_id'):
120 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
121 data['event_id'] = event.id
122 elif view_kwargs.get('event_identifier'):
123 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
124 data['event_id'] = event.id
125 data['user_id'] = current_identity.id
126
127 view_kwargs = True
128 decorators = (api.has_permission('accessible_role_based_events'),)
129 schema = SpeakerSchema
130 data_layer = {'session': db.session,
131 'model': Speaker,
132 'methods': {
133 'query': query,
134 'before_create_object': before_create_object,
135 'before_post': before_post
136 }}
137
138
139 class SpeakerDetail(ResourceDetail):
140 """
141 Speakers Detail by id
142 """
143 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
144 fetch_as="event_id", model=Speaker, check=lambda a: a.get('id') is not None),)
145 schema = SpeakerSchema
146 data_layer = {'session': db.session,
147 'model': Speaker}
148
149
150 class SpeakerRelationship(ResourceRelationship):
151 """
152 Speaker Relationship class
153 """
154 decorators = (jwt_required,)
155 schema = SpeakerSchema
156 data_layer = {'session': db.session,
157 'model': Speaker}
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/speakers.py b/app/api/speakers.py
--- a/app/api/speakers.py
+++ b/app/api/speakers.py
@@ -116,6 +116,9 @@
:param view_kwargs:
:return:
"""
+ if view_kwargs.get('session_id'):
+ session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
+ data['event_id'] = session.event_id
if view_kwargs.get('event_id'):
event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
data['event_id'] = event.id
@@ -124,6 +127,12 @@
data['event_id'] = event.id
data['user_id'] = current_identity.id
+ def after_create_object(self, obj, data, view_kwargs):
+ if view_kwargs.get('session_id'):
+ session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
+ session.speakers.append(obj)
+ self.session.commit()
+
view_kwargs = True
decorators = (api.has_permission('accessible_role_based_events'),)
schema = SpeakerSchema
@@ -132,7 +141,8 @@
'methods': {
'query': query,
'before_create_object': before_create_object,
- 'before_post': before_post
+ 'before_post': before_post,
+ 'after_create_object': after_create_object
}}
| {"golden_diff": "diff --git a/app/api/speakers.py b/app/api/speakers.py\n--- a/app/api/speakers.py\n+++ b/app/api/speakers.py\n@@ -116,6 +116,9 @@\n :param view_kwargs:\n :return:\n \"\"\"\n+ if view_kwargs.get('session_id'):\n+ session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n+ data['event_id'] = session.event_id\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n data['event_id'] = event.id\n@@ -124,6 +127,12 @@\n data['event_id'] = event.id\n data['user_id'] = current_identity.id\n \n+ def after_create_object(self, obj, data, view_kwargs):\n+ if view_kwargs.get('session_id'):\n+ session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n+ session.speakers.append(obj)\n+ self.session.commit()\n+\n view_kwargs = True\n decorators = (api.has_permission('accessible_role_based_events'),)\n schema = SpeakerSchema\n@@ -132,7 +141,8 @@\n 'methods': {\n 'query': query,\n 'before_create_object': before_create_object,\n- 'before_post': before_post\n+ 'before_post': before_post,\n+ 'after_create_object': after_create_object\n }}\n", "issue": "Method to insert session-speaker relationship missing\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\n**Current behavior:**\r\n<!-- Describe how the bug manifests. -->\r\n\r\n**Expected behavior:**\r\nSession must be updated with related speakers and data inserted in speakers-session table. \r\n\r\n**Steps to reproduce:**\r\n<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->\r\n\r\n**Related code:**\r\n\r\n```\r\ninsert any relevant code here else remove this section\r\n```\r\n\r\n**Other information:**\r\n<!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. 
-->\r\n\r\n**System information:** \r\n\r\n\r\n\n", "before_files": [{"content": "from marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required, current_identity\nfrom app.models import db\nfrom app.models.speaker import Speaker\nfrom app.models.session import Session\nfrom app.models.user import User\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.bootstrap import api\n\n\nclass SpeakerSchema(Schema):\n \"\"\"\n Speaker Schema based on Speaker Model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for speaker schema\n \"\"\"\n type_ = 'speaker'\n self_view = 'v1.speaker_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n email = fields.Str(required=True)\n photo_url = fields.Url(allow_none=True)\n thumbnail_image_url = fields.Url(allow_none=True)\n small_image_url = fields.Url(allow_none=True)\n icon_image_url = fields.Url(allow_none=True)\n short_biography = fields.Str(allow_none=True)\n long_biography = fields.Str(allow_none=True)\n speaking_experience = fields.Str(allow_none=True)\n mobile = fields.Str(allow_none=True)\n website = fields.Url(allow_none=True)\n twitter = fields.Url(allow_none=True)\n facebook = fields.Url(allow_none=True)\n github = fields.Url(allow_none=True)\n linkedin = fields.Url(allow_none=True)\n organisation = fields.Str(allow_none=True)\n is_featured = fields.Boolean(default=False)\n position = fields.Str(allow_none=True)\n country = fields.Str(allow_none=True)\n city = fields.Str(allow_none=True)\n gender = fields.Str(allow_none=True)\n heard_from = fields.Str(allow_none=True)\n sponsorship_required = fields.Str(allow_none=True)\n event = Relationship(attribute='event',\n self_view='v1.speaker_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='EventSchema',\n type_='event')\n user = Relationship(attribute='user',\n self_view='v1.speaker_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='UserSchema',\n type_='user')\n sessions = Relationship(attribute='sessions',\n self_view='v1.speaker_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='SessionSchema',\n type_='session')\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n query_ = query_.join(Event).filter(Event.id == event.id)\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker 
:: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n return query_\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n kwargs['user_id'] = current_identity.id\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n before create object method for speaker list class\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n data['event_id'] = event.id\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n data['event_id'] = event.id\n data['user_id'] = current_identity.id\n\n view_kwargs = True\n decorators = (api.has_permission('accessible_role_based_events'),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n 'before_create_object': before_create_object,\n 'before_post': before_post\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker, check=lambda a: a.get('id') is not None),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationship(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n", "path": "app/api/speakers.py"}], "after_files": [{"content": "from marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required, current_identity\nfrom app.models import db\nfrom app.models.speaker import Speaker\nfrom app.models.session import Session\nfrom app.models.user import User\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.bootstrap import api\n\n\nclass SpeakerSchema(Schema):\n \"\"\"\n Speaker Schema based on Speaker Model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for speaker schema\n \"\"\"\n type_ = 'speaker'\n self_view = 'v1.speaker_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n email = fields.Str(required=True)\n photo_url = fields.Url(allow_none=True)\n thumbnail_image_url = fields.Url(allow_none=True)\n small_image_url = fields.Url(allow_none=True)\n icon_image_url = fields.Url(allow_none=True)\n short_biography = fields.Str(allow_none=True)\n long_biography = fields.Str(allow_none=True)\n speaking_experience = fields.Str(allow_none=True)\n mobile = fields.Str(allow_none=True)\n website = fields.Url(allow_none=True)\n twitter = fields.Url(allow_none=True)\n facebook = fields.Url(allow_none=True)\n github = fields.Url(allow_none=True)\n linkedin = fields.Url(allow_none=True)\n organisation = fields.Str(allow_none=True)\n is_featured = fields.Boolean(default=False)\n position = fields.Str(allow_none=True)\n country = fields.Str(allow_none=True)\n city = fields.Str(allow_none=True)\n gender = 
fields.Str(allow_none=True)\n heard_from = fields.Str(allow_none=True)\n sponsorship_required = fields.Str(allow_none=True)\n event = Relationship(attribute='event',\n self_view='v1.speaker_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='EventSchema',\n type_='event')\n user = Relationship(attribute='user',\n self_view='v1.speaker_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='UserSchema',\n type_='user')\n sessions = Relationship(attribute='sessions',\n self_view='v1.speaker_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='SessionSchema',\n type_='session')\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n query_ = query_.join(Event).filter(Event.id == event.id)\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n return query_\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n kwargs['user_id'] = current_identity.id\n\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n before create object method for speaker list class\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n data['event_id'] = session.event_id\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n data['event_id'] = event.id\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n data['event_id'] = event.id\n data['user_id'] = current_identity.id\n\n def after_create_object(self, obj, data, view_kwargs):\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n session.speakers.append(obj)\n self.session.commit()\n\n view_kwargs = True\n decorators = (api.has_permission('accessible_role_based_events'),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n 'before_create_object': before_create_object,\n 'before_post': before_post,\n 'after_create_object': after_create_object\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", 
model=Speaker, check=lambda a: a.get('id') is not None),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationship(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n", "path": "app/api/speakers.py"}]} | 2,111 | 340 |
gh_patches_debug_30038 | rasdani/github-patches | git_diff | frappe__frappe-21064
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Proposal] Add Parameters to Webhook Request URL
We use webhooks to connect erpnext to other apps and systems.
Most REST APIs contain some IDs in their URL. Even Frappe does (`/api/resource/{{doctype}}/{{docname}}`).
AFAIK, there is no way to dynamically set the webhook request URL from the contents of the document.
I propose rendering the webhook URL as a template, much the same as the JSON body already is. This would make it possible to call endpoints like
`http://example.com/api/{{doc.some_field}}`
--- END ISSUE ---
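Note: the proposal amounts to passing the stored request URL through the same Jinja rendering that the JSON body already receives. A minimal stand-alone sketch of the idea, using plain `jinja2` in place of Frappe's `frappe.render_template` and an invented document value:
```python
# Minimal stand-alone sketch of the proposed URL templating. Plain jinja2 is
# used here instead of frappe.render_template, and the document is made up.
from jinja2 import Template

doc = {"doctype": "Sales Order", "some_field": "SO-0042"}  # hypothetical document

request_url = "http://example.com/api/{{ doc.some_field }}"  # as stored on the Webhook
rendered_url = Template(request_url).render(doc=doc)

print(rendered_url)  # -> http://example.com/api/SO-0042
```
Inside Frappe the equivalent call would presumably be `frappe.render_template(webhook.request_url, get_context(doc))`, mirroring how `webhook_json` is rendered in the file below.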
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/integrations/doctype/webhook/webhook.py`
Content:
```
1 # Copyright (c) 2017, Frappe Technologies and contributors
2 # License: MIT. See LICENSE
3
4 import base64
5 import hashlib
6 import hmac
7 import json
8 from time import sleep
9 from urllib.parse import urlparse
10
11 import requests
12
13 import frappe
14 from frappe import _
15 from frappe.model.document import Document
16 from frappe.utils.jinja import validate_template
17 from frappe.utils.safe_exec import get_safe_globals
18
19 WEBHOOK_SECRET_HEADER = "X-Frappe-Webhook-Signature"
20
21
22 class Webhook(Document):
23 def validate(self):
24 self.validate_docevent()
25 self.validate_condition()
26 self.validate_request_url()
27 self.validate_request_body()
28 self.validate_repeating_fields()
29 self.preview_document = None
30
31 def on_update(self):
32 frappe.cache().delete_value("webhooks")
33
34 def validate_docevent(self):
35 if self.webhook_doctype:
36 is_submittable = frappe.get_value("DocType", self.webhook_doctype, "is_submittable")
37 if not is_submittable and self.webhook_docevent in [
38 "on_submit",
39 "on_cancel",
40 "on_update_after_submit",
41 ]:
42 frappe.throw(_("DocType must be Submittable for the selected Doc Event"))
43
44 def validate_condition(self):
45 temp_doc = frappe.new_doc(self.webhook_doctype)
46 if self.condition:
47 try:
48 frappe.safe_eval(self.condition, eval_locals=get_context(temp_doc))
49 except Exception as e:
50 frappe.throw(_("Invalid Condition: {}").format(e))
51
52 def validate_request_url(self):
53 try:
54 request_url = urlparse(self.request_url).netloc
55 if not request_url:
56 raise frappe.ValidationError
57 except Exception as e:
58 frappe.throw(_("Check Request URL"), exc=e)
59
60 def validate_request_body(self):
61 if self.request_structure:
62 if self.request_structure == "Form URL-Encoded":
63 self.webhook_json = None
64 elif self.request_structure == "JSON":
65 validate_template(self.webhook_json)
66 self.webhook_data = []
67
68 def validate_repeating_fields(self):
69 """Error when Same Field is entered multiple times in webhook_data"""
70 webhook_data = []
71 for entry in self.webhook_data:
72 webhook_data.append(entry.fieldname)
73
74 if len(webhook_data) != len(set(webhook_data)):
75 frappe.throw(_("Same Field is entered more than once"))
76
77 @frappe.whitelist()
78 def generate_preview(self):
79 # This function doesn't need to do anything specific as virtual fields
80 # get evaluated automatically.
81 pass
82
83 @property
84 def meets_condition(self):
85 if not self.condition:
86 return _("Yes")
87
88 if not (self.preview_document and self.webhook_doctype):
89 return _("Select a document to check if it meets conditions.")
90
91 try:
92 doc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)
93 met_condition = frappe.safe_eval(self.condition, eval_locals=get_context(doc))
94 except Exception as e:
95 return _("Failed to evaluate conditions: {}").format(e)
96 return _("Yes") if met_condition else _("No")
97
98 @property
99 def preview_request_body(self):
100 if not (self.preview_document and self.webhook_doctype):
101 return _("Select a document to preview request data")
102
103 try:
104 doc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)
105 return frappe.as_json(get_webhook_data(doc, self))
106 except Exception as e:
107 return _("Failed to compute request body: {}").format(e)
108
109
110 def get_context(doc):
111 return {"doc": doc, "utils": get_safe_globals().get("frappe").get("utils")}
112
113
114 def enqueue_webhook(doc, webhook) -> None:
115 webhook: Webhook = frappe.get_doc("Webhook", webhook.get("name"))
116 headers = get_webhook_headers(doc, webhook)
117 data = get_webhook_data(doc, webhook)
118 r = None
119
120 for i in range(3):
121 try:
122 r = requests.request(
123 method=webhook.request_method,
124 url=webhook.request_url,
125 data=json.dumps(data, default=str),
126 headers=headers,
127 timeout=5,
128 )
129 r.raise_for_status()
130 frappe.logger().debug({"webhook_success": r.text})
131 log_request(webhook.name, doc.name, webhook.request_url, headers, data, r)
132 break
133
134 except requests.exceptions.ReadTimeout as e:
135 frappe.logger().debug({"webhook_error": e, "try": i + 1})
136 log_request(webhook.name, doc.name, webhook.request_url, headers, data)
137
138 except Exception as e:
139 frappe.logger().debug({"webhook_error": e, "try": i + 1})
140 log_request(webhook.name, doc.name, webhook.request_url, headers, data, r)
141 sleep(3 * i + 1)
142 if i != 2:
143 continue
144
145
146 def log_request(
147 webhook: str,
148 docname: str,
149 url: str,
150 headers: dict,
151 data: dict,
152 res: requests.Response | None = None,
153 ):
154 request_log = frappe.get_doc(
155 {
156 "doctype": "Webhook Request Log",
157 "webhook": webhook,
158 "reference_document": docname,
159 "user": frappe.session.user if frappe.session.user else None,
160 "url": url,
161 "headers": frappe.as_json(headers) if headers else None,
162 "data": frappe.as_json(data) if data else None,
163 "response": res and res.text,
164 "error": frappe.get_traceback(),
165 }
166 )
167
168 request_log.save(ignore_permissions=True)
169
170
171 def get_webhook_headers(doc, webhook):
172 headers = {}
173
174 if webhook.enable_security:
175 data = get_webhook_data(doc, webhook)
176 signature = base64.b64encode(
177 hmac.new(
178 webhook.get_password("webhook_secret").encode("utf8"),
179 json.dumps(data).encode("utf8"),
180 hashlib.sha256,
181 ).digest()
182 )
183 headers[WEBHOOK_SECRET_HEADER] = signature
184
185 if webhook.webhook_headers:
186 for h in webhook.webhook_headers:
187 if h.get("key") and h.get("value"):
188 headers[h.get("key")] = h.get("value")
189
190 return headers
191
192
193 def get_webhook_data(doc, webhook):
194 data = {}
195 doc = doc.as_dict(convert_dates_to_str=True)
196
197 if webhook.webhook_data:
198 data = {w.key: doc.get(w.fieldname) for w in webhook.webhook_data}
199 elif webhook.webhook_json:
200 data = frappe.render_template(webhook.webhook_json, get_context(doc))
201 data = json.loads(data)
202
203 return data
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/integrations/doctype/webhook/webhook.py b/frappe/integrations/doctype/webhook/webhook.py
--- a/frappe/integrations/doctype/webhook/webhook.py
+++ b/frappe/integrations/doctype/webhook/webhook.py
@@ -115,29 +115,34 @@
webhook: Webhook = frappe.get_doc("Webhook", webhook.get("name"))
headers = get_webhook_headers(doc, webhook)
data = get_webhook_data(doc, webhook)
- r = None
+ if webhook.is_dynamic_url:
+ request_url = frappe.render_template(webhook.request_url, get_context(doc))
+ else:
+ request_url = webhook.request_url
+
+ r = None
for i in range(3):
try:
r = requests.request(
method=webhook.request_method,
- url=webhook.request_url,
+ url=request_url,
data=json.dumps(data, default=str),
headers=headers,
timeout=5,
)
r.raise_for_status()
frappe.logger().debug({"webhook_success": r.text})
- log_request(webhook.name, doc.name, webhook.request_url, headers, data, r)
+ log_request(webhook.name, doc.name, request_url, headers, data, r)
break
except requests.exceptions.ReadTimeout as e:
frappe.logger().debug({"webhook_error": e, "try": i + 1})
- log_request(webhook.name, doc.name, webhook.request_url, headers, data)
+ log_request(webhook.name, doc.name, request_url, headers, data)
except Exception as e:
frappe.logger().debug({"webhook_error": e, "try": i + 1})
- log_request(webhook.name, doc.name, webhook.request_url, headers, data, r)
+ log_request(webhook.name, doc.name, request_url, headers, data, r)
sleep(3 * i + 1)
if i != 2:
continue
| {"golden_diff": "diff --git a/frappe/integrations/doctype/webhook/webhook.py b/frappe/integrations/doctype/webhook/webhook.py\n--- a/frappe/integrations/doctype/webhook/webhook.py\n+++ b/frappe/integrations/doctype/webhook/webhook.py\n@@ -115,29 +115,34 @@\n \twebhook: Webhook = frappe.get_doc(\"Webhook\", webhook.get(\"name\"))\n \theaders = get_webhook_headers(doc, webhook)\n \tdata = get_webhook_data(doc, webhook)\n-\tr = None\n \n+\tif webhook.is_dynamic_url:\n+\t\trequest_url = frappe.render_template(webhook.request_url, get_context(doc))\n+\telse:\n+\t\trequest_url = webhook.request_url\n+\n+\tr = None\n \tfor i in range(3):\n \t\ttry:\n \t\t\tr = requests.request(\n \t\t\t\tmethod=webhook.request_method,\n-\t\t\t\turl=webhook.request_url,\n+\t\t\t\turl=request_url,\n \t\t\t\tdata=json.dumps(data, default=str),\n \t\t\t\theaders=headers,\n \t\t\t\ttimeout=5,\n \t\t\t)\n \t\t\tr.raise_for_status()\n \t\t\tfrappe.logger().debug({\"webhook_success\": r.text})\n-\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data, r)\n+\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data, r)\n \t\t\tbreak\n \n \t\texcept requests.exceptions.ReadTimeout as e:\n \t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n-\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data)\n+\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data)\n \n \t\texcept Exception as e:\n \t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n-\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data, r)\n+\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data, r)\n \t\t\tsleep(3 * i + 1)\n \t\t\tif i != 2:\n \t\t\t\tcontinue\n", "issue": "[Proposal] Add Parameters to Webhook Request URL\nWe use webhooks to connect erpnext to other apps and systems.\r\n\r\nMost REST-Apis contain some ids in their URL. Even Frappe does (`/api/resource/{{doctype}}/{{docname}}`)\r\n\r\nAFAIK, there is no way to dynamically set the Webhook request URL with contents of the document.\r\n\r\n\r\nI propose to render the webhook-url as a template, much the same as the json body is. This would allow to call endpoints like\r\n`http://example.com/api/{{doc.some_field}}`\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2017, Frappe Technologies and contributors\n# License: MIT. 
See LICENSE\n\nimport base64\nimport hashlib\nimport hmac\nimport json\nfrom time import sleep\nfrom urllib.parse import urlparse\n\nimport requests\n\nimport frappe\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom frappe.utils.jinja import validate_template\nfrom frappe.utils.safe_exec import get_safe_globals\n\nWEBHOOK_SECRET_HEADER = \"X-Frappe-Webhook-Signature\"\n\n\nclass Webhook(Document):\n\tdef validate(self):\n\t\tself.validate_docevent()\n\t\tself.validate_condition()\n\t\tself.validate_request_url()\n\t\tself.validate_request_body()\n\t\tself.validate_repeating_fields()\n\t\tself.preview_document = None\n\n\tdef on_update(self):\n\t\tfrappe.cache().delete_value(\"webhooks\")\n\n\tdef validate_docevent(self):\n\t\tif self.webhook_doctype:\n\t\t\tis_submittable = frappe.get_value(\"DocType\", self.webhook_doctype, \"is_submittable\")\n\t\t\tif not is_submittable and self.webhook_docevent in [\n\t\t\t\t\"on_submit\",\n\t\t\t\t\"on_cancel\",\n\t\t\t\t\"on_update_after_submit\",\n\t\t\t]:\n\t\t\t\tfrappe.throw(_(\"DocType must be Submittable for the selected Doc Event\"))\n\n\tdef validate_condition(self):\n\t\ttemp_doc = frappe.new_doc(self.webhook_doctype)\n\t\tif self.condition:\n\t\t\ttry:\n\t\t\t\tfrappe.safe_eval(self.condition, eval_locals=get_context(temp_doc))\n\t\t\texcept Exception as e:\n\t\t\t\tfrappe.throw(_(\"Invalid Condition: {}\").format(e))\n\n\tdef validate_request_url(self):\n\t\ttry:\n\t\t\trequest_url = urlparse(self.request_url).netloc\n\t\t\tif not request_url:\n\t\t\t\traise frappe.ValidationError\n\t\texcept Exception as e:\n\t\t\tfrappe.throw(_(\"Check Request URL\"), exc=e)\n\n\tdef validate_request_body(self):\n\t\tif self.request_structure:\n\t\t\tif self.request_structure == \"Form URL-Encoded\":\n\t\t\t\tself.webhook_json = None\n\t\t\telif self.request_structure == \"JSON\":\n\t\t\t\tvalidate_template(self.webhook_json)\n\t\t\t\tself.webhook_data = []\n\n\tdef validate_repeating_fields(self):\n\t\t\"\"\"Error when Same Field is entered multiple times in webhook_data\"\"\"\n\t\twebhook_data = []\n\t\tfor entry in self.webhook_data:\n\t\t\twebhook_data.append(entry.fieldname)\n\n\t\tif len(webhook_data) != len(set(webhook_data)):\n\t\t\tfrappe.throw(_(\"Same Field is entered more than once\"))\n\n\[email protected]()\n\tdef generate_preview(self):\n\t\t# This function doesn't need to do anything specific as virtual fields\n\t\t# get evaluated automatically.\n\t\tpass\n\n\t@property\n\tdef meets_condition(self):\n\t\tif not self.condition:\n\t\t\treturn _(\"Yes\")\n\n\t\tif not (self.preview_document and self.webhook_doctype):\n\t\t\treturn _(\"Select a document to check if it meets conditions.\")\n\n\t\ttry:\n\t\t\tdoc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)\n\t\t\tmet_condition = frappe.safe_eval(self.condition, eval_locals=get_context(doc))\n\t\texcept Exception as e:\n\t\t\treturn _(\"Failed to evaluate conditions: {}\").format(e)\n\t\treturn _(\"Yes\") if met_condition else _(\"No\")\n\n\t@property\n\tdef preview_request_body(self):\n\t\tif not (self.preview_document and self.webhook_doctype):\n\t\t\treturn _(\"Select a document to preview request data\")\n\n\t\ttry:\n\t\t\tdoc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)\n\t\t\treturn frappe.as_json(get_webhook_data(doc, self))\n\t\texcept Exception as e:\n\t\t\treturn _(\"Failed to compute request body: {}\").format(e)\n\n\ndef get_context(doc):\n\treturn {\"doc\": doc, \"utils\": 
get_safe_globals().get(\"frappe\").get(\"utils\")}\n\n\ndef enqueue_webhook(doc, webhook) -> None:\n\twebhook: Webhook = frappe.get_doc(\"Webhook\", webhook.get(\"name\"))\n\theaders = get_webhook_headers(doc, webhook)\n\tdata = get_webhook_data(doc, webhook)\n\tr = None\n\n\tfor i in range(3):\n\t\ttry:\n\t\t\tr = requests.request(\n\t\t\t\tmethod=webhook.request_method,\n\t\t\t\turl=webhook.request_url,\n\t\t\t\tdata=json.dumps(data, default=str),\n\t\t\t\theaders=headers,\n\t\t\t\ttimeout=5,\n\t\t\t)\n\t\t\tr.raise_for_status()\n\t\t\tfrappe.logger().debug({\"webhook_success\": r.text})\n\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data, r)\n\t\t\tbreak\n\n\t\texcept requests.exceptions.ReadTimeout as e:\n\t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data)\n\n\t\texcept Exception as e:\n\t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n\t\t\tlog_request(webhook.name, doc.name, webhook.request_url, headers, data, r)\n\t\t\tsleep(3 * i + 1)\n\t\t\tif i != 2:\n\t\t\t\tcontinue\n\n\ndef log_request(\n\twebhook: str,\n\tdocname: str,\n\turl: str,\n\theaders: dict,\n\tdata: dict,\n\tres: requests.Response | None = None,\n):\n\trequest_log = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": \"Webhook Request Log\",\n\t\t\t\"webhook\": webhook,\n\t\t\t\"reference_document\": docname,\n\t\t\t\"user\": frappe.session.user if frappe.session.user else None,\n\t\t\t\"url\": url,\n\t\t\t\"headers\": frappe.as_json(headers) if headers else None,\n\t\t\t\"data\": frappe.as_json(data) if data else None,\n\t\t\t\"response\": res and res.text,\n\t\t\t\"error\": frappe.get_traceback(),\n\t\t}\n\t)\n\n\trequest_log.save(ignore_permissions=True)\n\n\ndef get_webhook_headers(doc, webhook):\n\theaders = {}\n\n\tif webhook.enable_security:\n\t\tdata = get_webhook_data(doc, webhook)\n\t\tsignature = base64.b64encode(\n\t\t\thmac.new(\n\t\t\t\twebhook.get_password(\"webhook_secret\").encode(\"utf8\"),\n\t\t\t\tjson.dumps(data).encode(\"utf8\"),\n\t\t\t\thashlib.sha256,\n\t\t\t).digest()\n\t\t)\n\t\theaders[WEBHOOK_SECRET_HEADER] = signature\n\n\tif webhook.webhook_headers:\n\t\tfor h in webhook.webhook_headers:\n\t\t\tif h.get(\"key\") and h.get(\"value\"):\n\t\t\t\theaders[h.get(\"key\")] = h.get(\"value\")\n\n\treturn headers\n\n\ndef get_webhook_data(doc, webhook):\n\tdata = {}\n\tdoc = doc.as_dict(convert_dates_to_str=True)\n\n\tif webhook.webhook_data:\n\t\tdata = {w.key: doc.get(w.fieldname) for w in webhook.webhook_data}\n\telif webhook.webhook_json:\n\t\tdata = frappe.render_template(webhook.webhook_json, get_context(doc))\n\t\tdata = json.loads(data)\n\n\treturn data\n", "path": "frappe/integrations/doctype/webhook/webhook.py"}], "after_files": [{"content": "# Copyright (c) 2017, Frappe Technologies and contributors\n# License: MIT. 
See LICENSE\n\nimport base64\nimport hashlib\nimport hmac\nimport json\nfrom time import sleep\nfrom urllib.parse import urlparse\n\nimport requests\n\nimport frappe\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom frappe.utils.jinja import validate_template\nfrom frappe.utils.safe_exec import get_safe_globals\n\nWEBHOOK_SECRET_HEADER = \"X-Frappe-Webhook-Signature\"\n\n\nclass Webhook(Document):\n\tdef validate(self):\n\t\tself.validate_docevent()\n\t\tself.validate_condition()\n\t\tself.validate_request_url()\n\t\tself.validate_request_body()\n\t\tself.validate_repeating_fields()\n\t\tself.preview_document = None\n\n\tdef on_update(self):\n\t\tfrappe.cache().delete_value(\"webhooks\")\n\n\tdef validate_docevent(self):\n\t\tif self.webhook_doctype:\n\t\t\tis_submittable = frappe.get_value(\"DocType\", self.webhook_doctype, \"is_submittable\")\n\t\t\tif not is_submittable and self.webhook_docevent in [\n\t\t\t\t\"on_submit\",\n\t\t\t\t\"on_cancel\",\n\t\t\t\t\"on_update_after_submit\",\n\t\t\t]:\n\t\t\t\tfrappe.throw(_(\"DocType must be Submittable for the selected Doc Event\"))\n\n\tdef validate_condition(self):\n\t\ttemp_doc = frappe.new_doc(self.webhook_doctype)\n\t\tif self.condition:\n\t\t\ttry:\n\t\t\t\tfrappe.safe_eval(self.condition, eval_locals=get_context(temp_doc))\n\t\t\texcept Exception as e:\n\t\t\t\tfrappe.throw(_(\"Invalid Condition: {}\").format(e))\n\n\tdef validate_request_url(self):\n\t\ttry:\n\t\t\trequest_url = urlparse(self.request_url).netloc\n\t\t\tif not request_url:\n\t\t\t\traise frappe.ValidationError\n\t\texcept Exception as e:\n\t\t\tfrappe.throw(_(\"Check Request URL\"), exc=e)\n\n\tdef validate_request_body(self):\n\t\tif self.request_structure:\n\t\t\tif self.request_structure == \"Form URL-Encoded\":\n\t\t\t\tself.webhook_json = None\n\t\t\telif self.request_structure == \"JSON\":\n\t\t\t\tvalidate_template(self.webhook_json)\n\t\t\t\tself.webhook_data = []\n\n\tdef validate_repeating_fields(self):\n\t\t\"\"\"Error when Same Field is entered multiple times in webhook_data\"\"\"\n\t\twebhook_data = []\n\t\tfor entry in self.webhook_data:\n\t\t\twebhook_data.append(entry.fieldname)\n\n\t\tif len(webhook_data) != len(set(webhook_data)):\n\t\t\tfrappe.throw(_(\"Same Field is entered more than once\"))\n\n\[email protected]()\n\tdef generate_preview(self):\n\t\t# This function doesn't need to do anything specific as virtual fields\n\t\t# get evaluated automatically.\n\t\tpass\n\n\t@property\n\tdef meets_condition(self):\n\t\tif not self.condition:\n\t\t\treturn _(\"Yes\")\n\n\t\tif not (self.preview_document and self.webhook_doctype):\n\t\t\treturn _(\"Select a document to check if it meets conditions.\")\n\n\t\ttry:\n\t\t\tdoc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)\n\t\t\tmet_condition = frappe.safe_eval(self.condition, eval_locals=get_context(doc))\n\t\texcept Exception as e:\n\t\t\treturn _(\"Failed to evaluate conditions: {}\").format(e)\n\t\treturn _(\"Yes\") if met_condition else _(\"No\")\n\n\t@property\n\tdef preview_request_body(self):\n\t\tif not (self.preview_document and self.webhook_doctype):\n\t\t\treturn _(\"Select a document to preview request data\")\n\n\t\ttry:\n\t\t\tdoc = frappe.get_cached_doc(self.webhook_doctype, self.preview_document)\n\t\t\treturn frappe.as_json(get_webhook_data(doc, self))\n\t\texcept Exception as e:\n\t\t\treturn _(\"Failed to compute request body: {}\").format(e)\n\n\ndef get_context(doc):\n\treturn {\"doc\": doc, \"utils\": 
get_safe_globals().get(\"frappe\").get(\"utils\")}\n\n\ndef enqueue_webhook(doc, webhook) -> None:\n\twebhook: Webhook = frappe.get_doc(\"Webhook\", webhook.get(\"name\"))\n\theaders = get_webhook_headers(doc, webhook)\n\tdata = get_webhook_data(doc, webhook)\n\n\tif webhook.is_dynamic_url:\n\t\trequest_url = frappe.render_template(webhook.request_url, get_context(doc))\n\telse:\n\t\trequest_url = webhook.request_url\n\n\tr = None\n\tfor i in range(3):\n\t\ttry:\n\t\t\tr = requests.request(\n\t\t\t\tmethod=webhook.request_method,\n\t\t\t\turl=request_url,\n\t\t\t\tdata=json.dumps(data, default=str),\n\t\t\t\theaders=headers,\n\t\t\t\ttimeout=5,\n\t\t\t)\n\t\t\tr.raise_for_status()\n\t\t\tfrappe.logger().debug({\"webhook_success\": r.text})\n\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data, r)\n\t\t\tbreak\n\n\t\texcept requests.exceptions.ReadTimeout as e:\n\t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data)\n\n\t\texcept Exception as e:\n\t\t\tfrappe.logger().debug({\"webhook_error\": e, \"try\": i + 1})\n\t\t\tlog_request(webhook.name, doc.name, request_url, headers, data, r)\n\t\t\tsleep(3 * i + 1)\n\t\t\tif i != 2:\n\t\t\t\tcontinue\n\n\ndef log_request(\n\twebhook: str,\n\tdocname: str,\n\turl: str,\n\theaders: dict,\n\tdata: dict,\n\tres: requests.Response | None = None,\n):\n\trequest_log = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": \"Webhook Request Log\",\n\t\t\t\"webhook\": webhook,\n\t\t\t\"reference_document\": docname,\n\t\t\t\"user\": frappe.session.user if frappe.session.user else None,\n\t\t\t\"url\": url,\n\t\t\t\"headers\": frappe.as_json(headers) if headers else None,\n\t\t\t\"data\": frappe.as_json(data) if data else None,\n\t\t\t\"response\": res and res.text,\n\t\t\t\"error\": frappe.get_traceback(),\n\t\t}\n\t)\n\n\trequest_log.save(ignore_permissions=True)\n\n\ndef get_webhook_headers(doc, webhook):\n\theaders = {}\n\n\tif webhook.enable_security:\n\t\tdata = get_webhook_data(doc, webhook)\n\t\tsignature = base64.b64encode(\n\t\t\thmac.new(\n\t\t\t\twebhook.get_password(\"webhook_secret\").encode(\"utf8\"),\n\t\t\t\tjson.dumps(data).encode(\"utf8\"),\n\t\t\t\thashlib.sha256,\n\t\t\t).digest()\n\t\t)\n\t\theaders[WEBHOOK_SECRET_HEADER] = signature\n\n\tif webhook.webhook_headers:\n\t\tfor h in webhook.webhook_headers:\n\t\t\tif h.get(\"key\") and h.get(\"value\"):\n\t\t\t\theaders[h.get(\"key\")] = h.get(\"value\")\n\n\treturn headers\n\n\ndef get_webhook_data(doc, webhook):\n\tdata = {}\n\tdoc = doc.as_dict(convert_dates_to_str=True)\n\n\tif webhook.webhook_data:\n\t\tdata = {w.key: doc.get(w.fieldname) for w in webhook.webhook_data}\n\telif webhook.webhook_json:\n\t\tdata = frappe.render_template(webhook.webhook_json, get_context(doc))\n\t\tdata = json.loads(data)\n\n\treturn data\n", "path": "frappe/integrations/doctype/webhook/webhook.py"}]} | 2,405 | 450 |
gh_patches_debug_21010 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-3169 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Condition Specified but with no condition passes linting but fails deploy
### CloudFormation Lint Version
0.83.1
### What operating system are you using?
mac/ubuntu
### Describe the bug
In a CloudFormation template, if you specify the root-level `Conditions` key but define no conditions under it, the template passes cfn-lint but always fails on deploy.
### Expected behavior
cfn-lint should fail if there is a root-level `Conditions` key with no entries under it.
### Reproduction template
```
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
myParam
Conditions:
Resources:
myTopic:
Type: AWS::SNS::Topic
Properties:
DisplayName: mytopic
TopicName: mytopic
```
--- END ISSUE ---
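For context on why this slips through: in YAML, a key with nothing under it parses to null rather than an empty mapping, so the loaded template ends up with `Conditions: None`. A quick standalone illustration (using PyYAML purely for demonstration; cfn-lint has its own template decoder):

```python
import yaml  # illustration only

template = yaml.safe_load(
    "AWSTemplateFormatVersion: '2010-09-09'\n"
    "Conditions:\n"
    "Resources:\n"
    "  myTopic:\n"
    "    Type: AWS::SNS::Topic\n"
)
print(template["Conditions"])                 # None -- the key exists but holds null
print(bool(template.get("Conditions", {})))   # False, so a truthiness check skips the section entirely
```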
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/conditions/Configuration.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5
6 from cfnlint.rules import CloudFormationLintRule, RuleMatch
7
8
9 class Configuration(CloudFormationLintRule):
10 """Check if Conditions are configured correctly"""
11
12 id = "E8001"
13 shortdesc = "Conditions have appropriate properties"
14 description = "Check if Conditions are properly configured"
15 source_url = "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html"
16 tags = ["conditions"]
17
18 condition_keys = [
19 "Condition",
20 "Fn::And",
21 "Fn::Equals",
22 "Fn::Not",
23 "Fn::Or",
24 ]
25
26 def match(self, cfn):
27 matches = []
28
29 conditions = cfn.template.get("Conditions", {})
30 if conditions:
31 for condname, condobj in conditions.items():
32 if not isinstance(condobj, dict):
33 message = "Condition {0} has invalid property"
34 matches.append(
35 RuleMatch(["Conditions", condname], message.format(condname))
36 )
37 else:
38 if len(condobj) != 1:
39 message = "Condition {0} has too many intrinsic conditions"
40 matches.append(
41 RuleMatch(
42 ["Conditions", condname], message.format(condname)
43 )
44 )
45 else:
46 for k, _ in condobj.items():
47 if k not in self.condition_keys:
48 message = "Condition {0} has invalid property {1}"
49 matches.append(
50 RuleMatch(
51 ["Conditions", condname] + [k],
52 message.format(condname, k),
53 )
54 )
55
56 return matches
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/conditions/Configuration.py b/src/cfnlint/rules/conditions/Configuration.py
--- a/src/cfnlint/rules/conditions/Configuration.py
+++ b/src/cfnlint/rules/conditions/Configuration.py
@@ -26,8 +26,10 @@
def match(self, cfn):
matches = []
- conditions = cfn.template.get("Conditions", {})
- if conditions:
+ if "Conditions" not in cfn.template:
+ return matches
+ conditions = cfn.template.get("Conditions", None)
+ if isinstance(conditions, dict):
for condname, condobj in conditions.items():
if not isinstance(condobj, dict):
message = "Condition {0} has invalid property"
@@ -52,5 +54,12 @@
message.format(condname, k),
)
)
+ else:
+ matches.append(
+ RuleMatch(
+ ["Conditions"],
+ "Condition must be an object",
+ )
+ )
return matches
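With the patch, a present-but-null `Conditions` key is reported instead of being silently skipped. A minimal sanity check mirroring the new branch outside the rule class (simplified sketch):

```python
template = {"Conditions": None, "Resources": {}}

if "Conditions" in template:
    conditions = template.get("Conditions", None)
    if not isinstance(conditions, dict):
        # The patched rule appends RuleMatch(["Conditions"], "Condition must be an object") here.
        print("E8001: Condition must be an object")
```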
| {"golden_diff": "diff --git a/src/cfnlint/rules/conditions/Configuration.py b/src/cfnlint/rules/conditions/Configuration.py\n--- a/src/cfnlint/rules/conditions/Configuration.py\n+++ b/src/cfnlint/rules/conditions/Configuration.py\n@@ -26,8 +26,10 @@\n def match(self, cfn):\n matches = []\n \n- conditions = cfn.template.get(\"Conditions\", {})\n- if conditions:\n+ if \"Conditions\" not in cfn.template:\n+ return matches\n+ conditions = cfn.template.get(\"Conditions\", None)\n+ if isinstance(conditions, dict):\n for condname, condobj in conditions.items():\n if not isinstance(condobj, dict):\n message = \"Condition {0} has invalid property\"\n@@ -52,5 +54,12 @@\n message.format(condname, k),\n )\n )\n+ else:\n+ matches.append(\n+ RuleMatch(\n+ [\"Conditions\"],\n+ \"Condition must be an object\",\n+ )\n+ )\n \n return matches\n", "issue": "Condition Specified but with no condition passes linting but fails deploy\n### CloudFormation Lint Version\r\n\r\n0.83.1\r\n\r\n### What operating system are you using?\r\n\r\nmac/ubuntu\r\n\r\n### Describe the bug\r\n\r\nin a cfn template if you specify root level item `Conditions` but have no conditions this passes cfn-lint but always fails on deploy\r\n\r\n### Expected behavior\r\n\r\ncfn-lint should fail if there is a Conditions root level object but no array entries under it.\r\n\r\n### Reproduction template\r\n\r\n```\r\nAWSTemplateFormatVersion: \"2010-09-09\"\r\nParameters:\r\n myParam\r\nConditions:\r\nResources:\r\n myTopic:\r\n Type: AWS::SNS::Topic\r\n Properties:\r\n DisplayName: mytopic\r\n TopicName: mytopic\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Configuration(CloudFormationLintRule):\n \"\"\"Check if Conditions are configured correctly\"\"\"\n\n id = \"E8001\"\n shortdesc = \"Conditions have appropriate properties\"\n description = \"Check if Conditions are properly configured\"\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html\"\n tags = [\"conditions\"]\n\n condition_keys = [\n \"Condition\",\n \"Fn::And\",\n \"Fn::Equals\",\n \"Fn::Not\",\n \"Fn::Or\",\n ]\n\n def match(self, cfn):\n matches = []\n\n conditions = cfn.template.get(\"Conditions\", {})\n if conditions:\n for condname, condobj in conditions.items():\n if not isinstance(condobj, dict):\n message = \"Condition {0} has invalid property\"\n matches.append(\n RuleMatch([\"Conditions\", condname], message.format(condname))\n )\n else:\n if len(condobj) != 1:\n message = \"Condition {0} has too many intrinsic conditions\"\n matches.append(\n RuleMatch(\n [\"Conditions\", condname], message.format(condname)\n )\n )\n else:\n for k, _ in condobj.items():\n if k not in self.condition_keys:\n message = \"Condition {0} has invalid property {1}\"\n matches.append(\n RuleMatch(\n [\"Conditions\", condname] + [k],\n message.format(condname, k),\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/conditions/Configuration.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\n\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Configuration(CloudFormationLintRule):\n \"\"\"Check if Conditions are configured correctly\"\"\"\n\n id = \"E8001\"\n shortdesc = \"Conditions have appropriate properties\"\n description = \"Check if Conditions are properly configured\"\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html\"\n tags = [\"conditions\"]\n\n condition_keys = [\n \"Condition\",\n \"Fn::And\",\n \"Fn::Equals\",\n \"Fn::Not\",\n \"Fn::Or\",\n ]\n\n def match(self, cfn):\n matches = []\n\n if \"Conditions\" not in cfn.template:\n return matches\n conditions = cfn.template.get(\"Conditions\", None)\n if isinstance(conditions, dict):\n for condname, condobj in conditions.items():\n if not isinstance(condobj, dict):\n message = \"Condition {0} has invalid property\"\n matches.append(\n RuleMatch([\"Conditions\", condname], message.format(condname))\n )\n else:\n if len(condobj) != 1:\n message = \"Condition {0} has too many intrinsic conditions\"\n matches.append(\n RuleMatch(\n [\"Conditions\", condname], message.format(condname)\n )\n )\n else:\n for k, _ in condobj.items():\n if k not in self.condition_keys:\n message = \"Condition {0} has invalid property {1}\"\n matches.append(\n RuleMatch(\n [\"Conditions\", condname] + [k],\n message.format(condname, k),\n )\n )\n else:\n matches.append(\n RuleMatch(\n [\"Conditions\"],\n \"Condition must be an object\",\n )\n )\n\n return matches\n", "path": "src/cfnlint/rules/conditions/Configuration.py"}]} | 897 | 230 |
gh_patches_debug_43642 | rasdani/github-patches | git_diff | lightly-ai__lightly-701 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_lightly_cli is missing its return statement
As a side effect, `train_model_and_embed_images` does not work as expected.
--- END ISSUE ---
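Concretely, the reported symptom can be sketched as follows (the input path is a placeholder):

```python
import lightly

result = lightly.train_model_and_embed_images(input_dir="path/to/data")
print(result)  # None -- the embeddings computed inside _lightly_cli are never returned to the caller
```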
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/cli/embed_cli.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """**Lightly Embed:** Embed images with one command.
3
4 This module contains the entrypoint for the **lightly-embed**
5 command-line interface.
6 """
7
8 # Copyright (c) 2020. Lightly AG and its affiliates.
9 # All Rights Reserved
10
11 import os
12
13 import hydra
14 import torch
15 import torchvision
16 from torch.utils.hipify.hipify_python import bcolors
17
18 from lightly.data import LightlyDataset
19
20 from lightly.utils import save_embeddings
21
22 from lightly.cli._helpers import get_model_from_config
23 from lightly.cli._helpers import fix_input_path
24 from lightly.cli._helpers import cpu_count
25
26
27 def _embed_cli(cfg, is_cli_call=True):
28 input_dir = cfg['input_dir']
29 if input_dir and is_cli_call:
30 input_dir = fix_input_path(input_dir)
31
32 torch.backends.cudnn.deterministic = True
33 torch.backends.cudnn.benchmark = False
34
35 if torch.cuda.is_available():
36 device = torch.device('cuda')
37 else:
38 device = torch.device('cpu')
39
40 transform = torchvision.transforms.Compose(
41 [
42 torchvision.transforms.Resize(
43 (cfg['collate']['input_size'], cfg['collate']['input_size'])
44 ),
45 torchvision.transforms.ToTensor(),
46 torchvision.transforms.Normalize(
47 mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
48 ),
49 ]
50 )
51
52 dataset = LightlyDataset(input_dir, transform=transform)
53
54 # disable drop_last and shuffle
55 cfg['loader']['drop_last'] = False
56 cfg['loader']['shuffle'] = False
57 cfg['loader']['batch_size'] = min(cfg['loader']['batch_size'], len(dataset))
58
59 # determine the number of available cores
60 if cfg['loader']['num_workers'] < 0:
61 cfg['loader']['num_workers'] = cpu_count()
62
63 dataloader = torch.utils.data.DataLoader(dataset, **cfg['loader'])
64
65 encoder = get_model_from_config(cfg, is_cli_call)
66
67 embeddings, labels, filenames = encoder.embed(dataloader, device=device)
68
69 if is_cli_call:
70 path = os.path.join(os.getcwd(), 'embeddings.csv')
71 save_embeddings(path, embeddings, labels, filenames)
72 print(f'Embeddings are stored at {bcolors.OKBLUE}{path}{bcolors.ENDC}')
73 os.environ[
74 cfg['environment_variable_names']['lightly_last_embedding_path']
75 ] = path
76 return path
77
78 return embeddings, labels, filenames
79
80
81 @hydra.main(config_path='config', config_name='config')
82 def embed_cli(cfg):
83 """Embed images from the command-line.
84
85 Args:
86 cfg:
87 The default configs are loaded from the config file.
88 To overwrite them please see the section on the config file
89 (.config.config.yaml).
90
91 Command-Line Args:
92 input_dir:
93 Path to the input directory where images are stored.
94 checkpoint:
95 Path to the checkpoint of a pretrained model. If left
96 empty, a pretrained model by lightly is used.
97
98 Examples:
99 >>> # embed images with default settings and a lightly model
100 >>> lightly-embed input_dir=data/
101 >>>
102 >>> # embed images with default settings and a custom checkpoint
103 >>> lightly-embed input_dir=data/ checkpoint=my_checkpoint.ckpt
104 >>>
105 >>> # embed images with custom settings
106 >>> lightly-embed input_dir=data/ model.num_ftrs=32
107
108 """
109 return _embed_cli(cfg)
110
111
112 def entry():
113 embed_cli()
114
```
Path: `lightly/core.py`
Content:
```
1 """ Contains the core functionality of the lightly Python package. """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 from lightly.cli.train_cli import _train_cli
7 from lightly.cli.embed_cli import _embed_cli
8 from lightly.cli.lightly_cli import _lightly_cli
9 import lightly.cli as cli
10
11 import yaml
12 import os
13
14
15 def _get_config_path(config_path):
16 """Find path to yaml config file
17
18 Args:
19 config_path: (str) Path to config.yaml file
20
21 Returns:
22 Path to config.yaml if specified else default config.yaml
23
24 Raises:
25 ValueError: If the config_path is not None but doesn't exist
26
27 """
28 if config_path is None:
29 dirname = os.path.dirname(cli.__file__)
30 config_path = os.path.join(dirname, 'config/config.yaml')
31 if not os.path.exists(config_path):
32 raise ValueError("Config path {} does not exist!".format(config_path))
33
34 return config_path
35
36
37 def _load_config_file(config_path):
38 """Load a yaml config file
39
40 Args:
41 config_path: (str) Path to config.yaml file
42
43 Returns:
44 Dictionary with configs from config.yaml
45
46 """
47 Loader = yaml.FullLoader
48 with open(config_path, 'r') as config_file:
49 cfg = yaml.load(config_file, Loader=Loader)
50
51 return cfg
52
53
54 def _add_kwargs(cfg, kwargs):
55 """Add keyword arguments to config
56
57 Args:
58 cfg: (dict) Dictionary of configs from config.yaml
59 kwargs: (dict) Dictionary of keyword arguments
60
61 Returns:
62 Union of cfg and kwargs
63
64 """
65 for key, item in kwargs.items():
66 if isinstance(item, dict):
67 if key in cfg:
68 cfg[key] = _add_kwargs(cfg[key], item)
69 else:
70 cfg[key] = item
71 else:
72 cfg[key] = item
73 return cfg
74
75
76 def train_model_and_embed_images(config_path: str = None, **kwargs):
77 """Train a self-supervised model and use it to embed images.
78
79 Calls the same function as lightly-magic. All arguments passed to
80 lightly-magic can also be passed to this function (see below for an
81 example).
82
83 Args:
84 config_path:
85 Path to config.yaml. If None, the default configs will be used.
86 **kwargs:
87 Overwrite default configs py passing keyword arguments.
88
89 Returns:
90 Embeddings, labels, and filenames of the images.
91
92 Examples:
93 >>> import lightly
94 >>>
95 >>> # train a model and embed images with default configs
96 >>> embeddings, _, _ = lightly.train_model_and_embed_images(
97 >>> input_dir='path/to/data')
98 >>>
99 >>> # train a model and embed images with separate config file
100 >>> my_config_path = 'my/config/file.yaml'
101 >>> embeddings, _, _ = lightly.train_model_and_embed_images(
102 >>> input_dir='path/to/data', config_path=my_config_path)
103 >>>
104 >>> # train a model and embed images with default settings + overwrites
105 >>> my_trainer = {max_epochs: 10}
106 >>> embeddings, _, _ = lightly.train_model_and_embed_images(
107 >>> input_dir='path/to/data', trainer=my_trainer)
108 >>> # the command above is equivalent to:
109 >>> # lightly-magic input_dir='path/to/data' trainer.max_epochs=10
110
111 """
112 config_path = _get_config_path(config_path)
113 config_args = _load_config_file(config_path)
114 config_args = _add_kwargs(config_args, kwargs)
115 return _lightly_cli(config_args, is_cli_call=False)
116
117
118 def train_embedding_model(config_path: str = None, **kwargs):
119 """Train a self-supervised model.
120
121 Calls the same function as lightly-train. All arguments passed to
122 lightly-train can also be passed to this function (see below for an
123 example).
124
125 Args:
126 config_path:
127 Path to config.yaml. If None, the default configs will be used.
128 **kwargs:
129 Overwrite default configs py passing keyword arguments.
130
131 Returns:
132 Path to checkpoint of the trained embedding model.
133
134 Examples:
135 >>> import lightly
136 >>>
137 >>> # train a model with default configs
138 >>> checkpoint_path = lightly.train_embedding_model(
139 >>> input_dir='path/to/data')
140 >>>
141 >>> # train a model with separate config file
142 >>> my_config_path = 'my/config/file.yaml'
143 >>> checkpoint_path = lightly.train_embedding_model(
144 >>> input_dir='path/to/data', config_path=my_config_path)
145 >>>
146 >>> # train a model with default settings and overwrites: large batch
147 >>> # sizes are benefitial for self-supervised training and more
148 >>> # workers speed up the dataloading process.
149 >>> my_loader = {
150 >>> batch_size: 100,
151 >>> num_workers: 8,
152 >>> }
153 >>> checkpoint_path = lightly.train_embedding_model(
154 >>> input_dir='path/to/data', loader=my_loader)
155 >>> # the command above is equivalent to:
156 >>> # lightly-train input_dir='path/to/data' loader.batch_size=100 loader.num_workers=8
157 """
158 config_path = _get_config_path(config_path)
159 config_args = _load_config_file(config_path)
160 config_args = _add_kwargs(config_args, kwargs)
161
162 return _train_cli(config_args, is_cli_call=False)
163
164
165 def embed_images(checkpoint: str, config_path: str = None, **kwargs):
166 """Embed images with a self-supervised model.
167
168 Calls the same function as lightly-embed. All arguments passed to
169 lightly-embed can also be passed to this function (see below for an
170 example).
171
172 Args:
173 checkpoint:
174 Path to the checkpoint file for the embedding model.
175 config_path:
176 Path to config.yaml. If None, the default configs will be used.
177 **kwargs:
178 Overwrite default configs py passing keyword arguments.
179
180 Returns:
181 Embeddings, labels, and filenames of the images.
182
183 Examples:
184 >>> import lightly
185 >>> my_checkpoint_path = 'path/to/checkpoint.ckpt'
186 >>>
187 >>> # embed images with default configs
188 >>> embeddings, _, _ = lightly.embed_images(
189 >>> my_checkpoint_path, input_dir='path/to/data')
190 >>>
191 >>> # embed images with separate config file
192 >>> my_config_path = 'my/config/file.yaml'
193 >>> embeddings, _, _ = lightly.embed_images(
194 >>> my_checkpoint_path, input_dir='path/to/data', config_path=my_config_path)
195 >>>
196 >>> # embed images with default settings and overwrites: at inference,
197 >>> # we can use larger input_sizes because it requires less memory.
198 >>> my_collate = {input_size: 256}
199 >>> embeddings, _, _ = lightly.embed_images(
200 >>> my_checkpoint_path, input_dir='path/to/data', collate=my_collate)
201 >>> # the command above is equivalent to:
202 >>> # lightly-embed input_dir='path/to/data' collate.input_size=256
203
204 """
205 config_path = _get_config_path(config_path)
206 config_args = _load_config_file(config_path)
207 config_args = _add_kwargs(config_args, kwargs)
208
209 config_args['checkpoint'] = checkpoint
210
211 return _embed_cli(config_args, is_cli_call=False)
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightly/cli/embed_cli.py b/lightly/cli/embed_cli.py
--- a/lightly/cli/embed_cli.py
+++ b/lightly/cli/embed_cli.py
@@ -9,8 +9,10 @@
# All Rights Reserved
import os
+from typing import Union, Tuple, List
import hydra
+import numpy as np
import torch
import torchvision
from torch.utils.hipify.hipify_python import bcolors
@@ -24,7 +26,21 @@
from lightly.cli._helpers import cpu_count
-def _embed_cli(cfg, is_cli_call=True):
+def _embed_cli(cfg, is_cli_call=True) -> \
+ Union[
+ Tuple[np.ndarray, List[int], List[str]],
+ str
+ ]:
+ """ See embed_cli() for usage documentation
+
+ is_cli_call:
+ If True:
+ Saves the embeddings as file and returns the filepath.
+ If False:
+ Returns the embeddings, labels, filenames as tuple.
+ Embeddings are of shape (n_samples, embedding_size)
+ len(labels) = len(filenames) = n_samples
+ """
input_dir = cfg['input_dir']
if input_dir and is_cli_call:
input_dir = fix_input_path(input_dir)
@@ -79,7 +95,7 @@
@hydra.main(config_path='config', config_name='config')
-def embed_cli(cfg):
+def embed_cli(cfg) -> str:
"""Embed images from the command-line.
Args:
@@ -95,6 +111,9 @@
Path to the checkpoint of a pretrained model. If left
empty, a pretrained model by lightly is used.
+ Returns:
+ The path to the created embeddings file.
+
Examples:
>>> # embed images with default settings and a lightly model
>>> lightly-embed input_dir=data/
diff --git a/lightly/core.py b/lightly/core.py
--- a/lightly/core.py
+++ b/lightly/core.py
@@ -2,6 +2,9 @@
# Copyright (c) 2020. Lightly AG and its affiliates.
# All Rights Reserved
+from typing import Tuple, List
+
+import numpy as np
from lightly.cli.train_cli import _train_cli
from lightly.cli.embed_cli import _embed_cli
@@ -73,12 +76,15 @@
return cfg
-def train_model_and_embed_images(config_path: str = None, **kwargs):
+def train_model_and_embed_images(config_path: str = None, **kwargs) -> Tuple[
+ np.ndarray, List[int], List[str]
+]:
"""Train a self-supervised model and use it to embed images.
- Calls the same function as lightly-magic. All arguments passed to
- lightly-magic can also be passed to this function (see below for an
- example).
+ First trains a modle using the _train_cli(),
+ then embeds with the _embed_cli().
+ All arguments passed to the CLI functions
+ can also be passed to this function (see below for an example).
Args:
config_path:
@@ -88,6 +94,8 @@
Returns:
Embeddings, labels, and filenames of the images.
+ Embeddings are of shape (n_samples, embedding_size)
+ len(labels) = len(filenames) = n_samples
Examples:
>>> import lightly
@@ -105,14 +113,16 @@
>>> my_trainer = {max_epochs: 10}
>>> embeddings, _, _ = lightly.train_model_and_embed_images(
>>> input_dir='path/to/data', trainer=my_trainer)
- >>> # the command above is equivalent to:
- >>> # lightly-magic input_dir='path/to/data' trainer.max_epochs=10
"""
config_path = _get_config_path(config_path)
config_args = _load_config_file(config_path)
config_args = _add_kwargs(config_args, kwargs)
- return _lightly_cli(config_args, is_cli_call=False)
+
+ checkpoint = _train_cli(config_args, is_cli_call=False)
+ config_args['checkpoint'] = checkpoint
+ embeddings, labels, filenames = _embed_cli(config_args, is_cli_call=False)
+ return embeddings, labels, filenames
def train_embedding_model(config_path: str = None, **kwargs):
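A usage sketch of the patched top-level API, assuming the diff above is applied (the input directory is a placeholder):

```python
import lightly

embeddings, labels, filenames = lightly.train_model_and_embed_images(
    input_dir="path/to/data",
)
# embeddings: np.ndarray of shape (n_samples, embedding_size)
# len(labels) == len(filenames) == n_samples
```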
| {"golden_diff": "diff --git a/lightly/cli/embed_cli.py b/lightly/cli/embed_cli.py\n--- a/lightly/cli/embed_cli.py\n+++ b/lightly/cli/embed_cli.py\n@@ -9,8 +9,10 @@\n # All Rights Reserved\n \n import os\n+from typing import Union, Tuple, List\n \n import hydra\n+import numpy as np\n import torch\n import torchvision\n from torch.utils.hipify.hipify_python import bcolors\n@@ -24,7 +26,21 @@\n from lightly.cli._helpers import cpu_count\n \n \n-def _embed_cli(cfg, is_cli_call=True):\n+def _embed_cli(cfg, is_cli_call=True) -> \\\n+ Union[\n+ Tuple[np.ndarray, List[int], List[str]],\n+ str\n+ ]:\n+ \"\"\" See embed_cli() for usage documentation\n+\n+ is_cli_call:\n+ If True:\n+ Saves the embeddings as file and returns the filepath.\n+ If False:\n+ Returns the embeddings, labels, filenames as tuple.\n+ Embeddings are of shape (n_samples, embedding_size)\n+ len(labels) = len(filenames) = n_samples\n+ \"\"\"\n input_dir = cfg['input_dir']\n if input_dir and is_cli_call:\n input_dir = fix_input_path(input_dir)\n@@ -79,7 +95,7 @@\n \n \n @hydra.main(config_path='config', config_name='config')\n-def embed_cli(cfg):\n+def embed_cli(cfg) -> str:\n \"\"\"Embed images from the command-line.\n \n Args:\n@@ -95,6 +111,9 @@\n Path to the checkpoint of a pretrained model. If left\n empty, a pretrained model by lightly is used.\n \n+ Returns:\n+ The path to the created embeddings file.\n+\n Examples:\n >>> #\u00a0embed images with default settings and a lightly model\n >>> lightly-embed input_dir=data/\ndiff --git a/lightly/core.py b/lightly/core.py\n--- a/lightly/core.py\n+++ b/lightly/core.py\n@@ -2,6 +2,9 @@\n \n # Copyright (c) 2020. Lightly AG and its affiliates.\n # All Rights Reserved\n+from typing import Tuple, List\n+\n+import numpy as np\n \n from lightly.cli.train_cli import _train_cli\n from lightly.cli.embed_cli import _embed_cli\n@@ -73,12 +76,15 @@\n return cfg\n \n \n-def train_model_and_embed_images(config_path: str = None, **kwargs):\n+def train_model_and_embed_images(config_path: str = None, **kwargs) -> Tuple[\n+ np.ndarray, List[int], List[str]\n+]:\n \"\"\"Train a self-supervised model and use it to embed images.\n \n- Calls the same function as lightly-magic. 
All arguments passed to\n- lightly-magic can also be passed to this function (see below for an\n- example).\n+ First trains a modle using the _train_cli(),\n+ then embeds with the _embed_cli().\n+ All arguments passed to the CLI functions\n+ can also be passed to this function (see below for an example).\n \n Args:\n config_path:\n@@ -88,6 +94,8 @@\n \n Returns:\n Embeddings, labels, and filenames of the images.\n+ Embeddings are of shape (n_samples, embedding_size)\n+ len(labels) = len(filenames) = n_samples\n \n Examples:\n >>> import lightly\n@@ -105,14 +113,16 @@\n >>> my_trainer = {max_epochs: 10}\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data', trainer=my_trainer)\n- >>> #\u00a0the command above is equivalent to:\n- >>> #\u00a0lightly-magic input_dir='path/to/data' trainer.max_epochs=10\n \n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n- return _lightly_cli(config_args, is_cli_call=False)\n+\n+ checkpoint = _train_cli(config_args, is_cli_call=False)\n+ config_args['checkpoint'] = checkpoint\n+ embeddings, labels, filenames = _embed_cli(config_args, is_cli_call=False)\n+ return embeddings, labels, filenames\n \n \n def train_embedding_model(config_path: str = None, **kwargs):\n", "issue": "_lightly_cli is missing it's return statement\nas a side effect `train_model_and_embed_images` is not working as expected.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"**Lightly Embed:** Embed images with one command.\n\nThis module contains the entrypoint for the **lightly-embed**\ncommand-line interface.\n\"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport os\n\nimport hydra\nimport torch\nimport torchvision\nfrom torch.utils.hipify.hipify_python import bcolors\n\nfrom lightly.data import LightlyDataset\n\nfrom lightly.utils import save_embeddings\n\nfrom lightly.cli._helpers import get_model_from_config\nfrom lightly.cli._helpers import fix_input_path\nfrom lightly.cli._helpers import cpu_count\n\n\ndef _embed_cli(cfg, is_cli_call=True):\n input_dir = cfg['input_dir']\n if input_dir and is_cli_call:\n input_dir = fix_input_path(input_dir)\n\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n if torch.cuda.is_available():\n device = torch.device('cuda')\n else:\n device = torch.device('cpu')\n\n transform = torchvision.transforms.Compose(\n [\n torchvision.transforms.Resize(\n (cfg['collate']['input_size'], cfg['collate']['input_size'])\n ),\n torchvision.transforms.ToTensor(),\n torchvision.transforms.Normalize(\n mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n ),\n ]\n )\n\n dataset = LightlyDataset(input_dir, transform=transform)\n\n # disable drop_last and shuffle\n cfg['loader']['drop_last'] = False\n cfg['loader']['shuffle'] = False\n cfg['loader']['batch_size'] = min(cfg['loader']['batch_size'], len(dataset))\n\n # determine the number of available cores\n if cfg['loader']['num_workers'] < 0:\n cfg['loader']['num_workers'] = cpu_count()\n\n dataloader = torch.utils.data.DataLoader(dataset, **cfg['loader'])\n\n encoder = get_model_from_config(cfg, is_cli_call)\n\n embeddings, labels, filenames = encoder.embed(dataloader, device=device)\n\n if is_cli_call:\n path = os.path.join(os.getcwd(), 'embeddings.csv')\n save_embeddings(path, embeddings, labels, filenames)\n print(f'Embeddings are stored at {bcolors.OKBLUE}{path}{bcolors.ENDC}')\n 
os.environ[\n cfg['environment_variable_names']['lightly_last_embedding_path']\n ] = path\n return path\n\n return embeddings, labels, filenames\n\n\[email protected](config_path='config', config_name='config')\ndef embed_cli(cfg):\n \"\"\"Embed images from the command-line.\n\n Args:\n cfg:\n The default configs are loaded from the config file.\n To overwrite them please see the section on the config file\n (.config.config.yaml).\n\n Command-Line Args:\n input_dir:\n Path to the input directory where images are stored.\n checkpoint:\n Path to the checkpoint of a pretrained model. If left\n empty, a pretrained model by lightly is used.\n\n Examples:\n >>> #\u00a0embed images with default settings and a lightly model\n >>> lightly-embed input_dir=data/\n >>>\n >>> # embed images with default settings and a custom checkpoint\n >>> lightly-embed input_dir=data/ checkpoint=my_checkpoint.ckpt\n >>>\n >>> # embed images with custom settings\n >>> lightly-embed input_dir=data/ model.num_ftrs=32\n\n \"\"\"\n return _embed_cli(cfg)\n\n\ndef entry():\n embed_cli()\n", "path": "lightly/cli/embed_cli.py"}, {"content": "\"\"\" Contains the core functionality of the lightly Python package. \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nfrom lightly.cli.train_cli import _train_cli\nfrom lightly.cli.embed_cli import _embed_cli\nfrom lightly.cli.lightly_cli import _lightly_cli\nimport lightly.cli as cli\n\nimport yaml\nimport os\n\n\ndef _get_config_path(config_path):\n \"\"\"Find path to yaml config file\n\n Args:\n config_path: (str) Path to config.yaml file\n\n Returns:\n Path to config.yaml if specified else default config.yaml\n\n Raises:\n ValueError: If the config_path is not None but doesn't exist\n\n \"\"\"\n if config_path is None:\n dirname = os.path.dirname(cli.__file__)\n config_path = os.path.join(dirname, 'config/config.yaml')\n if not os.path.exists(config_path):\n raise ValueError(\"Config path {} does not exist!\".format(config_path))\n\n return config_path\n\n\ndef _load_config_file(config_path):\n \"\"\"Load a yaml config file\n\n Args:\n config_path: (str) Path to config.yaml file\n\n Returns:\n Dictionary with configs from config.yaml\n\n \"\"\"\n Loader = yaml.FullLoader\n with open(config_path, 'r') as config_file:\n cfg = yaml.load(config_file, Loader=Loader)\n\n return cfg\n\n\ndef _add_kwargs(cfg, kwargs):\n \"\"\"Add keyword arguments to config\n\n Args:\n cfg: (dict) Dictionary of configs from config.yaml\n kwargs: (dict) Dictionary of keyword arguments\n\n Returns:\n Union of cfg and kwargs\n\n \"\"\"\n for key, item in kwargs.items():\n if isinstance(item, dict):\n if key in cfg:\n cfg[key] = _add_kwargs(cfg[key], item)\n else:\n cfg[key] = item\n else:\n cfg[key] = item\n return cfg\n\n\ndef train_model_and_embed_images(config_path: str = None, **kwargs):\n \"\"\"Train a self-supervised model and use it to embed images.\n\n Calls the same function as lightly-magic. All arguments passed to\n lightly-magic can also be passed to this function (see below for an\n example).\n\n Args:\n config_path:\n Path to config.yaml. 
If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Embeddings, labels, and filenames of the images.\n\n Examples:\n >>> import lightly\n >>>\n >>> # train a model and embed images with default configs\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data')\n >>>\n >>> #\u00a0train a model and embed images with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # train a model and embed images with default settings + overwrites\n >>> my_trainer = {max_epochs: 10}\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data', trainer=my_trainer)\n >>> #\u00a0the command above is equivalent to:\n >>> #\u00a0lightly-magic input_dir='path/to/data' trainer.max_epochs=10\n\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n return _lightly_cli(config_args, is_cli_call=False)\n\n\ndef train_embedding_model(config_path: str = None, **kwargs):\n \"\"\"Train a self-supervised model.\n\n Calls the same function as lightly-train. All arguments passed to\n lightly-train can also be passed to this function (see below for an\n example).\n\n Args:\n config_path:\n Path to config.yaml. If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Path to checkpoint of the trained embedding model.\n\n Examples:\n >>> import lightly\n >>>\n >>> # train a model with default configs\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data')\n >>>\n >>> #\u00a0train a model with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # train a model with default settings and overwrites: large batch\n >>> # sizes are benefitial for self-supervised training and more \n >>> #\u00a0workers speed up the dataloading process.\n >>> my_loader = {\n >>> batch_size: 100,\n >>> num_workers: 8,\n >>> }\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data', loader=my_loader)\n >>> #\u00a0the command above is equivalent to:\n >>> #\u00a0lightly-train input_dir='path/to/data' loader.batch_size=100 loader.num_workers=8\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n\n return _train_cli(config_args, is_cli_call=False)\n\n\ndef embed_images(checkpoint: str, config_path: str = None, **kwargs):\n \"\"\"Embed images with a self-supervised model.\n\n Calls the same function as lightly-embed. All arguments passed to\n lightly-embed can also be passed to this function (see below for an\n example).\n\n Args:\n checkpoint:\n Path to the checkpoint file for the embedding model.\n config_path:\n Path to config.yaml. 
If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Embeddings, labels, and filenames of the images.\n\n Examples:\n >>> import lightly\n >>> my_checkpoint_path = 'path/to/checkpoint.ckpt'\n >>>\n >>> # embed images with default configs\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data')\n >>>\n >>> #\u00a0embed images with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # embed images with default settings and overwrites: at inference,\n >>> #\u00a0we can use larger input_sizes because it requires less memory.\n >>> my_collate = {input_size: 256}\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data', collate=my_collate)\n >>> #\u00a0the command above is equivalent to:\n >>> #\u00a0lightly-embed input_dir='path/to/data' collate.input_size=256\n\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n\n config_args['checkpoint'] = checkpoint\n\n return _embed_cli(config_args, is_cli_call=False)\n", "path": "lightly/core.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"**Lightly Embed:** Embed images with one command.\n\nThis module contains the entrypoint for the **lightly-embed**\ncommand-line interface.\n\"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport os\nfrom typing import Union, Tuple, List\n\nimport hydra\nimport numpy as np\nimport torch\nimport torchvision\nfrom torch.utils.hipify.hipify_python import bcolors\n\nfrom lightly.data import LightlyDataset\n\nfrom lightly.utils import save_embeddings\n\nfrom lightly.cli._helpers import get_model_from_config\nfrom lightly.cli._helpers import fix_input_path\nfrom lightly.cli._helpers import cpu_count\n\n\ndef _embed_cli(cfg, is_cli_call=True) -> \\\n Union[\n Tuple[np.ndarray, List[int], List[str]],\n str\n ]:\n \"\"\" See embed_cli() for usage documentation\n\n is_cli_call:\n If True:\n Saves the embeddings as file and returns the filepath.\n If False:\n Returns the embeddings, labels, filenames as tuple.\n Embeddings are of shape (n_samples, embedding_size)\n len(labels) = len(filenames) = n_samples\n \"\"\"\n input_dir = cfg['input_dir']\n if input_dir and is_cli_call:\n input_dir = fix_input_path(input_dir)\n\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n if torch.cuda.is_available():\n device = torch.device('cuda')\n else:\n device = torch.device('cpu')\n\n transform = torchvision.transforms.Compose(\n [\n torchvision.transforms.Resize(\n (cfg['collate']['input_size'], cfg['collate']['input_size'])\n ),\n torchvision.transforms.ToTensor(),\n torchvision.transforms.Normalize(\n mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]\n ),\n ]\n )\n\n dataset = LightlyDataset(input_dir, transform=transform)\n\n # disable drop_last and shuffle\n cfg['loader']['drop_last'] = False\n cfg['loader']['shuffle'] = False\n cfg['loader']['batch_size'] = min(cfg['loader']['batch_size'], len(dataset))\n\n # determine the number of available cores\n if cfg['loader']['num_workers'] < 0:\n cfg['loader']['num_workers'] = cpu_count()\n\n dataloader = torch.utils.data.DataLoader(dataset, **cfg['loader'])\n\n encoder = 
get_model_from_config(cfg, is_cli_call)\n\n embeddings, labels, filenames = encoder.embed(dataloader, device=device)\n\n if is_cli_call:\n path = os.path.join(os.getcwd(), 'embeddings.csv')\n save_embeddings(path, embeddings, labels, filenames)\n print(f'Embeddings are stored at {bcolors.OKBLUE}{path}{bcolors.ENDC}')\n os.environ[\n cfg['environment_variable_names']['lightly_last_embedding_path']\n ] = path\n return path\n\n return embeddings, labels, filenames\n\n\[email protected](config_path='config', config_name='config')\ndef embed_cli(cfg) -> str:\n \"\"\"Embed images from the command-line.\n\n Args:\n cfg:\n The default configs are loaded from the config file.\n To overwrite them please see the section on the config file\n (.config.config.yaml).\n\n Command-Line Args:\n input_dir:\n Path to the input directory where images are stored.\n checkpoint:\n Path to the checkpoint of a pretrained model. If left\n empty, a pretrained model by lightly is used.\n\n Returns:\n The path to the created embeddings file.\n\n Examples:\n >>> #\u00a0embed images with default settings and a lightly model\n >>> lightly-embed input_dir=data/\n >>>\n >>> # embed images with default settings and a custom checkpoint\n >>> lightly-embed input_dir=data/ checkpoint=my_checkpoint.ckpt\n >>>\n >>> # embed images with custom settings\n >>> lightly-embed input_dir=data/ model.num_ftrs=32\n\n \"\"\"\n return _embed_cli(cfg)\n\n\ndef entry():\n embed_cli()\n", "path": "lightly/cli/embed_cli.py"}, {"content": "\"\"\" Contains the core functionality of the lightly Python package. \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\nfrom typing import Tuple, List\n\nimport numpy as np\n\nfrom lightly.cli.train_cli import _train_cli\nfrom lightly.cli.embed_cli import _embed_cli\nfrom lightly.cli.lightly_cli import _lightly_cli\nimport lightly.cli as cli\n\nimport yaml\nimport os\n\n\ndef _get_config_path(config_path):\n \"\"\"Find path to yaml config file\n\n Args:\n config_path: (str) Path to config.yaml file\n\n Returns:\n Path to config.yaml if specified else default config.yaml\n\n Raises:\n ValueError: If the config_path is not None but doesn't exist\n\n \"\"\"\n if config_path is None:\n dirname = os.path.dirname(cli.__file__)\n config_path = os.path.join(dirname, 'config/config.yaml')\n if not os.path.exists(config_path):\n raise ValueError(\"Config path {} does not exist!\".format(config_path))\n\n return config_path\n\n\ndef _load_config_file(config_path):\n \"\"\"Load a yaml config file\n\n Args:\n config_path: (str) Path to config.yaml file\n\n Returns:\n Dictionary with configs from config.yaml\n\n \"\"\"\n Loader = yaml.FullLoader\n with open(config_path, 'r') as config_file:\n cfg = yaml.load(config_file, Loader=Loader)\n\n return cfg\n\n\ndef _add_kwargs(cfg, kwargs):\n \"\"\"Add keyword arguments to config\n\n Args:\n cfg: (dict) Dictionary of configs from config.yaml\n kwargs: (dict) Dictionary of keyword arguments\n\n Returns:\n Union of cfg and kwargs\n\n \"\"\"\n for key, item in kwargs.items():\n if isinstance(item, dict):\n if key in cfg:\n cfg[key] = _add_kwargs(cfg[key], item)\n else:\n cfg[key] = item\n else:\n cfg[key] = item\n return cfg\n\n\ndef train_model_and_embed_images(config_path: str = None, **kwargs) -> Tuple[\n np.ndarray, List[int], List[str]\n]:\n \"\"\"Train a self-supervised model and use it to embed images.\n\n First trains a modle using the _train_cli(),\n then embeds with the _embed_cli().\n All arguments passed to the CLI functions\n can 
also be passed to this function (see below for an example).\n\n Args:\n config_path:\n Path to config.yaml. If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Embeddings, labels, and filenames of the images.\n Embeddings are of shape (n_samples, embedding_size)\n len(labels) = len(filenames) = n_samples\n\n Examples:\n >>> import lightly\n >>>\n >>> # train a model and embed images with default configs\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data')\n >>>\n >>> #\u00a0train a model and embed images with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # train a model and embed images with default settings + overwrites\n >>> my_trainer = {max_epochs: 10}\n >>> embeddings, _, _ = lightly.train_model_and_embed_images(\n >>> input_dir='path/to/data', trainer=my_trainer)\n\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n\n checkpoint = _train_cli(config_args, is_cli_call=False)\n config_args['checkpoint'] = checkpoint\n embeddings, labels, filenames = _embed_cli(config_args, is_cli_call=False)\n return embeddings, labels, filenames\n\n\ndef train_embedding_model(config_path: str = None, **kwargs):\n \"\"\"Train a self-supervised model.\n\n Calls the same function as lightly-train. All arguments passed to\n lightly-train can also be passed to this function (see below for an\n example).\n\n Args:\n config_path:\n Path to config.yaml. If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Path to checkpoint of the trained embedding model.\n\n Examples:\n >>> import lightly\n >>>\n >>> # train a model with default configs\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data')\n >>>\n >>> #\u00a0train a model with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # train a model with default settings and overwrites: large batch\n >>> # sizes are benefitial for self-supervised training and more \n >>> #\u00a0workers speed up the dataloading process.\n >>> my_loader = {\n >>> batch_size: 100,\n >>> num_workers: 8,\n >>> }\n >>> checkpoint_path = lightly.train_embedding_model(\n >>> input_dir='path/to/data', loader=my_loader)\n >>> #\u00a0the command above is equivalent to:\n >>> #\u00a0lightly-train input_dir='path/to/data' loader.batch_size=100 loader.num_workers=8\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n\n return _train_cli(config_args, is_cli_call=False)\n\n\ndef embed_images(checkpoint: str, config_path: str = None, **kwargs):\n \"\"\"Embed images with a self-supervised model.\n\n Calls the same function as lightly-embed. All arguments passed to\n lightly-embed can also be passed to this function (see below for an\n example).\n\n Args:\n checkpoint:\n Path to the checkpoint file for the embedding model.\n config_path:\n Path to config.yaml. 
If None, the default configs will be used.\n **kwargs:\n Overwrite default configs py passing keyword arguments.\n\n Returns:\n Embeddings, labels, and filenames of the images.\n\n Examples:\n >>> import lightly\n >>> my_checkpoint_path = 'path/to/checkpoint.ckpt'\n >>>\n >>> # embed images with default configs\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data')\n >>>\n >>> #\u00a0embed images with separate config file\n >>> my_config_path = 'my/config/file.yaml'\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data', config_path=my_config_path)\n >>>\n >>> # embed images with default settings and overwrites: at inference,\n >>> #\u00a0we can use larger input_sizes because it requires less memory.\n >>> my_collate = {input_size: 256}\n >>> embeddings, _, _ = lightly.embed_images(\n >>> my_checkpoint_path, input_dir='path/to/data', collate=my_collate)\n >>> #\u00a0the command above is equivalent to:\n >>> #\u00a0lightly-embed input_dir='path/to/data' collate.input_size=256\n\n \"\"\"\n config_path = _get_config_path(config_path)\n config_args = _load_config_file(config_path)\n config_args = _add_kwargs(config_args, kwargs)\n\n config_args['checkpoint'] = checkpoint\n\n return _embed_cli(config_args, is_cli_call=False)\n", "path": "lightly/core.py"}]} | 3,468 | 964 |
gh_patches_debug_632 | rasdani/github-patches | git_diff | pex-tool__pex-2245 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.147
On the docket:
+ [x] pex does not use .pip/pip.conf to resolve packages #336 / #838
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.146"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.146"
+__version__ = "2.1.147"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.146\"\n+__version__ = \"2.1.147\"\n", "issue": "Release 2.1.147\nOn the docket:\r\n+ [x] pex does not use .pip/pip.conf to resolve packages #336 / #838\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.146\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.147\"\n", "path": "pex/version.py"}]} | 351 | 98 |
gh_patches_debug_2936 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2387 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal Server Errors (e.g. on delete of user)
**Describe the bug**
Some actions result in an internal server error. I have set up a dockerless installation and can access the application and the admin pages, but certain actions fail. For example:
**To Reproduce**
Steps to reproduce the behavior:
1. Click "Delete user" after providing the admin password. The browser shows an internal server error. The error logged by the application is:
```
Internal Server Error: /settings/reports/2/delete
Traceback (most recent call last):
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 204, in _get_response
response = response.render()
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py", line 105, in render
self.content = self.rendered_content
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py", line 81, in rendered_content
template = self.resolve_template(self.template_name)
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py", line 65, in resolve_template
return get_template(template, using=self.using)
File "/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/loader.py", line 19, in get_template
raise TemplateDoesNotExist(template_name, chain=chain)
django.template.exceptions.TemplateDoesNotExist: user_admin/user.html
```
--- END ISSUE ---
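The traceback shows the view rendering a template path that the loader cannot resolve. A quick way to confirm which candidate paths exist, from a Django shell inside the project (a sketch; the second path mirrors the `settings/...` template layout used by the other admin views):

```python
# python manage.py shell
from django.template import TemplateDoesNotExist
from django.template.loader import get_template

for name in ("user_admin/user.html", "settings/users/user.html"):
    try:
        get_template(name)
        print(name, "-> found")
    except TemplateDoesNotExist:
        print(name, "-> missing")
```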
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/admin/reports.py`
Content:
```
1 """ moderation via flagged posts and users """
2 from django.contrib.auth.decorators import login_required, permission_required
3 from django.core.paginator import Paginator
4 from django.core.exceptions import PermissionDenied
5 from django.shortcuts import get_object_or_404, redirect
6 from django.template.response import TemplateResponse
7 from django.utils.decorators import method_decorator
8 from django.views import View
9
10 from bookwyrm import forms, models
11 from bookwyrm.settings import PAGE_LENGTH
12
13
14 # pylint: disable=no-self-use
15 @method_decorator(login_required, name="dispatch")
16 @method_decorator(
17 permission_required("bookwyrm.moderate_user", raise_exception=True),
18 name="dispatch",
19 )
20 @method_decorator(
21 permission_required("bookwyrm.moderate_post", raise_exception=True),
22 name="dispatch",
23 )
24 class ReportsAdmin(View):
25 """list of reports"""
26
27 def get(self, request):
28 """view current reports"""
29 filters = {}
30
31 resolved = request.GET.get("resolved") == "true"
32 server = request.GET.get("server")
33 if server:
34 filters["user__federated_server__server_name"] = server
35 username = request.GET.get("username")
36 if username:
37 filters["user__username__icontains"] = username
38 filters["resolved"] = resolved
39
40 reports = models.Report.objects.filter(**filters)
41 paginated = Paginator(reports, PAGE_LENGTH)
42 page = paginated.get_page(request.GET.get("page"))
43 data = {
44 "resolved": resolved,
45 "server": server,
46 "reports": page,
47 "page_range": paginated.get_elided_page_range(
48 page.number, on_each_side=2, on_ends=1
49 ),
50 }
51 return TemplateResponse(request, "settings/reports/reports.html", data)
52
53
54 @method_decorator(login_required, name="dispatch")
55 @method_decorator(
56 permission_required("bookwyrm.moderate_user", raise_exception=True),
57 name="dispatch",
58 )
59 @method_decorator(
60 permission_required("bookwyrm.moderate_post", raise_exception=True),
61 name="dispatch",
62 )
63 class ReportAdmin(View):
64 """view a specific report"""
65
66 def get(self, request, report_id):
67 """load a report"""
68 data = {
69 "report": get_object_or_404(models.Report, id=report_id),
70 "group_form": forms.UserGroupForm(),
71 }
72 return TemplateResponse(request, "settings/reports/report.html", data)
73
74 def post(self, request, report_id):
75 """comment on a report"""
76 report = get_object_or_404(models.Report, id=report_id)
77 models.ReportComment.objects.create(
78 user=request.user,
79 report=report,
80 note=request.POST.get("note"),
81 )
82 return redirect("settings-report", report.id)
83
84
85 @login_required
86 @permission_required("bookwyrm.moderate_user")
87 def suspend_user(_, user_id):
88 """mark an account as inactive"""
89 user = get_object_or_404(models.User, id=user_id)
90 user.is_active = False
91 user.deactivation_reason = "moderator_suspension"
92 # this isn't a full deletion, so we don't want to tell the world
93 user.save(broadcast=False)
94 return redirect("settings-user", user.id)
95
96
97 @login_required
98 @permission_required("bookwyrm.moderate_user")
99 def unsuspend_user(_, user_id):
100 """mark an account as inactive"""
101 user = get_object_or_404(models.User, id=user_id)
102 user.is_active = True
103 user.deactivation_reason = None
104 # this isn't a full deletion, so we don't want to tell the world
105 user.save(broadcast=False)
106 return redirect("settings-user", user.id)
107
108
109 @login_required
110 @permission_required("bookwyrm.moderate_user")
111 def moderator_delete_user(request, user_id):
112 """permanently delete a user"""
113 user = get_object_or_404(models.User, id=user_id)
114
115 # we can't delete users on other instances
116 if not user.local:
117 raise PermissionDenied()
118
119 form = forms.DeleteUserForm(request.POST, instance=user)
120
121 moderator = models.User.objects.get(id=request.user.id)
122 # check the moderator's password
123 if form.is_valid() and moderator.check_password(form.cleaned_data["password"]):
124 user.deactivation_reason = "moderator_deletion"
125 user.delete()
126 return redirect("settings-user", user.id)
127
128 form.errors["password"] = ["Invalid password"]
129
130 data = {"user": user, "group_form": forms.UserGroupForm(), "form": form}
131 return TemplateResponse(request, "user_admin/user.html", data)
132
133
134 @login_required
135 @permission_required("bookwyrm.moderate_post")
136 def resolve_report(_, report_id):
137 """mark a report as (un)resolved"""
138 report = get_object_or_404(models.Report, id=report_id)
139 report.resolved = not report.resolved
140 report.save()
141 if not report.resolved:
142 return redirect("settings-report", report.id)
143 return redirect("settings-reports")
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/admin/reports.py b/bookwyrm/views/admin/reports.py
--- a/bookwyrm/views/admin/reports.py
+++ b/bookwyrm/views/admin/reports.py
@@ -128,7 +128,7 @@
form.errors["password"] = ["Invalid password"]
data = {"user": user, "group_form": forms.UserGroupForm(), "form": form}
- return TemplateResponse(request, "user_admin/user.html", data)
+ return TemplateResponse(request, "settings/users/user.html", data)
@login_required
| {"golden_diff": "diff --git a/bookwyrm/views/admin/reports.py b/bookwyrm/views/admin/reports.py\n--- a/bookwyrm/views/admin/reports.py\n+++ b/bookwyrm/views/admin/reports.py\n@@ -128,7 +128,7 @@\n form.errors[\"password\"] = [\"Invalid password\"]\n \n data = {\"user\": user, \"group_form\": forms.UserGroupForm(), \"form\": form}\n- return TemplateResponse(request, \"user_admin/user.html\", data)\n+ return TemplateResponse(request, \"settings/users/user.html\", data)\n \n \n @login_required\n", "issue": "Internal Server Errors (e.g. on delete of user)\n**Describe the bug**\r\nInternal server error for some actions. I have set up a dockerless installation and am able to access the application and the admin pages. However, some actions create errors. For example: \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Clicking delete user after providing admin password. Browser shows internal server error. Error in application is:\r\n\r\n```\r\nInternal Server Error: /settings/reports/2/delete\r\nTraceback (most recent call last):\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 47, in inner\r\n response = get_response(request)\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/core/handlers/base.py\", line 204, in _get_response\r\n response = response.render()\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py\", line 105, in render\r\n self.content = self.rendered_content\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py\", line 81, in rendered_content\r\n template = self.resolve_template(self.template_name)\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/response.py\", line 65, in resolve_template\r\n return get_template(template, using=self.using)\r\n File \"/opt/bookwyrm/venv/lib/python3.10/site-packages/django/template/loader.py\", line 19, in get_template\r\n raise TemplateDoesNotExist(template_name, chain=chain)\r\ndjango.template.exceptions.TemplateDoesNotExist: user_admin/user.html\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\" moderation via flagged posts and users \"\"\"\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.core.paginator import Paginator\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\n\n\n# pylint: disable=no-self-use\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n)\n@method_decorator(\n permission_required(\"bookwyrm.moderate_post\", raise_exception=True),\n name=\"dispatch\",\n)\nclass ReportsAdmin(View):\n \"\"\"list of reports\"\"\"\n\n def get(self, request):\n \"\"\"view current reports\"\"\"\n filters = {}\n\n resolved = request.GET.get(\"resolved\") == \"true\"\n server = request.GET.get(\"server\")\n if server:\n filters[\"user__federated_server__server_name\"] = server\n username = request.GET.get(\"username\")\n if username:\n filters[\"user__username__icontains\"] = username\n filters[\"resolved\"] = resolved\n\n reports = models.Report.objects.filter(**filters)\n paginated = Paginator(reports, PAGE_LENGTH)\n page = 
paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"resolved\": resolved,\n \"server\": server,\n \"reports\": page,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n return TemplateResponse(request, \"settings/reports/reports.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n)\n@method_decorator(\n permission_required(\"bookwyrm.moderate_post\", raise_exception=True),\n name=\"dispatch\",\n)\nclass ReportAdmin(View):\n \"\"\"view a specific report\"\"\"\n\n def get(self, request, report_id):\n \"\"\"load a report\"\"\"\n data = {\n \"report\": get_object_or_404(models.Report, id=report_id),\n \"group_form\": forms.UserGroupForm(),\n }\n return TemplateResponse(request, \"settings/reports/report.html\", data)\n\n def post(self, request, report_id):\n \"\"\"comment on a report\"\"\"\n report = get_object_or_404(models.Report, id=report_id)\n models.ReportComment.objects.create(\n user=request.user,\n report=report,\n note=request.POST.get(\"note\"),\n )\n return redirect(\"settings-report\", report.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef suspend_user(_, user_id):\n \"\"\"mark an account as inactive\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n user.is_active = False\n user.deactivation_reason = \"moderator_suspension\"\n # this isn't a full deletion, so we don't want to tell the world\n user.save(broadcast=False)\n return redirect(\"settings-user\", user.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef unsuspend_user(_, user_id):\n \"\"\"mark an account as inactive\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n user.is_active = True\n user.deactivation_reason = None\n # this isn't a full deletion, so we don't want to tell the world\n user.save(broadcast=False)\n return redirect(\"settings-user\", user.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef moderator_delete_user(request, user_id):\n \"\"\"permanently delete a user\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n\n # we can't delete users on other instances\n if not user.local:\n raise PermissionDenied()\n\n form = forms.DeleteUserForm(request.POST, instance=user)\n\n moderator = models.User.objects.get(id=request.user.id)\n # check the moderator's password\n if form.is_valid() and moderator.check_password(form.cleaned_data[\"password\"]):\n user.deactivation_reason = \"moderator_deletion\"\n user.delete()\n return redirect(\"settings-user\", user.id)\n\n form.errors[\"password\"] = [\"Invalid password\"]\n\n data = {\"user\": user, \"group_form\": forms.UserGroupForm(), \"form\": form}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_post\")\ndef resolve_report(_, report_id):\n \"\"\"mark a report as (un)resolved\"\"\"\n report = get_object_or_404(models.Report, id=report_id)\n report.resolved = not report.resolved\n report.save()\n if not report.resolved:\n return redirect(\"settings-report\", report.id)\n return redirect(\"settings-reports\")\n", "path": "bookwyrm/views/admin/reports.py"}], "after_files": [{"content": "\"\"\" moderation via flagged posts and users \"\"\"\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.core.paginator import Paginator\nfrom 
django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\n\n\n# pylint: disable=no-self-use\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n)\n@method_decorator(\n permission_required(\"bookwyrm.moderate_post\", raise_exception=True),\n name=\"dispatch\",\n)\nclass ReportsAdmin(View):\n \"\"\"list of reports\"\"\"\n\n def get(self, request):\n \"\"\"view current reports\"\"\"\n filters = {}\n\n resolved = request.GET.get(\"resolved\") == \"true\"\n server = request.GET.get(\"server\")\n if server:\n filters[\"user__federated_server__server_name\"] = server\n username = request.GET.get(\"username\")\n if username:\n filters[\"user__username__icontains\"] = username\n filters[\"resolved\"] = resolved\n\n reports = models.Report.objects.filter(**filters)\n paginated = Paginator(reports, PAGE_LENGTH)\n page = paginated.get_page(request.GET.get(\"page\"))\n data = {\n \"resolved\": resolved,\n \"server\": server,\n \"reports\": page,\n \"page_range\": paginated.get_elided_page_range(\n page.number, on_each_side=2, on_ends=1\n ),\n }\n return TemplateResponse(request, \"settings/reports/reports.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n)\n@method_decorator(\n permission_required(\"bookwyrm.moderate_post\", raise_exception=True),\n name=\"dispatch\",\n)\nclass ReportAdmin(View):\n \"\"\"view a specific report\"\"\"\n\n def get(self, request, report_id):\n \"\"\"load a report\"\"\"\n data = {\n \"report\": get_object_or_404(models.Report, id=report_id),\n \"group_form\": forms.UserGroupForm(),\n }\n return TemplateResponse(request, \"settings/reports/report.html\", data)\n\n def post(self, request, report_id):\n \"\"\"comment on a report\"\"\"\n report = get_object_or_404(models.Report, id=report_id)\n models.ReportComment.objects.create(\n user=request.user,\n report=report,\n note=request.POST.get(\"note\"),\n )\n return redirect(\"settings-report\", report.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef suspend_user(_, user_id):\n \"\"\"mark an account as inactive\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n user.is_active = False\n user.deactivation_reason = \"moderator_suspension\"\n # this isn't a full deletion, so we don't want to tell the world\n user.save(broadcast=False)\n return redirect(\"settings-user\", user.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef unsuspend_user(_, user_id):\n \"\"\"mark an account as inactive\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n user.is_active = True\n user.deactivation_reason = None\n # this isn't a full deletion, so we don't want to tell the world\n user.save(broadcast=False)\n return redirect(\"settings-user\", user.id)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_user\")\ndef moderator_delete_user(request, user_id):\n \"\"\"permanently delete a user\"\"\"\n user = get_object_or_404(models.User, id=user_id)\n\n # we can't delete users on other instances\n if not user.local:\n raise PermissionDenied()\n\n form = 
forms.DeleteUserForm(request.POST, instance=user)\n\n moderator = models.User.objects.get(id=request.user.id)\n # check the moderator's password\n if form.is_valid() and moderator.check_password(form.cleaned_data[\"password\"]):\n user.deactivation_reason = \"moderator_deletion\"\n user.delete()\n return redirect(\"settings-user\", user.id)\n\n form.errors[\"password\"] = [\"Invalid password\"]\n\n data = {\"user\": user, \"group_form\": forms.UserGroupForm(), \"form\": form}\n return TemplateResponse(request, \"settings/users/user.html\", data)\n\n\n@login_required\n@permission_required(\"bookwyrm.moderate_post\")\ndef resolve_report(_, report_id):\n \"\"\"mark a report as (un)resolved\"\"\"\n report = get_object_or_404(models.Report, id=report_id)\n report.resolved = not report.resolved\n report.save()\n if not report.resolved:\n return redirect(\"settings-report\", report.id)\n return redirect(\"settings-reports\")\n", "path": "bookwyrm/views/admin/reports.py"}]} | 2,067 | 127 |
gh_patches_debug_12622 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-4754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BigQuery: SchemaField cannot load resource with missing `mode` property
The [mode property](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#schema.fields.mode) is optional in the REST API, but this code fails:
```
SchemaField.from_api_repr({'name': 'colname', 'type': 'INT64'})
```
I believe this is due to the line
```
mode=api_repr['mode'].upper(),
```
which does not properly handle when the `mode` property is missing.
Related to https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3723 as it is often convenient to omit the mode parameter when defining a schema file by hand.
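For orientation, here is a minimal, self-contained sketch of the kind of guard that avoids the failure; the helper name is illustrative (it is not the library API), and the defensive `get('mode', 'NULLABLE')` pattern mirrors what `_parse_schema_resource` in the same module already does for fields parsed from a resource.

```python
# Illustrative sketch only -- not the google-cloud-bigquery API.
# Optional keys are read defensively instead of assuming 'mode' is present.
def schema_field_from_api_repr(api_repr):
    mode = api_repr.get('mode', 'NULLABLE')   # default when the key is absent
    fields = api_repr.get('fields', ())       # sub-fields are optional as well
    return {
        'name': api_repr['name'],
        'type': api_repr['type'].upper(),
        'mode': mode.upper(),
        'fields': [schema_field_from_api_repr(f) for f in fields],
    }

# The failing input from the report now deserializes without a KeyError:
print(schema_field_from_api_repr({'name': 'colname', 'type': 'INT64'}))
```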
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bigquery/google/cloud/bigquery/schema.py`
Content:
```
1 # Copyright 2015 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Schemas for BigQuery tables / queries."""
16
17
18 class SchemaField(object):
19 """Describe a single field within a table schema.
20
21 :type name: str
22 :param name: the name of the field.
23
24 :type field_type: str
25 :param field_type: the type of the field (one of 'STRING', 'INTEGER',
26 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').
27
28 :type mode: str
29 :param mode: the mode of the field (one of 'NULLABLE', 'REQUIRED',
30 or 'REPEATED').
31
32 :type description: str
33 :param description: optional description for the field.
34
35 :type fields: tuple of :class:`~google.cloud.bigquery.schema.SchemaField`
36 :param fields: subfields (requires ``field_type`` of 'RECORD').
37 """
38 def __init__(self, name, field_type, mode='NULLABLE',
39 description=None, fields=()):
40 self._name = name
41 self._field_type = field_type
42 self._mode = mode
43 self._description = description
44 self._fields = tuple(fields)
45
46 @classmethod
47 def from_api_repr(cls, api_repr):
48 """Return a ``SchemaField`` object deserialized from a dictionary.
49
50 Args:
51 api_repr (Mapping[str, str]): The serialized representation
52 of the SchemaField, such as what is output by
53 :meth:`to_api_repr`.
54
55 Returns:
56 google.cloud.biquery.schema.SchemaField:
57 The ``SchemaField`` object.
58 """
59 return cls(
60 field_type=api_repr['type'].upper(),
61 fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],
62 mode=api_repr['mode'].upper(),
63 name=api_repr['name'],
64 )
65
66 @property
67 def name(self):
68 """str: The name of the field."""
69 return self._name
70
71 @property
72 def field_type(self):
73 """str: The type of the field.
74
75 Will be one of 'STRING', 'INTEGER', 'FLOAT', 'BOOLEAN',
76 'TIMESTAMP' or 'RECORD'.
77 """
78 return self._field_type
79
80 @property
81 def mode(self):
82 """str: The mode of the field.
83
84 Will be one of 'NULLABLE', 'REQUIRED', or 'REPEATED'.
85 """
86 return self._mode
87
88 @property
89 def is_nullable(self):
90 """Check whether 'mode' is 'nullable'."""
91 return self._mode == 'NULLABLE'
92
93 @property
94 def description(self):
95 """Optional[str]: Description for the field."""
96 return self._description
97
98 @property
99 def fields(self):
100 """tuple: Subfields contained in this field.
101
102 If ``field_type`` is not 'RECORD', this property must be
103 empty / unset.
104 """
105 return self._fields
106
107 def to_api_repr(self):
108 """Return a dictionary representing this schema field.
109
110 Returns:
111 dict: A dictionary representing the SchemaField in a serialized
112 form.
113 """
114 # Put together the basic representation. See http://bit.ly/2hOAT5u.
115 answer = {
116 'mode': self.mode.lower(),
117 'name': self.name,
118 'type': self.field_type.lower(),
119 }
120
121 # If this is a RECORD type, then sub-fields are also included,
122 # add this to the serialized representation.
123 if self.field_type.upper() == 'RECORD':
124 answer['fields'] = [f.to_api_repr() for f in self.fields]
125
126 # Done; return the serialized dictionary.
127 return answer
128
129 def _key(self):
130 """A tuple key that uniquely describes this field.
131
132 Used to compute this instance's hashcode and evaluate equality.
133
134 Returns:
135 tuple: The contents of this
136 :class:`~google.cloud.bigquery.schema.SchemaField`.
137 """
138 return (
139 self._name,
140 self._field_type.lower(),
141 self._mode,
142 self._description,
143 self._fields,
144 )
145
146 def __eq__(self, other):
147 if not isinstance(other, SchemaField):
148 return NotImplemented
149 return self._key() == other._key()
150
151 def __ne__(self, other):
152 return not self == other
153
154 def __hash__(self):
155 return hash(self._key())
156
157 def __repr__(self):
158 return 'SchemaField{}'.format(self._key())
159
160
161 def _parse_schema_resource(info):
162 """Parse a resource fragment into a schema field.
163
164 :type info: mapping
165 :param info: should contain a "fields" key to be parsed
166
167 :rtype:
168 list of :class:`google.cloud.bigquery.schema.SchemaField`, or
169 ``NoneType``
170 :returns: a list of parsed fields, or ``None`` if no "fields" key is
171 present in ``info``.
172 """
173 if 'fields' not in info:
174 return ()
175
176 schema = []
177 for r_field in info['fields']:
178 name = r_field['name']
179 field_type = r_field['type']
180 mode = r_field.get('mode', 'NULLABLE')
181 description = r_field.get('description')
182 sub_fields = _parse_schema_resource(r_field)
183 schema.append(
184 SchemaField(name, field_type, mode, description, sub_fields))
185 return schema
186
187
188 def _build_schema_resource(fields):
189 """Generate a resource fragment for a schema.
190
191 :type fields:
192 sequence of :class:`~google.cloud.bigquery.schema.SchemaField`
193 :param fields: schema to be dumped
194
195 :rtype: mapping
196 :returns: a mapping describing the schema of the supplied fields.
197 """
198 infos = []
199 for field in fields:
200 info = {'name': field.name,
201 'type': field.field_type,
202 'mode': field.mode}
203 if field.description is not None:
204 info['description'] = field.description
205 if field.fields:
206 info['fields'] = _build_schema_resource(field.fields)
207 infos.append(info)
208 return infos
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bigquery/google/cloud/bigquery/schema.py b/bigquery/google/cloud/bigquery/schema.py
--- a/bigquery/google/cloud/bigquery/schema.py
+++ b/bigquery/google/cloud/bigquery/schema.py
@@ -56,10 +56,13 @@
google.cloud.biquery.schema.SchemaField:
The ``SchemaField`` object.
"""
+ # Handle optional properties with default values
+ mode = api_repr.get('mode', 'NULLABLE')
+ fields = api_repr.get('fields', ())
return cls(
field_type=api_repr['type'].upper(),
- fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],
- mode=api_repr['mode'].upper(),
+ fields=[cls.from_api_repr(f) for f in fields],
+ mode=mode.upper(),
name=api_repr['name'],
)
| {"golden_diff": "diff --git a/bigquery/google/cloud/bigquery/schema.py b/bigquery/google/cloud/bigquery/schema.py\n--- a/bigquery/google/cloud/bigquery/schema.py\n+++ b/bigquery/google/cloud/bigquery/schema.py\n@@ -56,10 +56,13 @@\n google.cloud.biquery.schema.SchemaField:\n The ``SchemaField`` object.\n \"\"\"\n+ # Handle optional properties with default values\n+ mode = api_repr.get('mode', 'NULLABLE')\n+ fields = api_repr.get('fields', ())\n return cls(\n field_type=api_repr['type'].upper(),\n- fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],\n- mode=api_repr['mode'].upper(),\n+ fields=[cls.from_api_repr(f) for f in fields],\n+ mode=mode.upper(),\n name=api_repr['name'],\n )\n", "issue": "BigQuery: SchemaField cannot load resource with missing `mode` property\nThe [mode property](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#schema.fields.mode) is optional in the REST API, but this code fails:\r\n\r\n```\r\nSchemaField.from_api_repr({'name': 'colname', 'type': 'INT64'})\r\n```\r\n\r\nI believe this is due to the line\r\n\r\n```\r\nmode=api_repr['mode'].upper(),\r\n```\r\n\r\nwhich does not properly handle when the `mode` property is missing.\r\n\r\nRelated to https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3723 as it is often convenient to omit the mode parameter when defining a schema file by hand.\r\n\n", "before_files": [{"content": "# Copyright 2015 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Schemas for BigQuery tables / queries.\"\"\"\n\n\nclass SchemaField(object):\n \"\"\"Describe a single field within a table schema.\n\n :type name: str\n :param name: the name of the field.\n\n :type field_type: str\n :param field_type: the type of the field (one of 'STRING', 'INTEGER',\n 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').\n\n :type mode: str\n :param mode: the mode of the field (one of 'NULLABLE', 'REQUIRED',\n or 'REPEATED').\n\n :type description: str\n :param description: optional description for the field.\n\n :type fields: tuple of :class:`~google.cloud.bigquery.schema.SchemaField`\n :param fields: subfields (requires ``field_type`` of 'RECORD').\n \"\"\"\n def __init__(self, name, field_type, mode='NULLABLE',\n description=None, fields=()):\n self._name = name\n self._field_type = field_type\n self._mode = mode\n self._description = description\n self._fields = tuple(fields)\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Return a ``SchemaField`` object deserialized from a dictionary.\n\n Args:\n api_repr (Mapping[str, str]): The serialized representation\n of the SchemaField, such as what is output by\n :meth:`to_api_repr`.\n\n Returns:\n google.cloud.biquery.schema.SchemaField:\n The ``SchemaField`` object.\n \"\"\"\n return cls(\n field_type=api_repr['type'].upper(),\n fields=[cls.from_api_repr(f) for f in api_repr.get('fields', ())],\n mode=api_repr['mode'].upper(),\n name=api_repr['name'],\n )\n\n @property\n def name(self):\n \"\"\"str: The name of the field.\"\"\"\n return 
self._name\n\n @property\n def field_type(self):\n \"\"\"str: The type of the field.\n\n Will be one of 'STRING', 'INTEGER', 'FLOAT', 'BOOLEAN',\n 'TIMESTAMP' or 'RECORD'.\n \"\"\"\n return self._field_type\n\n @property\n def mode(self):\n \"\"\"str: The mode of the field.\n\n Will be one of 'NULLABLE', 'REQUIRED', or 'REPEATED'.\n \"\"\"\n return self._mode\n\n @property\n def is_nullable(self):\n \"\"\"Check whether 'mode' is 'nullable'.\"\"\"\n return self._mode == 'NULLABLE'\n\n @property\n def description(self):\n \"\"\"Optional[str]: Description for the field.\"\"\"\n return self._description\n\n @property\n def fields(self):\n \"\"\"tuple: Subfields contained in this field.\n\n If ``field_type`` is not 'RECORD', this property must be\n empty / unset.\n \"\"\"\n return self._fields\n\n def to_api_repr(self):\n \"\"\"Return a dictionary representing this schema field.\n\n Returns:\n dict: A dictionary representing the SchemaField in a serialized\n form.\n \"\"\"\n # Put together the basic representation. See http://bit.ly/2hOAT5u.\n answer = {\n 'mode': self.mode.lower(),\n 'name': self.name,\n 'type': self.field_type.lower(),\n }\n\n # If this is a RECORD type, then sub-fields are also included,\n # add this to the serialized representation.\n if self.field_type.upper() == 'RECORD':\n answer['fields'] = [f.to_api_repr() for f in self.fields]\n\n # Done; return the serialized dictionary.\n return answer\n\n def _key(self):\n \"\"\"A tuple key that uniquely describes this field.\n\n Used to compute this instance's hashcode and evaluate equality.\n\n Returns:\n tuple: The contents of this\n :class:`~google.cloud.bigquery.schema.SchemaField`.\n \"\"\"\n return (\n self._name,\n self._field_type.lower(),\n self._mode,\n self._description,\n self._fields,\n )\n\n def __eq__(self, other):\n if not isinstance(other, SchemaField):\n return NotImplemented\n return self._key() == other._key()\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash(self._key())\n\n def __repr__(self):\n return 'SchemaField{}'.format(self._key())\n\n\ndef _parse_schema_resource(info):\n \"\"\"Parse a resource fragment into a schema field.\n\n :type info: mapping\n :param info: should contain a \"fields\" key to be parsed\n\n :rtype:\n list of :class:`google.cloud.bigquery.schema.SchemaField`, or\n ``NoneType``\n :returns: a list of parsed fields, or ``None`` if no \"fields\" key is\n present in ``info``.\n \"\"\"\n if 'fields' not in info:\n return ()\n\n schema = []\n for r_field in info['fields']:\n name = r_field['name']\n field_type = r_field['type']\n mode = r_field.get('mode', 'NULLABLE')\n description = r_field.get('description')\n sub_fields = _parse_schema_resource(r_field)\n schema.append(\n SchemaField(name, field_type, mode, description, sub_fields))\n return schema\n\n\ndef _build_schema_resource(fields):\n \"\"\"Generate a resource fragment for a schema.\n\n :type fields:\n sequence of :class:`~google.cloud.bigquery.schema.SchemaField`\n :param fields: schema to be dumped\n\n :rtype: mapping\n :returns: a mapping describing the schema of the supplied fields.\n \"\"\"\n infos = []\n for field in fields:\n info = {'name': field.name,\n 'type': field.field_type,\n 'mode': field.mode}\n if field.description is not None:\n info['description'] = field.description\n if field.fields:\n info['fields'] = _build_schema_resource(field.fields)\n infos.append(info)\n return infos\n", "path": "bigquery/google/cloud/bigquery/schema.py"}], "after_files": [{"content": "# 
Copyright 2015 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Schemas for BigQuery tables / queries.\"\"\"\n\n\nclass SchemaField(object):\n \"\"\"Describe a single field within a table schema.\n\n :type name: str\n :param name: the name of the field.\n\n :type field_type: str\n :param field_type: the type of the field (one of 'STRING', 'INTEGER',\n 'FLOAT', 'BOOLEAN', 'TIMESTAMP' or 'RECORD').\n\n :type mode: str\n :param mode: the mode of the field (one of 'NULLABLE', 'REQUIRED',\n or 'REPEATED').\n\n :type description: str\n :param description: optional description for the field.\n\n :type fields: tuple of :class:`~google.cloud.bigquery.schema.SchemaField`\n :param fields: subfields (requires ``field_type`` of 'RECORD').\n \"\"\"\n def __init__(self, name, field_type, mode='NULLABLE',\n description=None, fields=()):\n self._name = name\n self._field_type = field_type\n self._mode = mode\n self._description = description\n self._fields = tuple(fields)\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Return a ``SchemaField`` object deserialized from a dictionary.\n\n Args:\n api_repr (Mapping[str, str]): The serialized representation\n of the SchemaField, such as what is output by\n :meth:`to_api_repr`.\n\n Returns:\n google.cloud.biquery.schema.SchemaField:\n The ``SchemaField`` object.\n \"\"\"\n # Handle optional properties with default values\n mode = api_repr.get('mode', 'NULLABLE')\n fields = api_repr.get('fields', ())\n return cls(\n field_type=api_repr['type'].upper(),\n fields=[cls.from_api_repr(f) for f in fields],\n mode=mode.upper(),\n name=api_repr['name'],\n )\n\n @property\n def name(self):\n \"\"\"str: The name of the field.\"\"\"\n return self._name\n\n @property\n def field_type(self):\n \"\"\"str: The type of the field.\n\n Will be one of 'STRING', 'INTEGER', 'FLOAT', 'BOOLEAN',\n 'TIMESTAMP' or 'RECORD'.\n \"\"\"\n return self._field_type\n\n @property\n def mode(self):\n \"\"\"str: The mode of the field.\n\n Will be one of 'NULLABLE', 'REQUIRED', or 'REPEATED'.\n \"\"\"\n return self._mode\n\n @property\n def is_nullable(self):\n \"\"\"Check whether 'mode' is 'nullable'.\"\"\"\n return self._mode == 'NULLABLE'\n\n @property\n def description(self):\n \"\"\"Optional[str]: Description for the field.\"\"\"\n return self._description\n\n @property\n def fields(self):\n \"\"\"tuple: Subfields contained in this field.\n\n If ``field_type`` is not 'RECORD', this property must be\n empty / unset.\n \"\"\"\n return self._fields\n\n def to_api_repr(self):\n \"\"\"Return a dictionary representing this schema field.\n\n Returns:\n dict: A dictionary representing the SchemaField in a serialized\n form.\n \"\"\"\n # Put together the basic representation. 
See http://bit.ly/2hOAT5u.\n answer = {\n 'mode': self.mode.lower(),\n 'name': self.name,\n 'type': self.field_type.lower(),\n }\n\n # If this is a RECORD type, then sub-fields are also included,\n # add this to the serialized representation.\n if self.field_type.upper() == 'RECORD':\n answer['fields'] = [f.to_api_repr() for f in self.fields]\n\n # Done; return the serialized dictionary.\n return answer\n\n def _key(self):\n \"\"\"A tuple key that uniquely describes this field.\n\n Used to compute this instance's hashcode and evaluate equality.\n\n Returns:\n tuple: The contents of this\n :class:`~google.cloud.bigquery.schema.SchemaField`.\n \"\"\"\n return (\n self._name,\n self._field_type.lower(),\n self._mode,\n self._description,\n self._fields,\n )\n\n def __eq__(self, other):\n if not isinstance(other, SchemaField):\n return NotImplemented\n return self._key() == other._key()\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash(self._key())\n\n def __repr__(self):\n return 'SchemaField{}'.format(self._key())\n\n\ndef _parse_schema_resource(info):\n \"\"\"Parse a resource fragment into a schema field.\n\n :type info: mapping\n :param info: should contain a \"fields\" key to be parsed\n\n :rtype:\n list of :class:`google.cloud.bigquery.schema.SchemaField`, or\n ``NoneType``\n :returns: a list of parsed fields, or ``None`` if no \"fields\" key is\n present in ``info``.\n \"\"\"\n if 'fields' not in info:\n return ()\n\n schema = []\n for r_field in info['fields']:\n name = r_field['name']\n field_type = r_field['type']\n mode = r_field.get('mode', 'NULLABLE')\n description = r_field.get('description')\n sub_fields = _parse_schema_resource(r_field)\n schema.append(\n SchemaField(name, field_type, mode, description, sub_fields))\n return schema\n\n\ndef _build_schema_resource(fields):\n \"\"\"Generate a resource fragment for a schema.\n\n :type fields:\n sequence of :class:`~google.cloud.bigquery.schema.SchemaField`\n :param fields: schema to be dumped\n\n :rtype: mapping\n :returns: a mapping describing the schema of the supplied fields.\n \"\"\"\n infos = []\n for field in fields:\n info = {'name': field.name,\n 'type': field.field_type,\n 'mode': field.mode}\n if field.description is not None:\n info['description'] = field.description\n if field.fields:\n info['fields'] = _build_schema_resource(field.fields)\n infos.append(info)\n return infos\n", "path": "bigquery/google/cloud/bigquery/schema.py"}]} | 2,384 | 191 |
gh_patches_debug_26000 | rasdani/github-patches | git_diff | oppia__oppia-17178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: Quiz does not automatically move to next question on input
### Describe the bug
I was taking a lesson on decimal place values and when I input a correct answer, I get a message that
_"Note: When you select an option, the quiz will automatically move to the next question."_
But it doesn't move. It still asks me to click on 'continue'

### Steps To Reproduce
1. Go to Math Lessons
2. Click on Decimals (This occurs in other lessons too)
3. Take the introductory quizzes
4. Take quiz on Decimals
5. Answer it correctly
6. See error
### Expected Behavior
I expect to be taken to the next page when I answer correctly, as displayed by the message.
### Screenshots/Videos

### What device are you using?
Desktop
### Operating System
Windows
### What browsers are you seeing the problem on?
Chrome
### Browser version
Version 111.0.5563.65 (Official Build) (64-bit)
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `extensions/interactions/ItemSelectionInput/ItemSelectionInput.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Python configuration for ItemSelectionInput interaction."""
18
19 from __future__ import annotations
20
21 from extensions.interactions import base
22
23 from typing import List
24
25 MYPY = False
26 if MYPY: # pragma: no cover
27 from extensions import domain
28
29
30 class ItemSelectionInput(base.BaseInteraction):
31 """Interaction for item selection input."""
32
33 name: str = 'Item Selection'
34 description: str = (
35 'Allows learners to select various options.')
36 display_mode: str = base.DISPLAY_MODE_INLINE
37 _dependency_ids: List[str] = []
38 answer_type: str = 'SetOfTranslatableHtmlContentIds'
39 # Radio buttons get unselected when specifying a solution. This needs to be
40 # fixed before solution feature can support this interaction.
41 can_have_solution: bool = False
42 # ItemSelectionInput's submit button is dynamic and is handled
43 # separately.
44 show_generic_submit_button: bool = False
45
46 _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{
47 'name': 'minAllowableSelectionCount',
48 'description': 'Minimum number of selections permitted',
49 'schema': {
50 'type': 'int',
51 'validators': [{
52 'id': 'is_at_least',
53 'min_value': 0,
54 }],
55 },
56 'default_value': 1,
57 }, {
58 'name': 'maxAllowableSelectionCount',
59 'description': 'Maximum number of selections permitted',
60 'schema': {
61 'type': 'int',
62 'validators': [{
63 'id': 'is_at_least',
64 'min_value': 1,
65 }],
66 },
67 'default_value': 1,
68 }, {
69 'name': 'choices',
70 'description': 'Items for selection',
71 'schema': {
72 'type': 'list',
73 'validators': [{
74 'id': 'has_unique_subtitled_contents'
75 }],
76 'items': {
77 'type': 'custom',
78 'obj_type': 'SubtitledHtml',
79 'validators': [{
80 'id': 'has_subtitled_html_non_empty'
81 }],
82 'replacement_ui_config': {
83 'html': {
84 'hide_complex_extensions': True,
85 'placeholder': 'Sample item answer',
86 }
87 }
88 },
89 'ui_config': {
90 'add_element_text': 'Add item for selection',
91 }
92 },
93 'default_value': [{
94 'content_id': None,
95 'html': ''
96 }],
97 }]
98
99 _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{
100 # Table with keyed answer counts for top N answers.
101 'id': 'EnumeratedFrequencyTable',
102 'options': {
103 'column_headers': ['Answer (click to expand/collapse)', 'Count'],
104 'title': 'Top answers',
105 },
106 'calculation_id': 'Top10AnswerFrequencies',
107 'addressed_info_is_supported': True,
108 }]
109
```
Path: `extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Python configuration for MultipleChoiceInput interaction."""
18
19 from __future__ import annotations
20
21 from extensions.interactions import base
22
23 from typing import List, Optional
24
25 MYPY = False
26 if MYPY: # pragma: no cover
27 from extensions import domain
28
29
30 class MultipleChoiceInput(base.BaseInteraction):
31 """Interaction for multiple choice input."""
32
33 name: str = 'Multiple Choice'
34 description: str = (
35 'Allows learners to select one of a list of multiple-choice options.')
36 display_mode: str = base.DISPLAY_MODE_INLINE
37 _dependency_ids: List[str] = []
38 answer_type: str = 'NonnegativeInt'
39 instructions: Optional[str] = None
40 narrow_instructions: Optional[str] = None
41 needs_summary: bool = False
42 # Radio buttons get unselected when specifying a solution. This needs to be
43 # fixed before solution feature can support this interaction.
44 can_have_solution: bool = False
45 show_generic_submit_button: bool = False
46
47 _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{
48 'name': 'choices',
49 'description': 'Multiple Choice options',
50 'schema': {
51 'type': 'list',
52 'validators': [
53 {
54 'id': 'has_length_at_least',
55 'min_value': 1,
56 },
57 {
58 'id': 'has_unique_subtitled_contents'
59 }
60 ],
61 'items': {
62 'type': 'custom',
63 'obj_type': 'SubtitledHtml',
64 'validators': [{
65 'id': 'has_subtitled_html_non_empty'
66 }],
67 'replacement_ui_config': {
68 'html': {
69 'hide_complex_extensions': True,
70 'placeholder': (
71 'Enter an option for the learner to select'),
72 }
73 }
74 },
75 'ui_config': {
76 'add_element_text': 'Add multiple choice option',
77 }
78 },
79 'default_value': [{
80 'content_id': None,
81 'html': ''
82 }],
83 }, {
84 'name': 'showChoicesInShuffledOrder',
85 'description': 'Shuffle answer choices',
86 'schema': {
87 'type': 'bool',
88 },
89 'default_value': True
90 }]
91
92 _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{
93 'id': 'SortedTiles',
94 'options': {'header': 'Top answers', 'use_percentages': True},
95 'calculation_id': 'AnswerFrequencies',
96 'addressed_info_is_supported': True,
97 }]
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py b/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py
--- a/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py
+++ b/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py
@@ -39,9 +39,8 @@
# Radio buttons get unselected when specifying a solution. This needs to be
# fixed before solution feature can support this interaction.
can_have_solution: bool = False
- # ItemSelectionInput's submit button is dynamic and is handled
- # separately.
- show_generic_submit_button: bool = False
+ # ItemSelectionInput interaction must contain a generic submit button.
+ show_generic_submit_button: bool = True
_customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{
'name': 'minAllowableSelectionCount',
diff --git a/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py b/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py
--- a/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py
+++ b/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py
@@ -42,7 +42,8 @@
# Radio buttons get unselected when specifying a solution. This needs to be
# fixed before solution feature can support this interaction.
can_have_solution: bool = False
- show_generic_submit_button: bool = False
+ # MultipleChoiceInput interaction must contain a generic submit button.
+ show_generic_submit_button: bool = True
_customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{
'name': 'choices',
| {"golden_diff": "diff --git a/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py b/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py\n--- a/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py\n+++ b/extensions/interactions/ItemSelectionInput/ItemSelectionInput.py\n@@ -39,9 +39,8 @@\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n- # ItemSelectionInput's submit button is dynamic and is handled\n- # separately.\n- show_generic_submit_button: bool = False\n+ # ItemSelectionInput interaction must contain a generic submit button.\n+ show_generic_submit_button: bool = True\n \n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'minAllowableSelectionCount',\ndiff --git a/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py b/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py\n--- a/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py\n+++ b/extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py\n@@ -42,7 +42,8 @@\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n- show_generic_submit_button: bool = False\n+ # MultipleChoiceInput interaction must contain a generic submit button.\n+ show_generic_submit_button: bool = True\n \n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'choices',\n", "issue": "[BUG]: Quiz does not automatically move to next question on input\n### Describe the bug\n\nI was taking a lesson on decimal place values and when I input a correct answer, I get a message that \r\n\r\n_\"Note: When you select an option, the quiz will automatically move to the next question.\"_\r\n\r\nBut it doesn't move. It still asks me to click on 'continue'\r\n\r\n\r\n\n\n### Steps To Reproduce\n\n1. Go Math Lessons\r\n2. Click on Decimals (This occurs in other lessons too)\r\n3. Take the introductory quizzes\r\n4. Take quiz on Decimals\r\n5. Answer it correctly\r\n6. See error\n\n### Expected Behavior\n\nI expect to be taken to next page when I get my answer correctly, as displayed by the message.\n\n### Screenshots/Videos\n\n\r\n\n\n### What device are you using?\n\nDesktop\n\n### Operating System\n\nWindows\n\n### What browsers are you seeing the problem on?\n\nChrome\n\n### Browser version\n\nVersion 111.0.5563.65 (Official Build) (64-bit)\n\n### Additional context\n\n_No response_\n[BUG]: Quiz does not automatically move to next question on input\n### Describe the bug\n\nI was taking a lesson on decimal place values and when I input a correct answer, I get a message that \r\n\r\n_\"Note: When you select an option, the quiz will automatically move to the next question.\"_\r\n\r\nBut it doesn't move. It still asks me to click on 'continue'\r\n\r\n\r\n\n\n### Steps To Reproduce\n\n1. Go Math Lessons\r\n2. Click on Decimals (This occurs in other lessons too)\r\n3. Take the introductory quizzes\r\n4. Take quiz on Decimals\r\n5. Answer it correctly\r\n6. 
See error\n\n### Expected Behavior\n\nI expect to be taken to next page when I get my answer correctly, as displayed by the message.\n\n### Screenshots/Videos\n\n\r\n\n\n### What device are you using?\n\nDesktop\n\n### Operating System\n\nWindows\n\n### What browsers are you seeing the problem on?\n\nChrome\n\n### Browser version\n\nVersion 111.0.5563.65 (Official Build) (64-bit)\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for ItemSelectionInput interaction.\"\"\"\n\nfrom __future__ import annotations\n\nfrom extensions.interactions import base\n\nfrom typing import List\n\nMYPY = False\nif MYPY: # pragma: no cover\n from extensions import domain\n\n\nclass ItemSelectionInput(base.BaseInteraction):\n \"\"\"Interaction for item selection input.\"\"\"\n\n name: str = 'Item Selection'\n description: str = (\n 'Allows learners to select various options.')\n display_mode: str = base.DISPLAY_MODE_INLINE\n _dependency_ids: List[str] = []\n answer_type: str = 'SetOfTranslatableHtmlContentIds'\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n # ItemSelectionInput's submit button is dynamic and is handled\n # separately.\n show_generic_submit_button: bool = False\n\n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'minAllowableSelectionCount',\n 'description': 'Minimum number of selections permitted',\n 'schema': {\n 'type': 'int',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': 0,\n }],\n },\n 'default_value': 1,\n }, {\n 'name': 'maxAllowableSelectionCount',\n 'description': 'Maximum number of selections permitted',\n 'schema': {\n 'type': 'int',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': 1,\n }],\n },\n 'default_value': 1,\n }, {\n 'name': 'choices',\n 'description': 'Items for selection',\n 'schema': {\n 'type': 'list',\n 'validators': [{\n 'id': 'has_unique_subtitled_contents'\n }],\n 'items': {\n 'type': 'custom',\n 'obj_type': 'SubtitledHtml',\n 'validators': [{\n 'id': 'has_subtitled_html_non_empty'\n }],\n 'replacement_ui_config': {\n 'html': {\n 'hide_complex_extensions': True,\n 'placeholder': 'Sample item answer',\n }\n }\n },\n 'ui_config': {\n 'add_element_text': 'Add item for selection',\n }\n },\n 'default_value': [{\n 'content_id': None,\n 'html': ''\n }],\n }]\n\n _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{\n # Table with keyed answer counts for top N answers.\n 'id': 'EnumeratedFrequencyTable',\n 'options': {\n 'column_headers': ['Answer (click to expand/collapse)', 'Count'],\n 'title': 'Top answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/ItemSelectionInput/ItemSelectionInput.py"}, {"content": "# 
coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for MultipleChoiceInput interaction.\"\"\"\n\nfrom __future__ import annotations\n\nfrom extensions.interactions import base\n\nfrom typing import List, Optional\n\nMYPY = False\nif MYPY: # pragma: no cover\n from extensions import domain\n\n\nclass MultipleChoiceInput(base.BaseInteraction):\n \"\"\"Interaction for multiple choice input.\"\"\"\n\n name: str = 'Multiple Choice'\n description: str = (\n 'Allows learners to select one of a list of multiple-choice options.')\n display_mode: str = base.DISPLAY_MODE_INLINE\n _dependency_ids: List[str] = []\n answer_type: str = 'NonnegativeInt'\n instructions: Optional[str] = None\n narrow_instructions: Optional[str] = None\n needs_summary: bool = False\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n show_generic_submit_button: bool = False\n\n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'choices',\n 'description': 'Multiple Choice options',\n 'schema': {\n 'type': 'list',\n 'validators': [\n {\n 'id': 'has_length_at_least',\n 'min_value': 1,\n },\n {\n 'id': 'has_unique_subtitled_contents'\n }\n ],\n 'items': {\n 'type': 'custom',\n 'obj_type': 'SubtitledHtml',\n 'validators': [{\n 'id': 'has_subtitled_html_non_empty'\n }],\n 'replacement_ui_config': {\n 'html': {\n 'hide_complex_extensions': True,\n 'placeholder': (\n 'Enter an option for the learner to select'),\n }\n }\n },\n 'ui_config': {\n 'add_element_text': 'Add multiple choice option',\n }\n },\n 'default_value': [{\n 'content_id': None,\n 'html': ''\n }],\n }, {\n 'name': 'showChoicesInShuffledOrder',\n 'description': 'Shuffle answer choices',\n 'schema': {\n 'type': 'bool',\n },\n 'default_value': True\n }]\n\n _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{\n 'id': 'SortedTiles',\n 'options': {'header': 'Top answers', 'use_percentages': True},\n 'calculation_id': 'AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for ItemSelectionInput interaction.\"\"\"\n\nfrom __future__ import annotations\n\nfrom extensions.interactions import base\n\nfrom typing import List\n\nMYPY = False\nif MYPY: # pragma: no cover\n from extensions import domain\n\n\nclass ItemSelectionInput(base.BaseInteraction):\n \"\"\"Interaction for item selection input.\"\"\"\n\n name: str = 'Item Selection'\n description: str = (\n 'Allows learners to select various options.')\n display_mode: str = base.DISPLAY_MODE_INLINE\n _dependency_ids: List[str] = []\n answer_type: str = 'SetOfTranslatableHtmlContentIds'\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n # ItemSelectionInput interaction must contain a generic submit button.\n show_generic_submit_button: bool = True\n\n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'minAllowableSelectionCount',\n 'description': 'Minimum number of selections permitted',\n 'schema': {\n 'type': 'int',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': 0,\n }],\n },\n 'default_value': 1,\n }, {\n 'name': 'maxAllowableSelectionCount',\n 'description': 'Maximum number of selections permitted',\n 'schema': {\n 'type': 'int',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': 1,\n }],\n },\n 'default_value': 1,\n }, {\n 'name': 'choices',\n 'description': 'Items for selection',\n 'schema': {\n 'type': 'list',\n 'validators': [{\n 'id': 'has_unique_subtitled_contents'\n }],\n 'items': {\n 'type': 'custom',\n 'obj_type': 'SubtitledHtml',\n 'validators': [{\n 'id': 'has_subtitled_html_non_empty'\n }],\n 'replacement_ui_config': {\n 'html': {\n 'hide_complex_extensions': True,\n 'placeholder': 'Sample item answer',\n }\n }\n },\n 'ui_config': {\n 'add_element_text': 'Add item for selection',\n }\n },\n 'default_value': [{\n 'content_id': None,\n 'html': ''\n }],\n }]\n\n _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{\n # Table with keyed answer counts for top N answers.\n 'id': 'EnumeratedFrequencyTable',\n 'options': {\n 'column_headers': ['Answer (click to expand/collapse)', 'Count'],\n 'title': 'Top answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/ItemSelectionInput/ItemSelectionInput.py"}, {"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for MultipleChoiceInput interaction.\"\"\"\n\nfrom __future__ import annotations\n\nfrom extensions.interactions import base\n\nfrom typing import List, Optional\n\nMYPY = False\nif MYPY: # pragma: no cover\n from extensions import domain\n\n\nclass MultipleChoiceInput(base.BaseInteraction):\n \"\"\"Interaction for multiple choice input.\"\"\"\n\n name: str = 'Multiple Choice'\n description: str = (\n 'Allows learners to select one of a list of multiple-choice options.')\n display_mode: str = base.DISPLAY_MODE_INLINE\n _dependency_ids: List[str] = []\n answer_type: str = 'NonnegativeInt'\n instructions: Optional[str] = None\n narrow_instructions: Optional[str] = None\n needs_summary: bool = False\n # Radio buttons get unselected when specifying a solution. This needs to be\n # fixed before solution feature can support this interaction.\n can_have_solution: bool = False\n # MultipleChoiceInput interaction must contain a generic submit button.\n show_generic_submit_button: bool = True\n\n _customization_arg_specs: List[domain.CustomizationArgSpecsDict] = [{\n 'name': 'choices',\n 'description': 'Multiple Choice options',\n 'schema': {\n 'type': 'list',\n 'validators': [\n {\n 'id': 'has_length_at_least',\n 'min_value': 1,\n },\n {\n 'id': 'has_unique_subtitled_contents'\n }\n ],\n 'items': {\n 'type': 'custom',\n 'obj_type': 'SubtitledHtml',\n 'validators': [{\n 'id': 'has_subtitled_html_non_empty'\n }],\n 'replacement_ui_config': {\n 'html': {\n 'hide_complex_extensions': True,\n 'placeholder': (\n 'Enter an option for the learner to select'),\n }\n }\n },\n 'ui_config': {\n 'add_element_text': 'Add multiple choice option',\n }\n },\n 'default_value': [{\n 'content_id': None,\n 'html': ''\n }],\n }, {\n 'name': 'showChoicesInShuffledOrder',\n 'description': 'Shuffle answer choices',\n 'schema': {\n 'type': 'bool',\n },\n 'default_value': True\n }]\n\n _answer_visualization_specs: List[base.AnswerVisualizationSpecsDict] = [{\n 'id': 'SortedTiles',\n 'options': {'header': 'Top answers', 'use_percentages': True},\n 'calculation_id': 'AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/MultipleChoiceInput/MultipleChoiceInput.py"}]} | 2,890 | 374 |
gh_patches_debug_8057 | rasdani/github-patches | git_diff | dmlc__dgl-1478 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Loading dataset PROTEINS leads to IndexError
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
When loading the PROTEINS dataset using the function `LegacyTUDataset()`, it raises `IndexError: too many indices for array`
## To Reproduce
Steps to reproduce the behavior:
```python
from dgl.data import LegacyTUDataset
dataset = LegacyTUDataset(name='PROTEINS')
```
The error message as following:
```
~/opt/anaconda3/lib/python3.7/site-packages/dgl/data/tu.py in __init__(self, name, use_pandas, hidden_size, max_allow_node)
78 self._file_path("node_attributes"), delimiter=",")
79 for idxs, g in zip(node_idx_list, self.graph_lists):
---> 80 g.ndata['feat'] = DS_node_attr[idxs, :]
81 self.data_mode = "node_attr"
82 except IOError:
IndexError: too many indices for array
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Successfully loading PROTEINS dataset
## Environment
- DGL Version (e.g., 1.0): 0.4.2
- Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3): PyTorch
- OS (e.g., Linux): MacOS X
- How you installed DGL (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version (if applicable):
- GPU models and configuration (e.g. V100):
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/data/tu.py`
Content:
```
1 from __future__ import absolute_import
2 import numpy as np
3 import os
4 import random
5
6 from .utils import download, extract_archive, get_download_dir, loadtxt
7 from ..graph import DGLGraph
8
9 class LegacyTUDataset(object):
10 """
11 TUDataset contains lots of graph kernel datasets for graph classification.
12 Use provided node feature by default. If no feature provided, use one-hot node label instead.
13 If neither labels provided, use constant for node feature.
14
15 :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`
16 :param use_pandas: Default: False.
17 Numpy's file read function has performance issue when file is large,
18 using pandas can be faster.
19 :param hidden_size: Default 10. Some dataset doesn't contain features.
20 Use constant node features initialization instead, with hidden size as `hidden_size`.
21
22 """
23
24 _url = r"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip"
25
26 def __init__(self, name, use_pandas=False,
27 hidden_size=10, max_allow_node=None):
28
29 self.name = name
30 self.hidden_size = hidden_size
31 self.extract_dir = self._download()
32 self.data_mode = None
33 self.max_allow_node = max_allow_node
34
35 if use_pandas:
36 import pandas as pd
37 DS_edge_list = self._idx_from_zero(
38 pd.read_csv(self._file_path("A"), delimiter=",", dtype=int, header=None).values)
39 else:
40 DS_edge_list = self._idx_from_zero(
41 np.genfromtxt(self._file_path("A"), delimiter=",", dtype=int))
42
43 DS_indicator = self._idx_from_zero(
44 np.genfromtxt(self._file_path("graph_indicator"), dtype=int))
45 DS_graph_labels = self._idx_from_zero(
46 np.genfromtxt(self._file_path("graph_labels"), dtype=int))
47
48 g = DGLGraph()
49 g.add_nodes(int(DS_edge_list.max()) + 1)
50 g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])
51
52 node_idx_list = []
53 self.max_num_node = 0
54 for idx in range(np.max(DS_indicator) + 1):
55 node_idx = np.where(DS_indicator == idx)
56 node_idx_list.append(node_idx[0])
57 if len(node_idx[0]) > self.max_num_node:
58 self.max_num_node = len(node_idx[0])
59
60 self.graph_lists = g.subgraphs(node_idx_list)
61 self.num_labels = max(DS_graph_labels) + 1
62 self.graph_labels = DS_graph_labels
63
64 try:
65 DS_node_labels = self._idx_from_zero(
66 np.loadtxt(self._file_path("node_labels"), dtype=int))
67 g.ndata['node_label'] = DS_node_labels
68 one_hot_node_labels = self._to_onehot(DS_node_labels)
69 for idxs, g in zip(node_idx_list, self.graph_lists):
70 g.ndata['feat'] = one_hot_node_labels[idxs, :]
71 self.data_mode = "node_label"
72 except IOError:
73 print("No Node Label Data")
74
75 try:
76 DS_node_attr = np.loadtxt(
77 self._file_path("node_attributes"), delimiter=",")
78 for idxs, g in zip(node_idx_list, self.graph_lists):
79 g.ndata['feat'] = DS_node_attr[idxs, :]
80 self.data_mode = "node_attr"
81 except IOError:
82 print("No Node Attribute Data")
83
84 if 'feat' not in g.ndata.keys():
85 for idxs, g in zip(node_idx_list, self.graph_lists):
86 g.ndata['feat'] = np.ones((g.number_of_nodes(), hidden_size))
87 self.data_mode = "constant"
88 print(
89 "Use Constant one as Feature with hidden size {}".format(hidden_size))
90
91 # remove graphs that are too large by user given standard
92 # optional pre-processing steop in conformity with Rex Ying's original
93 # DiffPool implementation
94 if self.max_allow_node:
95 preserve_idx = []
96 print("original dataset length : ", len(self.graph_lists))
97 for (i, g) in enumerate(self.graph_lists):
98 if g.number_of_nodes() <= self.max_allow_node:
99 preserve_idx.append(i)
100 self.graph_lists = [self.graph_lists[i] for i in preserve_idx]
101 print(
102 "after pruning graphs that are too big : ", len(
103 self.graph_lists))
104 self.graph_labels = [self.graph_labels[i] for i in preserve_idx]
105 self.max_num_node = self.max_allow_node
106
107 def __getitem__(self, idx):
108 """Get the i^th sample.
109 Paramters
110 ---------
111 idx : int
112 The sample index.
113 Returns
114 -------
115 (dgl.DGLGraph, int)
116 DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.
117 And its label.
118 """
119 g = self.graph_lists[idx]
120 return g, self.graph_labels[idx]
121
122 def __len__(self):
123 return len(self.graph_lists)
124
125 def _download(self):
126 download_dir = get_download_dir()
127 zip_file_path = os.path.join(
128 download_dir,
129 "tu_{}.zip".format(
130 self.name))
131 download(self._url.format(self.name), path=zip_file_path)
132 extract_dir = os.path.join(download_dir, "tu_{}".format(self.name))
133 extract_archive(zip_file_path, extract_dir)
134 return extract_dir
135
136 def _file_path(self, category):
137 return os.path.join(self.extract_dir, self.name,
138 "{}_{}.txt".format(self.name, category))
139
140 @staticmethod
141 def _idx_from_zero(idx_tensor):
142 return idx_tensor - np.min(idx_tensor)
143
144 @staticmethod
145 def _to_onehot(label_tensor):
146 label_num = label_tensor.shape[0]
147 assert np.min(label_tensor) == 0
148 one_hot_tensor = np.zeros((label_num, np.max(label_tensor) + 1))
149 one_hot_tensor[np.arange(label_num), label_tensor] = 1
150 return one_hot_tensor
151
152 def statistics(self):
153 return self.graph_lists[0].ndata['feat'].shape[1],\
154 self.num_labels,\
155 self.max_num_node
156
157
158 class TUDataset(object):
159 """
160 TUDataset contains lots of graph kernel datasets for graph classification.
161 Graphs may have node labels, node attributes, edge labels, and edge attributes,
162 varing from different dataset.
163
164 :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`, `MUTAG`, can be the
165 datasets name on https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets.
166 """
167
168 _url = r"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip"
169
170 def __init__(self, name):
171
172 self.name = name
173 self.extract_dir = self._download()
174
175 DS_edge_list = self._idx_from_zero(
176 loadtxt(self._file_path("A"), delimiter=",").astype(int))
177 DS_indicator = self._idx_from_zero(
178 loadtxt(self._file_path("graph_indicator"), delimiter=",").astype(int))
179 DS_graph_labels = self._idx_from_zero(
180 loadtxt(self._file_path("graph_labels"), delimiter=",").astype(int))
181
182 g = DGLGraph()
183 g.add_nodes(int(DS_edge_list.max()) + 1)
184 g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])
185
186 node_idx_list = []
187 self.max_num_node = 0
188 for idx in range(np.max(DS_indicator) + 1):
189 node_idx = np.where(DS_indicator == idx)
190 node_idx_list.append(node_idx[0])
191 if len(node_idx[0]) > self.max_num_node:
192 self.max_num_node = len(node_idx[0])
193
194 self.num_labels = max(DS_graph_labels) + 1
195 self.graph_labels = DS_graph_labels
196
197 self.attr_dict = {
198 'node_labels': ('ndata', 'node_labels'),
199 'node_attributes': ('ndata', 'node_attr'),
200 'edge_labels': ('edata', 'edge_labels'),
201 'edge_attributes': ('edata', 'node_labels'),
202 }
203
204 for filename, field_name in self.attr_dict.items():
205 try:
206 data = loadtxt(self._file_path(filename),
207 delimiter=',').astype(int)
208 if 'label' in filename:
209 data = self._idx_from_zero(data)
210 getattr(g, field_name[0])[field_name[1]] = data
211 except IOError:
212 pass
213
214 self.graph_lists = g.subgraphs(node_idx_list)
215 for g in self.graph_lists:
216 g.copy_from_parent()
217
218 def __getitem__(self, idx):
219 """Get the i^th sample.
220 Paramters
221 ---------
222 idx : int
223 The sample index.
224 Returns
225 -------
226 (dgl.DGLGraph, int)
227 DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.
228 And its label.
229 """
230 g = self.graph_lists[idx]
231 return g, self.graph_labels[idx]
232
233 def __len__(self):
234 return len(self.graph_lists)
235
236 def _download(self):
237 download_dir = get_download_dir()
238 zip_file_path = os.path.join(
239 download_dir,
240 "tu_{}.zip".format(
241 self.name))
242 download(self._url.format(self.name), path=zip_file_path)
243 extract_dir = os.path.join(download_dir, "tu_{}".format(self.name))
244 extract_archive(zip_file_path, extract_dir)
245 return extract_dir
246
247 def _file_path(self, category):
248 return os.path.join(self.extract_dir, self.name,
249 "{}_{}.txt".format(self.name, category))
250
251 @staticmethod
252 def _idx_from_zero(idx_tensor):
253 return idx_tensor - np.min(idx_tensor)
254
255 def statistics(self):
256 return self.graph_lists[0].ndata['feat'].shape[1], \
257 self.num_labels, \
258 self.max_num_node
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/data/tu.py b/python/dgl/data/tu.py
--- a/python/dgl/data/tu.py
+++ b/python/dgl/data/tu.py
@@ -75,6 +75,8 @@
try:
DS_node_attr = np.loadtxt(
self._file_path("node_attributes"), delimiter=",")
+ if DS_node_attr.ndim == 1:
+ DS_node_attr = np.expand_dims(DS_node_attr, -1)
for idxs, g in zip(node_idx_list, self.graph_lists):
g.ndata['feat'] = DS_node_attr[idxs, :]
self.data_mode = "node_attr"
| {"golden_diff": "diff --git a/python/dgl/data/tu.py b/python/dgl/data/tu.py\n--- a/python/dgl/data/tu.py\n+++ b/python/dgl/data/tu.py\n@@ -75,6 +75,8 @@\n try:\n DS_node_attr = np.loadtxt(\n self._file_path(\"node_attributes\"), delimiter=\",\")\n+ if DS_node_attr.ndim == 1:\n+ DS_node_attr = np.expand_dims(DS_node_attr, -1)\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = DS_node_attr[idxs, :]\n self.data_mode = \"node_attr\"\n", "issue": "Loading dataset PROTEINS leads to IndexError\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen loading dataset PROTEINS using function `LegacyTUDataset()`, raise `IndexError: too many indices for array`\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```python\r\nfrom dgl.data import LegacyTUDataset\r\n\r\ndataset = LegacyTUDataset(name='PROTEINS')\r\n ```\r\nThe error message as following:\r\n```\r\n~/opt/anaconda3/lib/python3.7/site-packages/dgl/data/tu.py in __init__(self, name, use_pandas, hidden_size, max_allow_node)\r\n 78 self._file_path(\"node_attributes\"), delimiter=\",\")\r\n 79 for idxs, g in zip(node_idx_list, self.graph_lists):\r\n---> 80 g.ndata['feat'] = DS_node_attr[idxs, :]\r\n 81 self.data_mode = \"node_attr\"\r\n 82 except IOError:\r\n\r\nIndexError: too many indices for array\r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nSuccessfully loading PROTEINS dataset\r\n## Environment\r\n\r\n - DGL Version (e.g., 1.0): 0.4.2\r\n - Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3): PyTorch\r\n - OS (e.g., Linux): MacOS X\r\n - How you installed DGL (`conda`, `pip`, source): pip\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7\r\n - CUDA/cuDNN version (if applicable):\r\n - GPU models and configuration (e.g. V100):\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport numpy as np\nimport os\nimport random\n\nfrom .utils import download, extract_archive, get_download_dir, loadtxt\nfrom ..graph import DGLGraph\n\nclass LegacyTUDataset(object):\n \"\"\"\n TUDataset contains lots of graph kernel datasets for graph classification.\n Use provided node feature by default. If no feature provided, use one-hot node label instead.\n If neither labels provided, use constant for node feature.\n\n :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`\n :param use_pandas: Default: False.\n Numpy's file read function has performance issue when file is large,\n using pandas can be faster.\n :param hidden_size: Default 10. 
Some dataset doesn't contain features.\n Use constant node features initialization instead, with hidden size as `hidden_size`.\n\n \"\"\"\n\n _url = r\"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip\"\n\n def __init__(self, name, use_pandas=False,\n hidden_size=10, max_allow_node=None):\n\n self.name = name\n self.hidden_size = hidden_size\n self.extract_dir = self._download()\n self.data_mode = None\n self.max_allow_node = max_allow_node\n\n if use_pandas:\n import pandas as pd\n DS_edge_list = self._idx_from_zero(\n pd.read_csv(self._file_path(\"A\"), delimiter=\",\", dtype=int, header=None).values)\n else:\n DS_edge_list = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"A\"), delimiter=\",\", dtype=int))\n\n DS_indicator = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"graph_indicator\"), dtype=int))\n DS_graph_labels = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"graph_labels\"), dtype=int))\n\n g = DGLGraph()\n g.add_nodes(int(DS_edge_list.max()) + 1)\n g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])\n\n node_idx_list = []\n self.max_num_node = 0\n for idx in range(np.max(DS_indicator) + 1):\n node_idx = np.where(DS_indicator == idx)\n node_idx_list.append(node_idx[0])\n if len(node_idx[0]) > self.max_num_node:\n self.max_num_node = len(node_idx[0])\n\n self.graph_lists = g.subgraphs(node_idx_list)\n self.num_labels = max(DS_graph_labels) + 1\n self.graph_labels = DS_graph_labels\n\n try:\n DS_node_labels = self._idx_from_zero(\n np.loadtxt(self._file_path(\"node_labels\"), dtype=int))\n g.ndata['node_label'] = DS_node_labels\n one_hot_node_labels = self._to_onehot(DS_node_labels)\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = one_hot_node_labels[idxs, :]\n self.data_mode = \"node_label\"\n except IOError:\n print(\"No Node Label Data\")\n\n try:\n DS_node_attr = np.loadtxt(\n self._file_path(\"node_attributes\"), delimiter=\",\")\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = DS_node_attr[idxs, :]\n self.data_mode = \"node_attr\"\n except IOError:\n print(\"No Node Attribute Data\")\n\n if 'feat' not in g.ndata.keys():\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = np.ones((g.number_of_nodes(), hidden_size))\n self.data_mode = \"constant\"\n print(\n \"Use Constant one as Feature with hidden size {}\".format(hidden_size))\n\n # remove graphs that are too large by user given standard\n # optional pre-processing steop in conformity with Rex Ying's original\n # DiffPool implementation\n if self.max_allow_node:\n preserve_idx = []\n print(\"original dataset length : \", len(self.graph_lists))\n for (i, g) in enumerate(self.graph_lists):\n if g.number_of_nodes() <= self.max_allow_node:\n preserve_idx.append(i)\n self.graph_lists = [self.graph_lists[i] for i in preserve_idx]\n print(\n \"after pruning graphs that are too big : \", len(\n self.graph_lists))\n self.graph_labels = [self.graph_labels[i] for i in preserve_idx]\n self.max_num_node = self.max_allow_node\n\n def __getitem__(self, idx):\n \"\"\"Get the i^th sample.\n Paramters\n ---------\n idx : int\n The sample index.\n Returns\n -------\n (dgl.DGLGraph, int)\n DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.\n And its label.\n \"\"\"\n g = self.graph_lists[idx]\n return g, self.graph_labels[idx]\n\n def __len__(self):\n return len(self.graph_lists)\n\n def _download(self):\n download_dir = get_download_dir()\n zip_file_path = os.path.join(\n 
download_dir,\n \"tu_{}.zip\".format(\n self.name))\n download(self._url.format(self.name), path=zip_file_path)\n extract_dir = os.path.join(download_dir, \"tu_{}\".format(self.name))\n extract_archive(zip_file_path, extract_dir)\n return extract_dir\n\n def _file_path(self, category):\n return os.path.join(self.extract_dir, self.name,\n \"{}_{}.txt\".format(self.name, category))\n\n @staticmethod\n def _idx_from_zero(idx_tensor):\n return idx_tensor - np.min(idx_tensor)\n\n @staticmethod\n def _to_onehot(label_tensor):\n label_num = label_tensor.shape[0]\n assert np.min(label_tensor) == 0\n one_hot_tensor = np.zeros((label_num, np.max(label_tensor) + 1))\n one_hot_tensor[np.arange(label_num), label_tensor] = 1\n return one_hot_tensor\n\n def statistics(self):\n return self.graph_lists[0].ndata['feat'].shape[1],\\\n self.num_labels,\\\n self.max_num_node\n\n\nclass TUDataset(object):\n \"\"\"\n TUDataset contains lots of graph kernel datasets for graph classification.\n Graphs may have node labels, node attributes, edge labels, and edge attributes,\n varing from different dataset.\n\n :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`, `MUTAG`, can be the \n datasets name on https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets.\n \"\"\"\n\n _url = r\"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip\"\n\n def __init__(self, name):\n\n self.name = name\n self.extract_dir = self._download()\n\n DS_edge_list = self._idx_from_zero(\n loadtxt(self._file_path(\"A\"), delimiter=\",\").astype(int))\n DS_indicator = self._idx_from_zero(\n loadtxt(self._file_path(\"graph_indicator\"), delimiter=\",\").astype(int))\n DS_graph_labels = self._idx_from_zero(\n loadtxt(self._file_path(\"graph_labels\"), delimiter=\",\").astype(int))\n\n g = DGLGraph()\n g.add_nodes(int(DS_edge_list.max()) + 1)\n g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])\n\n node_idx_list = []\n self.max_num_node = 0\n for idx in range(np.max(DS_indicator) + 1):\n node_idx = np.where(DS_indicator == idx)\n node_idx_list.append(node_idx[0])\n if len(node_idx[0]) > self.max_num_node:\n self.max_num_node = len(node_idx[0])\n\n self.num_labels = max(DS_graph_labels) + 1\n self.graph_labels = DS_graph_labels\n\n self.attr_dict = {\n 'node_labels': ('ndata', 'node_labels'),\n 'node_attributes': ('ndata', 'node_attr'),\n 'edge_labels': ('edata', 'edge_labels'),\n 'edge_attributes': ('edata', 'node_labels'),\n }\n\n for filename, field_name in self.attr_dict.items():\n try:\n data = loadtxt(self._file_path(filename),\n delimiter=',').astype(int)\n if 'label' in filename:\n data = self._idx_from_zero(data)\n getattr(g, field_name[0])[field_name[1]] = data\n except IOError:\n pass\n\n self.graph_lists = g.subgraphs(node_idx_list)\n for g in self.graph_lists:\n g.copy_from_parent()\n\n def __getitem__(self, idx):\n \"\"\"Get the i^th sample.\n Paramters\n ---------\n idx : int\n The sample index.\n Returns\n -------\n (dgl.DGLGraph, int)\n DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.\n And its label.\n \"\"\"\n g = self.graph_lists[idx]\n return g, self.graph_labels[idx]\n\n def __len__(self):\n return len(self.graph_lists)\n\n def _download(self):\n download_dir = get_download_dir()\n zip_file_path = os.path.join(\n download_dir,\n \"tu_{}.zip\".format(\n self.name))\n download(self._url.format(self.name), path=zip_file_path)\n extract_dir = os.path.join(download_dir, \"tu_{}\".format(self.name))\n extract_archive(zip_file_path, 
extract_dir)\n return extract_dir\n\n def _file_path(self, category):\n return os.path.join(self.extract_dir, self.name,\n \"{}_{}.txt\".format(self.name, category))\n\n @staticmethod\n def _idx_from_zero(idx_tensor):\n return idx_tensor - np.min(idx_tensor)\n\n def statistics(self):\n return self.graph_lists[0].ndata['feat'].shape[1], \\\n self.num_labels, \\\n self.max_num_node\n", "path": "python/dgl/data/tu.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport numpy as np\nimport os\nimport random\n\nfrom .utils import download, extract_archive, get_download_dir, loadtxt\nfrom ..graph import DGLGraph\n\nclass LegacyTUDataset(object):\n \"\"\"\n TUDataset contains lots of graph kernel datasets for graph classification.\n Use provided node feature by default. If no feature provided, use one-hot node label instead.\n If neither labels provided, use constant for node feature.\n\n :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`\n :param use_pandas: Default: False.\n Numpy's file read function has performance issue when file is large,\n using pandas can be faster.\n :param hidden_size: Default 10. Some dataset doesn't contain features.\n Use constant node features initialization instead, with hidden size as `hidden_size`.\n\n \"\"\"\n\n _url = r\"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip\"\n\n def __init__(self, name, use_pandas=False,\n hidden_size=10, max_allow_node=None):\n\n self.name = name\n self.hidden_size = hidden_size\n self.extract_dir = self._download()\n self.data_mode = None\n self.max_allow_node = max_allow_node\n\n if use_pandas:\n import pandas as pd\n DS_edge_list = self._idx_from_zero(\n pd.read_csv(self._file_path(\"A\"), delimiter=\",\", dtype=int, header=None).values)\n else:\n DS_edge_list = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"A\"), delimiter=\",\", dtype=int))\n\n DS_indicator = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"graph_indicator\"), dtype=int))\n DS_graph_labels = self._idx_from_zero(\n np.genfromtxt(self._file_path(\"graph_labels\"), dtype=int))\n\n g = DGLGraph()\n g.add_nodes(int(DS_edge_list.max()) + 1)\n g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])\n\n node_idx_list = []\n self.max_num_node = 0\n for idx in range(np.max(DS_indicator) + 1):\n node_idx = np.where(DS_indicator == idx)\n node_idx_list.append(node_idx[0])\n if len(node_idx[0]) > self.max_num_node:\n self.max_num_node = len(node_idx[0])\n\n self.graph_lists = g.subgraphs(node_idx_list)\n self.num_labels = max(DS_graph_labels) + 1\n self.graph_labels = DS_graph_labels\n\n try:\n DS_node_labels = self._idx_from_zero(\n np.loadtxt(self._file_path(\"node_labels\"), dtype=int))\n g.ndata['node_label'] = DS_node_labels\n one_hot_node_labels = self._to_onehot(DS_node_labels)\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = one_hot_node_labels[idxs, :]\n self.data_mode = \"node_label\"\n except IOError:\n print(\"No Node Label Data\")\n\n try:\n DS_node_attr = np.loadtxt(\n self._file_path(\"node_attributes\"), delimiter=\",\")\n if DS_node_attr.ndim == 1:\n DS_node_attr = np.expand_dims(DS_node_attr, -1)\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = DS_node_attr[idxs, :]\n self.data_mode = \"node_attr\"\n except IOError:\n print(\"No Node Attribute Data\")\n\n if 'feat' not in g.ndata.keys():\n for idxs, g in zip(node_idx_list, self.graph_lists):\n g.ndata['feat'] = np.ones((g.number_of_nodes(), hidden_size))\n self.data_mode = 
\"constant\"\n print(\n \"Use Constant one as Feature with hidden size {}\".format(hidden_size))\n\n # remove graphs that are too large by user given standard\n # optional pre-processing steop in conformity with Rex Ying's original\n # DiffPool implementation\n if self.max_allow_node:\n preserve_idx = []\n print(\"original dataset length : \", len(self.graph_lists))\n for (i, g) in enumerate(self.graph_lists):\n if g.number_of_nodes() <= self.max_allow_node:\n preserve_idx.append(i)\n self.graph_lists = [self.graph_lists[i] for i in preserve_idx]\n print(\n \"after pruning graphs that are too big : \", len(\n self.graph_lists))\n self.graph_labels = [self.graph_labels[i] for i in preserve_idx]\n self.max_num_node = self.max_allow_node\n\n def __getitem__(self, idx):\n \"\"\"Get the i^th sample.\n Paramters\n ---------\n idx : int\n The sample index.\n Returns\n -------\n (dgl.DGLGraph, int)\n DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.\n And its label.\n \"\"\"\n g = self.graph_lists[idx]\n return g, self.graph_labels[idx]\n\n def __len__(self):\n return len(self.graph_lists)\n\n def _download(self):\n download_dir = get_download_dir()\n zip_file_path = os.path.join(\n download_dir,\n \"tu_{}.zip\".format(\n self.name))\n download(self._url.format(self.name), path=zip_file_path)\n extract_dir = os.path.join(download_dir, \"tu_{}\".format(self.name))\n extract_archive(zip_file_path, extract_dir)\n return extract_dir\n\n def _file_path(self, category):\n return os.path.join(self.extract_dir, self.name,\n \"{}_{}.txt\".format(self.name, category))\n\n @staticmethod\n def _idx_from_zero(idx_tensor):\n return idx_tensor - np.min(idx_tensor)\n\n @staticmethod\n def _to_onehot(label_tensor):\n label_num = label_tensor.shape[0]\n assert np.min(label_tensor) == 0\n one_hot_tensor = np.zeros((label_num, np.max(label_tensor) + 1))\n one_hot_tensor[np.arange(label_num), label_tensor] = 1\n return one_hot_tensor\n\n def statistics(self):\n return self.graph_lists[0].ndata['feat'].shape[1],\\\n self.num_labels,\\\n self.max_num_node\n\n\nclass TUDataset(object):\n \"\"\"\n TUDataset contains lots of graph kernel datasets for graph classification.\n Graphs may have node labels, node attributes, edge labels, and edge attributes,\n varing from different dataset.\n\n :param name: Dataset Name, such as `ENZYMES`, `DD`, `COLLAB`, `MUTAG`, can be the \n datasets name on https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets.\n \"\"\"\n\n _url = r\"https://ls11-www.cs.tu-dortmund.de/people/morris/graphkerneldatasets/{}.zip\"\n\n def __init__(self, name):\n\n self.name = name\n self.extract_dir = self._download()\n\n DS_edge_list = self._idx_from_zero(\n loadtxt(self._file_path(\"A\"), delimiter=\",\").astype(int))\n DS_indicator = self._idx_from_zero(\n loadtxt(self._file_path(\"graph_indicator\"), delimiter=\",\").astype(int))\n DS_graph_labels = self._idx_from_zero(\n loadtxt(self._file_path(\"graph_labels\"), delimiter=\",\").astype(int))\n\n g = DGLGraph()\n g.add_nodes(int(DS_edge_list.max()) + 1)\n g.add_edges(DS_edge_list[:, 0], DS_edge_list[:, 1])\n\n node_idx_list = []\n self.max_num_node = 0\n for idx in range(np.max(DS_indicator) + 1):\n node_idx = np.where(DS_indicator == idx)\n node_idx_list.append(node_idx[0])\n if len(node_idx[0]) > self.max_num_node:\n self.max_num_node = len(node_idx[0])\n\n self.num_labels = max(DS_graph_labels) + 1\n self.graph_labels = DS_graph_labels\n\n self.attr_dict = {\n 'node_labels': ('ndata', 
'node_labels'),\n 'node_attributes': ('ndata', 'node_attr'),\n 'edge_labels': ('edata', 'edge_labels'),\n 'edge_attributes': ('edata', 'node_labels'),\n }\n\n for filename, field_name in self.attr_dict.items():\n try:\n data = loadtxt(self._file_path(filename),\n delimiter=',').astype(int)\n if 'label' in filename:\n data = self._idx_from_zero(data)\n getattr(g, field_name[0])[field_name[1]] = data\n except IOError:\n pass\n\n self.graph_lists = g.subgraphs(node_idx_list)\n for g in self.graph_lists:\n g.copy_from_parent()\n\n def __getitem__(self, idx):\n \"\"\"Get the i^th sample.\n Paramters\n ---------\n idx : int\n The sample index.\n Returns\n -------\n (dgl.DGLGraph, int)\n DGLGraph with node feature stored in `feat` field and node label in `node_label` if available.\n And its label.\n \"\"\"\n g = self.graph_lists[idx]\n return g, self.graph_labels[idx]\n\n def __len__(self):\n return len(self.graph_lists)\n\n def _download(self):\n download_dir = get_download_dir()\n zip_file_path = os.path.join(\n download_dir,\n \"tu_{}.zip\".format(\n self.name))\n download(self._url.format(self.name), path=zip_file_path)\n extract_dir = os.path.join(download_dir, \"tu_{}\".format(self.name))\n extract_archive(zip_file_path, extract_dir)\n return extract_dir\n\n def _file_path(self, category):\n return os.path.join(self.extract_dir, self.name,\n \"{}_{}.txt\".format(self.name, category))\n\n @staticmethod\n def _idx_from_zero(idx_tensor):\n return idx_tensor - np.min(idx_tensor)\n\n def statistics(self):\n return self.graph_lists[0].ndata['feat'].shape[1], \\\n self.num_labels, \\\n self.max_num_node\n", "path": "python/dgl/data/tu.py"}]} | 3,583 | 143 |
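A quick way to see the shape problem the patch above guards against, independent of DGL: `numpy.loadtxt` returns a 1-D array when the attributes file has a single column, so a 2-D slice like `DS_node_attr[idxs, :]` fails until the array is expanded to a column vector. The values below are made-up stand-ins for a one-attribute-per-node `*_node_attributes.txt`; only the shapes matter.

```python
import numpy as np
from io import StringIO

# Stand-in for a single-column DS_node_attributes.txt (as in PROTEINS).
single_column = StringIO("0.1\n0.2\n0.3\n")

attrs = np.loadtxt(single_column, delimiter=",")
print(attrs.shape)        # (3,) -- 1-D, so attrs[idxs, :] raises IndexError

if attrs.ndim == 1:       # the check added by the patch
    attrs = np.expand_dims(attrs, -1)

idxs = np.array([0, 2])
print(attrs.shape)        # (3, 1)
print(attrs[idxs, :])     # [[0.1], [0.3]] -- per-graph node features now slice cleanly
```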
gh_patches_debug_24048 | rasdani/github-patches | git_diff | translate__pootle-5595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Directory hashes are not expired when directories are added/removed
If you add or remove a directory, it should expire hashes on all parents and related directories - i.e. the /projects/x directories.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_revision/receivers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models.signals import post_save
10 from django.dispatch import receiver
11
12 from pootle.core.delegate import revision_updater
13 from pootle_data.models import StoreData
14 from pootle_store.models import Store
15
16
17 @receiver(post_save, sender=StoreData)
18 def handle_storedata_save(**kwargs):
19 revision_updater.get(Store)(
20 context=kwargs["instance"].store).update(keys=["stats", "checks"])
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py
--- a/pootle/apps/pootle_revision/receivers.py
+++ b/pootle/apps/pootle_revision/receivers.py
@@ -6,10 +6,11 @@
# or later license. See the LICENSE file for a copy of the license and the
# AUTHORS file for copyright and authorship information.
-from django.db.models.signals import post_save
+from django.db.models.signals import post_save, pre_delete
from django.dispatch import receiver
from pootle.core.delegate import revision_updater
+from pootle_app.models import Directory
from pootle_data.models import StoreData
from pootle_store.models import Store
@@ -18,3 +19,17 @@
def handle_storedata_save(**kwargs):
revision_updater.get(Store)(
context=kwargs["instance"].store).update(keys=["stats", "checks"])
+
+
+@receiver(post_save, sender=Directory)
+def handle_directory_save(**kwargs):
+ if kwargs.get("created"):
+ return
+ revision_updater.get(Directory)(
+ context=kwargs["instance"]).update(keys=["stats", "checks"])
+
+
+@receiver(pre_delete, sender=Directory)
+def handle_directory_delete(**kwargs):
+ revision_updater.get(Directory)(
+ context=kwargs["instance"].parent).update(keys=["stats", "checks"])
| {"golden_diff": "diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py\n--- a/pootle/apps/pootle_revision/receivers.py\n+++ b/pootle/apps/pootle_revision/receivers.py\n@@ -6,10 +6,11 @@\n # or later license. See the LICENSE file for a copy of the license and the\n # AUTHORS file for copyright and authorship information.\n \n-from django.db.models.signals import post_save\n+from django.db.models.signals import post_save, pre_delete\n from django.dispatch import receiver\n \n from pootle.core.delegate import revision_updater\n+from pootle_app.models import Directory\n from pootle_data.models import StoreData\n from pootle_store.models import Store\n \n@@ -18,3 +19,17 @@\n def handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n+\n+\n+@receiver(post_save, sender=Directory)\n+def handle_directory_save(**kwargs):\n+ if kwargs.get(\"created\"):\n+ return\n+ revision_updater.get(Directory)(\n+ context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n+\n+\n+@receiver(pre_delete, sender=Directory)\n+def handle_directory_delete(**kwargs):\n+ revision_updater.get(Directory)(\n+ context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "issue": "Directory hashes are not expired when directories are added/removed\nif you add or remove a directory it should expire hashes on all parents and related - ie /projects/x directories\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n if kwargs.get(\"created\"):\n return\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}]} | 495 | 319 |
gh_patches_debug_43900 | rasdani/github-patches | git_diff | ansible__awx-8348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
collection module tower_inventory_source_update should have "name" instead of "inventory_source"
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Feature Idea
##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->
Module that works on an entity should reference that entity by `name`. It feels more ansibly anyway.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awx_collection/plugins/modules/tower_inventory_source_update.py`
Content:
```
1 #!/usr/bin/python
2 # coding: utf-8 -*-
3
4 # (c) 2020, Bianca Henderson <[email protected]>
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 from __future__ import absolute_import, division, print_function
8 __metaclass__ = type
9
10
11 ANSIBLE_METADATA = {'metadata_version': '1.1',
12 'status': ['preview'],
13 'supported_by': 'community'}
14
15
16 DOCUMENTATION = '''
17 ---
18 module: tower_inventory_source_update
19 author: "Bianca Henderson (@beeankha)"
20 short_description: Update inventory source(s).
21 description:
22 - Update Ansible Tower inventory source(s). See
23 U(https://www.ansible.com/tower) for an overview.
24 options:
25 inventory:
26 description:
27 - Name of the inventory that contains the inventory source(s) to update.
28 required: True
29 type: str
30 inventory_source:
31 description:
32 - The name of the inventory source to update.
33 required: True
34 type: str
35 organization:
36 description:
37 - Name of the inventory source's inventory's organization.
38 type: str
39 wait:
40 description:
41 - Wait for the job to complete.
42 default: False
43 type: bool
44 interval:
45 description:
46 - The interval to request an update from Tower.
47 required: False
48 default: 1
49 type: float
50 timeout:
51 description:
52 - If waiting for the job to complete this will abort after this
53 amount of seconds
54 type: int
55 extends_documentation_fragment: awx.awx.auth
56 '''
57
58 EXAMPLES = '''
59 - name: Update a single inventory source
60 tower_inventory_source_update:
61 inventory: "My Inventory"
62 inventory_source: "Example Inventory Source"
63 organization: Default
64
65 - name: Update all inventory sources
66 tower_inventory_source_update:
67 inventory: "My Other Inventory"
68 inventory_source: "{{ item }}"
69 loop: "{{ query('awx.awx.tower_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}"
70 '''
71
72 RETURN = '''
73 id:
74 description: id of the inventory update
75 returned: success
76 type: int
77 sample: 86
78 status:
79 description: status of the inventory update
80 returned: success
81 type: str
82 sample: pending
83 '''
84
85 from ..module_utils.tower_api import TowerAPIModule
86
87
88 def main():
89 # Any additional arguments that are not fields of the item can be added here
90 argument_spec = dict(
91 inventory=dict(required=True),
92 inventory_source=dict(required=True),
93 organization=dict(),
94 wait=dict(default=False, type='bool'),
95 interval=dict(default=1.0, type='float'),
96 timeout=dict(default=None, type='int'),
97 )
98
99 # Create a module for ourselves
100 module = TowerAPIModule(argument_spec=argument_spec)
101
102 # Extract our parameters
103 inventory = module.params.get('inventory')
104 inventory_source = module.params.get('inventory_source')
105 organization = module.params.get('organization')
106 wait = module.params.get('wait')
107 interval = module.params.get('interval')
108 timeout = module.params.get('timeout')
109
110 lookup_data = {}
111 if organization:
112 lookup_data['organization'] = module.resolve_name_to_id('organizations', organization)
113 inventory_object = module.get_one('inventories', name_or_id=inventory, data=lookup_data)
114
115 if not inventory_object:
116 module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data))
117
118 inventory_source_object = module.get_one('inventory_sources', name_or_id=inventory_source, **{
119 'data': {
120 'inventory': inventory_object['id'],
121 }
122 })
123
124 if not inventory_source_object:
125 module.fail_json(msg='The specified inventory source was not found.')
126
127 # Sync the inventory source(s)
128 inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'], **{'data': {}})
129
130 if inventory_source_update_results['status_code'] != 202:
131 module.fail_json(msg="Failed to update inventory source, see response for details", **{'response': inventory_source_update_results})
132
133 module.json_output['changed'] = True
134 module.json_output['id'] = inventory_source_update_results['json']['id']
135 module.json_output['status'] = inventory_source_update_results['json']['status']
136
137 if not wait:
138 module.exit_json(**module.json_output)
139
140 # Invoke wait function
141 module.wait_on_url(
142 url=inventory_source_update_results['json']['url'],
143 object_name=inventory_object,
144 object_type='inventory_update',
145 timeout=timeout, interval=interval
146 )
147
148 module.exit_json(**module.json_output)
149
150
151 if __name__ == '__main__':
152 main()
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awx_collection/plugins/modules/tower_inventory_source_update.py b/awx_collection/plugins/modules/tower_inventory_source_update.py
--- a/awx_collection/plugins/modules/tower_inventory_source_update.py
+++ b/awx_collection/plugins/modules/tower_inventory_source_update.py
@@ -22,14 +22,16 @@
- Update Ansible Tower inventory source(s). See
U(https://www.ansible.com/tower) for an overview.
options:
- inventory:
+ name:
description:
- - Name of the inventory that contains the inventory source(s) to update.
+ - The name or id of the inventory source to update.
required: True
type: str
- inventory_source:
+ aliases:
+ - inventory_source
+ inventory:
description:
- - The name of the inventory source to update.
+ - Name or id of the inventory that contains the inventory source(s) to update.
required: True
type: str
organization:
@@ -58,14 +60,14 @@
EXAMPLES = '''
- name: Update a single inventory source
tower_inventory_source_update:
+ name: "Example Inventory Source"
inventory: "My Inventory"
- inventory_source: "Example Inventory Source"
organization: Default
- name: Update all inventory sources
tower_inventory_source_update:
+ name: "{{ item }}"
inventory: "My Other Inventory"
- inventory_source: "{{ item }}"
loop: "{{ query('awx.awx.tower_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}"
'''
@@ -88,8 +90,8 @@
def main():
# Any additional arguments that are not fields of the item can be added here
argument_spec = dict(
+ name=dict(required=True, aliases=['inventory_source']),
inventory=dict(required=True),
- inventory_source=dict(required=True),
organization=dict(),
wait=dict(default=False, type='bool'),
interval=dict(default=1.0, type='float'),
@@ -100,8 +102,8 @@
module = TowerAPIModule(argument_spec=argument_spec)
# Extract our parameters
+ name = module.params.get('name')
inventory = module.params.get('inventory')
- inventory_source = module.params.get('inventory_source')
organization = module.params.get('organization')
wait = module.params.get('wait')
interval = module.params.get('interval')
@@ -115,20 +117,18 @@
if not inventory_object:
module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data))
- inventory_source_object = module.get_one('inventory_sources', name_or_id=inventory_source, **{
- 'data': {
- 'inventory': inventory_object['id'],
- }
- })
+ inventory_source_object = module.get_one('inventory_sources',
+ name_or_id=name,
+ data={'inventory': inventory_object['id']})
if not inventory_source_object:
module.fail_json(msg='The specified inventory source was not found.')
# Sync the inventory source(s)
- inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'], **{'data': {}})
+ inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'])
if inventory_source_update_results['status_code'] != 202:
- module.fail_json(msg="Failed to update inventory source, see response for details", **{'response': inventory_source_update_results})
+ module.fail_json(msg="Failed to update inventory source, see response for details", response=inventory_source_update_results)
module.json_output['changed'] = True
module.json_output['id'] = inventory_source_update_results['json']['id']
| {"golden_diff": "diff --git a/awx_collection/plugins/modules/tower_inventory_source_update.py b/awx_collection/plugins/modules/tower_inventory_source_update.py\n--- a/awx_collection/plugins/modules/tower_inventory_source_update.py\n+++ b/awx_collection/plugins/modules/tower_inventory_source_update.py\n@@ -22,14 +22,16 @@\n - Update Ansible Tower inventory source(s). See\n U(https://www.ansible.com/tower) for an overview.\n options:\n- inventory:\n+ name:\n description:\n- - Name of the inventory that contains the inventory source(s) to update.\n+ - The name or id of the inventory source to update.\n required: True\n type: str\n- inventory_source:\n+ aliases:\n+ - inventory_source\n+ inventory:\n description:\n- - The name of the inventory source to update.\n+ - Name or id of the inventory that contains the inventory source(s) to update.\n required: True\n type: str\n organization:\n@@ -58,14 +60,14 @@\n EXAMPLES = '''\n - name: Update a single inventory source\n tower_inventory_source_update:\n+ name: \"Example Inventory Source\"\n inventory: \"My Inventory\"\n- inventory_source: \"Example Inventory Source\"\n organization: Default\n \n - name: Update all inventory sources\n tower_inventory_source_update:\n+ name: \"{{ item }}\"\n inventory: \"My Other Inventory\"\n- inventory_source: \"{{ item }}\"\n loop: \"{{ query('awx.awx.tower_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}\"\n '''\n \n@@ -88,8 +90,8 @@\n def main():\n # Any additional arguments that are not fields of the item can be added here\n argument_spec = dict(\n+ name=dict(required=True, aliases=['inventory_source']),\n inventory=dict(required=True),\n- inventory_source=dict(required=True),\n organization=dict(),\n wait=dict(default=False, type='bool'),\n interval=dict(default=1.0, type='float'),\n@@ -100,8 +102,8 @@\n module = TowerAPIModule(argument_spec=argument_spec)\n \n # Extract our parameters\n+ name = module.params.get('name')\n inventory = module.params.get('inventory')\n- inventory_source = module.params.get('inventory_source')\n organization = module.params.get('organization')\n wait = module.params.get('wait')\n interval = module.params.get('interval')\n@@ -115,20 +117,18 @@\n if not inventory_object:\n module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data))\n \n- inventory_source_object = module.get_one('inventory_sources', name_or_id=inventory_source, **{\n- 'data': {\n- 'inventory': inventory_object['id'],\n- }\n- })\n+ inventory_source_object = module.get_one('inventory_sources',\n+ name_or_id=name,\n+ data={'inventory': inventory_object['id']})\n \n if not inventory_source_object:\n module.fail_json(msg='The specified inventory source was not found.')\n \n # Sync the inventory source(s)\n- inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'], **{'data': {}})\n+ inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'])\n \n if inventory_source_update_results['status_code'] != 202:\n- module.fail_json(msg=\"Failed to update inventory source, see response for details\", **{'response': inventory_source_update_results})\n+ module.fail_json(msg=\"Failed to update inventory source, see response for details\", response=inventory_source_update_results)\n \n module.json_output['changed'] = True\n module.json_output['id'] = inventory_source_update_results['json']['id']\n", "issue": "collection module tower_inventory_source_update should have \"name\" instead of 
\"inventory_source\"\n<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:\r\n\r\n- http://webchat.freenode.net/?channels=ansible-awx\r\n- https://groups.google.com/forum/#!forum/awx-project\r\n\r\nWe have to limit this because of limited volunteer time to respond to issues! -->\r\n\r\n##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### SUMMARY\r\n<!-- Briefly describe the problem or desired enhancement. -->\r\nModule that works on an entity should reference that entity by `name`. It feels more ansibly anyway.\n", "before_files": [{"content": "#!/usr/bin/python\n# coding: utf-8 -*-\n\n# (c) 2020, Bianca Henderson <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: tower_inventory_source_update\nauthor: \"Bianca Henderson (@beeankha)\"\nshort_description: Update inventory source(s).\ndescription:\n - Update Ansible Tower inventory source(s). See\n U(https://www.ansible.com/tower) for an overview.\noptions:\n inventory:\n description:\n - Name of the inventory that contains the inventory source(s) to update.\n required: True\n type: str\n inventory_source:\n description:\n - The name of the inventory source to update.\n required: True\n type: str\n organization:\n description:\n - Name of the inventory source's inventory's organization.\n type: str\n wait:\n description:\n - Wait for the job to complete.\n default: False\n type: bool\n interval:\n description:\n - The interval to request an update from Tower.\n required: False\n default: 1\n type: float\n timeout:\n description:\n - If waiting for the job to complete this will abort after this\n amount of seconds\n type: int\nextends_documentation_fragment: awx.awx.auth\n'''\n\nEXAMPLES = '''\n- name: Update a single inventory source\n tower_inventory_source_update:\n inventory: \"My Inventory\"\n inventory_source: \"Example Inventory Source\"\n organization: Default\n\n- name: Update all inventory sources\n tower_inventory_source_update:\n inventory: \"My Other Inventory\"\n inventory_source: \"{{ item }}\"\n loop: \"{{ query('awx.awx.tower_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}\"\n'''\n\nRETURN = '''\nid:\n description: id of the inventory update\n returned: success\n type: int\n sample: 86\nstatus:\n description: status of the inventory update\n returned: success\n type: str\n sample: pending\n'''\n\nfrom ..module_utils.tower_api import TowerAPIModule\n\n\ndef main():\n # Any additional arguments that are not fields of the item can be added here\n argument_spec = dict(\n inventory=dict(required=True),\n inventory_source=dict(required=True),\n organization=dict(),\n wait=dict(default=False, type='bool'),\n interval=dict(default=1.0, type='float'),\n timeout=dict(default=None, type='int'),\n )\n\n # Create a module for ourselves\n module = TowerAPIModule(argument_spec=argument_spec)\n\n # Extract our parameters\n inventory = module.params.get('inventory')\n inventory_source = module.params.get('inventory_source')\n organization = module.params.get('organization')\n wait = module.params.get('wait')\n interval = module.params.get('interval')\n timeout = module.params.get('timeout')\n\n lookup_data = {}\n if 
organization:\n lookup_data['organization'] = module.resolve_name_to_id('organizations', organization)\n inventory_object = module.get_one('inventories', name_or_id=inventory, data=lookup_data)\n\n if not inventory_object:\n module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data))\n\n inventory_source_object = module.get_one('inventory_sources', name_or_id=inventory_source, **{\n 'data': {\n 'inventory': inventory_object['id'],\n }\n })\n\n if not inventory_source_object:\n module.fail_json(msg='The specified inventory source was not found.')\n\n # Sync the inventory source(s)\n inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'], **{'data': {}})\n\n if inventory_source_update_results['status_code'] != 202:\n module.fail_json(msg=\"Failed to update inventory source, see response for details\", **{'response': inventory_source_update_results})\n\n module.json_output['changed'] = True\n module.json_output['id'] = inventory_source_update_results['json']['id']\n module.json_output['status'] = inventory_source_update_results['json']['status']\n\n if not wait:\n module.exit_json(**module.json_output)\n\n # Invoke wait function\n module.wait_on_url(\n url=inventory_source_update_results['json']['url'],\n object_name=inventory_object,\n object_type='inventory_update',\n timeout=timeout, interval=interval\n )\n\n module.exit_json(**module.json_output)\n\n\nif __name__ == '__main__':\n main()\n", "path": "awx_collection/plugins/modules/tower_inventory_source_update.py"}], "after_files": [{"content": "#!/usr/bin/python\n# coding: utf-8 -*-\n\n# (c) 2020, Bianca Henderson <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: tower_inventory_source_update\nauthor: \"Bianca Henderson (@beeankha)\"\nshort_description: Update inventory source(s).\ndescription:\n - Update Ansible Tower inventory source(s). 
See\n U(https://www.ansible.com/tower) for an overview.\noptions:\n name:\n description:\n - The name or id of the inventory source to update.\n required: True\n type: str\n aliases:\n - inventory_source\n inventory:\n description:\n - Name or id of the inventory that contains the inventory source(s) to update.\n required: True\n type: str\n organization:\n description:\n - Name of the inventory source's inventory's organization.\n type: str\n wait:\n description:\n - Wait for the job to complete.\n default: False\n type: bool\n interval:\n description:\n - The interval to request an update from Tower.\n required: False\n default: 1\n type: float\n timeout:\n description:\n - If waiting for the job to complete this will abort after this\n amount of seconds\n type: int\nextends_documentation_fragment: awx.awx.auth\n'''\n\nEXAMPLES = '''\n- name: Update a single inventory source\n tower_inventory_source_update:\n name: \"Example Inventory Source\"\n inventory: \"My Inventory\"\n organization: Default\n\n- name: Update all inventory sources\n tower_inventory_source_update:\n name: \"{{ item }}\"\n inventory: \"My Other Inventory\"\n loop: \"{{ query('awx.awx.tower_api', 'inventory_sources', query_params={ 'inventory': 30 }, return_ids=True ) }}\"\n'''\n\nRETURN = '''\nid:\n description: id of the inventory update\n returned: success\n type: int\n sample: 86\nstatus:\n description: status of the inventory update\n returned: success\n type: str\n sample: pending\n'''\n\nfrom ..module_utils.tower_api import TowerAPIModule\n\n\ndef main():\n # Any additional arguments that are not fields of the item can be added here\n argument_spec = dict(\n name=dict(required=True, aliases=['inventory_source']),\n inventory=dict(required=True),\n organization=dict(),\n wait=dict(default=False, type='bool'),\n interval=dict(default=1.0, type='float'),\n timeout=dict(default=None, type='int'),\n )\n\n # Create a module for ourselves\n module = TowerAPIModule(argument_spec=argument_spec)\n\n # Extract our parameters\n name = module.params.get('name')\n inventory = module.params.get('inventory')\n organization = module.params.get('organization')\n wait = module.params.get('wait')\n interval = module.params.get('interval')\n timeout = module.params.get('timeout')\n\n lookup_data = {}\n if organization:\n lookup_data['organization'] = module.resolve_name_to_id('organizations', organization)\n inventory_object = module.get_one('inventories', name_or_id=inventory, data=lookup_data)\n\n if not inventory_object:\n module.fail_json(msg='The specified inventory, {0}, was not found.'.format(lookup_data))\n\n inventory_source_object = module.get_one('inventory_sources',\n name_or_id=name,\n data={'inventory': inventory_object['id']})\n\n if not inventory_source_object:\n module.fail_json(msg='The specified inventory source was not found.')\n\n # Sync the inventory source(s)\n inventory_source_update_results = module.post_endpoint(inventory_source_object['related']['update'])\n\n if inventory_source_update_results['status_code'] != 202:\n module.fail_json(msg=\"Failed to update inventory source, see response for details\", response=inventory_source_update_results)\n\n module.json_output['changed'] = True\n module.json_output['id'] = inventory_source_update_results['json']['id']\n module.json_output['status'] = inventory_source_update_results['json']['status']\n\n if not wait:\n module.exit_json(**module.json_output)\n\n # Invoke wait function\n module.wait_on_url(\n url=inventory_source_update_results['json']['url'],\n 
object_name=inventory_object,\n object_type='inventory_update',\n timeout=timeout, interval=interval\n )\n\n module.exit_json(**module.json_output)\n\n\nif __name__ == '__main__':\n main()\n", "path": "awx_collection/plugins/modules/tower_inventory_source_update.py"}]} | 1,812 | 834 |
gh_patches_debug_6433 | rasdani/github-patches | git_diff | scikit-image__scikit-image-5971 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: imsave cannot detect the right extension of a pathlib.Path
imsave cannot detect the right extension when given a pathlib.Path object
```python
from pathlib import Path
from skimage import io
filename = Path("out.jpg")
io.imsave(filename, im) # this saves im as a png file though the extension is jpg
io.imsave("out.jpg", im) #this is working correctly
io.imsave(str(filename), im) #this is working correctly
```
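A minimal sketch of the type difference involved, assuming only that `pathlib.Path` lacks string methods such as `lower()`; because of that, the string-based filename checks in `imsave` are skipped for `Path` inputs. The diff later in this record converts the path to `str` early, mirroring what `imread` already does.

```python
from pathlib import Path

p = Path("out.jpg")

# Path objects do not provide str methods, so any branch guarded by
# hasattr(fname, 'lower') is skipped when a Path is passed in.
print(hasattr(p, "lower"))        # False
print(hasattr(str(p), "lower"))   # True

# Normalising early, as imread already does and as the patch proposes:
fname = str(p.resolve()) if isinstance(p, Path) else p
print(fname.lower().endswith((".tiff", ".tif")))  # False for out.jpg
```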
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_io.py`
Content:
```
1 import pathlib
2
3 import numpy as np
4
5 from .._shared.utils import warn
6 from ..exposure import is_low_contrast
7 from ..color.colorconv import rgb2gray, rgba2rgb
8 from ..io.manage_plugins import call_plugin
9 from .util import file_or_url_context
10
11
12 __all__ = ['imread', 'imsave', 'imshow', 'show',
13 'imread_collection', 'imshow_collection']
14
15
16 def imread(fname, as_gray=False, plugin=None, **plugin_args):
17 """Load an image from file.
18
19 Parameters
20 ----------
21 fname : string
22 Image file name, e.g. ``test.jpg`` or URL.
23 as_gray : bool, optional
24 If True, convert color images to gray-scale (64-bit floats).
25 Images that are already in gray-scale format are not converted.
26 plugin : str, optional
27 Name of plugin to use. By default, the different plugins are
28 tried (starting with imageio) until a suitable
29 candidate is found. If not given and fname is a tiff file, the
30 tifffile plugin will be used.
31
32 Other Parameters
33 ----------------
34 plugin_args : keywords
35 Passed to the given plugin.
36
37 Returns
38 -------
39 img_array : ndarray
40 The different color bands/channels are stored in the
41 third dimension, such that a gray-image is MxN, an
42 RGB-image MxNx3 and an RGBA-image MxNx4.
43
44 """
45 if isinstance(fname, pathlib.Path):
46 fname = str(fname.resolve())
47
48 if plugin is None and hasattr(fname, 'lower'):
49 if fname.lower().endswith(('.tiff', '.tif')):
50 plugin = 'tifffile'
51
52 with file_or_url_context(fname) as fname:
53 img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
54
55 if not hasattr(img, 'ndim'):
56 return img
57
58 if img.ndim > 2:
59 if img.shape[-1] not in (3, 4) and img.shape[-3] in (3, 4):
60 img = np.swapaxes(img, -1, -3)
61 img = np.swapaxes(img, -2, -3)
62
63 if as_gray:
64 if img.shape[2] == 4:
65 img = rgba2rgb(img)
66 img = rgb2gray(img)
67
68 return img
69
70
71 def imread_collection(load_pattern, conserve_memory=True,
72 plugin=None, **plugin_args):
73 """
74 Load a collection of images.
75
76 Parameters
77 ----------
78 load_pattern : str or list
79 List of objects to load. These are usually filenames, but may
80 vary depending on the currently active plugin. See the docstring
81 for ``ImageCollection`` for the default behaviour of this parameter.
82 conserve_memory : bool, optional
83 If True, never keep more than one in memory at a specific
84 time. Otherwise, images will be cached once they are loaded.
85
86 Returns
87 -------
88 ic : ImageCollection
89 Collection of images.
90
91 Other Parameters
92 ----------------
93 plugin_args : keywords
94 Passed to the given plugin.
95
96 """
97 return call_plugin('imread_collection', load_pattern, conserve_memory,
98 plugin=plugin, **plugin_args)
99
100
101 def imsave(fname, arr, plugin=None, check_contrast=True, **plugin_args):
102 """Save an image to file.
103
104 Parameters
105 ----------
106 fname : str
107 Target filename.
108 arr : ndarray of shape (M,N) or (M,N,3) or (M,N,4)
109 Image data.
110 plugin : str, optional
111 Name of plugin to use. By default, the different plugins are
112 tried (starting with imageio) until a suitable
113 candidate is found. If not given and fname is a tiff file, the
114 tifffile plugin will be used.
115 check_contrast : bool, optional
116 Check for low contrast and print warning (default: True).
117
118 Other Parameters
119 ----------------
120 plugin_args : keywords
121 Passed to the given plugin.
122
123 Notes
124 -----
125 When saving a JPEG, the compression ratio may be controlled using the
126 ``quality`` keyword argument which is an integer with values in [1, 100]
127 where 1 is worst quality and smallest file size, and 100 is best quality
128 and largest file size (default 75). This is only available when using
129 the PIL and imageio plugins.
130 """
131 if plugin is None and hasattr(fname, 'lower'):
132 if fname.lower().endswith(('.tiff', '.tif')):
133 plugin = 'tifffile'
134 if arr.dtype == bool:
135 warn('%s is a boolean image: setting True to 255 and False to 0. '
136 'To silence this warning, please convert the image using '
137 'img_as_ubyte.' % fname, stacklevel=2)
138 arr = arr.astype('uint8') * 255
139 if check_contrast and is_low_contrast(arr):
140 warn('%s is a low contrast image' % fname)
141 return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)
142
143
144 def imshow(arr, plugin=None, **plugin_args):
145 """Display an image.
146
147 Parameters
148 ----------
149 arr : ndarray or str
150 Image data or name of image file.
151 plugin : str
152 Name of plugin to use. By default, the different plugins are
153 tried (starting with imageio) until a suitable
154 candidate is found.
155
156 Other Parameters
157 ----------------
158 plugin_args : keywords
159 Passed to the given plugin.
160
161 """
162 if isinstance(arr, str):
163 arr = call_plugin('imread', arr, plugin=plugin)
164 return call_plugin('imshow', arr, plugin=plugin, **plugin_args)
165
166
167 def imshow_collection(ic, plugin=None, **plugin_args):
168 """Display a collection of images.
169
170 Parameters
171 ----------
172 ic : ImageCollection
173 Collection to display.
174 plugin : str
175 Name of plugin to use. By default, the different plugins are
176 tried until a suitable candidate is found.
177
178 Other Parameters
179 ----------------
180 plugin_args : keywords
181 Passed to the given plugin.
182
183 """
184 return call_plugin('imshow_collection', ic, plugin=plugin, **plugin_args)
185
186
187 def show():
188 '''Display pending images.
189
190 Launch the event loop of the current gui plugin, and display all
191 pending images, queued via `imshow`. This is required when using
192 `imshow` from non-interactive scripts.
193
194 A call to `show` will block execution of code until all windows
195 have been closed.
196
197 Examples
198 --------
199 >>> import skimage.io as io
200
201 >>> rng = np.random.default_rng()
202 >>> for i in range(4):
203 ... ax_im = io.imshow(rng.random((50, 50)))
204 >>> io.show() # doctest: +SKIP
205
206 '''
207 return call_plugin('_app_show')
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/io/_io.py b/skimage/io/_io.py
--- a/skimage/io/_io.py
+++ b/skimage/io/_io.py
@@ -128,6 +128,8 @@
and largest file size (default 75). This is only available when using
the PIL and imageio plugins.
"""
+ if isinstance(fname, pathlib.Path):
+ fname = str(fname.resolve())
if plugin is None and hasattr(fname, 'lower'):
if fname.lower().endswith(('.tiff', '.tif')):
plugin = 'tifffile'
| {"golden_diff": "diff --git a/skimage/io/_io.py b/skimage/io/_io.py\n--- a/skimage/io/_io.py\n+++ b/skimage/io/_io.py\n@@ -128,6 +128,8 @@\n and largest file size (default 75). This is only available when using\n the PIL and imageio plugins.\n \"\"\"\n+ if isinstance(fname, pathlib.Path):\n+ fname = str(fname.resolve())\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n", "issue": "bug: imsave cannot detect right extension of pathlib \nimsave cannot detect right extension of pathlib \r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom skimage import io\r\n\r\nfilename = Path(\"out.jpg\")\r\nio.imsave(filename, im) # this saves im as a png file though the extension is jpg\r\n\r\nio.imsave(\"out.jpg\", im) #this is working correctly\r\nio.imsave(str(filename), im) #this is working correctly\r\n```\n", "before_files": [{"content": "import pathlib\n\nimport numpy as np\n\nfrom .._shared.utils import warn\nfrom ..exposure import is_low_contrast\nfrom ..color.colorconv import rgb2gray, rgba2rgb\nfrom ..io.manage_plugins import call_plugin\nfrom .util import file_or_url_context\n\n\n__all__ = ['imread', 'imsave', 'imshow', 'show',\n 'imread_collection', 'imshow_collection']\n\n\ndef imread(fname, as_gray=False, plugin=None, **plugin_args):\n \"\"\"Load an image from file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. ``test.jpg`` or URL.\n as_gray : bool, optional\n If True, convert color images to gray-scale (64-bit floats).\n Images that are already in gray-scale format are not converted.\n plugin : str, optional\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found. If not given and fname is a tiff file, the\n tifffile plugin will be used.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n Returns\n -------\n img_array : ndarray\n The different color bands/channels are stored in the\n third dimension, such that a gray-image is MxN, an\n RGB-image MxNx3 and an RGBA-image MxNx4.\n\n \"\"\"\n if isinstance(fname, pathlib.Path):\n fname = str(fname.resolve())\n\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n\n with file_or_url_context(fname) as fname:\n img = call_plugin('imread', fname, plugin=plugin, **plugin_args)\n\n if not hasattr(img, 'ndim'):\n return img\n\n if img.ndim > 2:\n if img.shape[-1] not in (3, 4) and img.shape[-3] in (3, 4):\n img = np.swapaxes(img, -1, -3)\n img = np.swapaxes(img, -2, -3)\n\n if as_gray:\n if img.shape[2] == 4:\n img = rgba2rgb(img)\n img = rgb2gray(img)\n\n return img\n\n\ndef imread_collection(load_pattern, conserve_memory=True,\n plugin=None, **plugin_args):\n \"\"\"\n Load a collection of images.\n\n Parameters\n ----------\n load_pattern : str or list\n List of objects to load. These are usually filenames, but may\n vary depending on the currently active plugin. See the docstring\n for ``ImageCollection`` for the default behaviour of this parameter.\n conserve_memory : bool, optional\n If True, never keep more than one in memory at a specific\n time. 
Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n ic : ImageCollection\n Collection of images.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imread_collection', load_pattern, conserve_memory,\n plugin=plugin, **plugin_args)\n\n\ndef imsave(fname, arr, plugin=None, check_contrast=True, **plugin_args):\n \"\"\"Save an image to file.\n\n Parameters\n ----------\n fname : str\n Target filename.\n arr : ndarray of shape (M,N) or (M,N,3) or (M,N,4)\n Image data.\n plugin : str, optional\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found. If not given and fname is a tiff file, the\n tifffile plugin will be used.\n check_contrast : bool, optional\n Check for low contrast and print warning (default: True).\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n Notes\n -----\n When saving a JPEG, the compression ratio may be controlled using the\n ``quality`` keyword argument which is an integer with values in [1, 100]\n where 1 is worst quality and smallest file size, and 100 is best quality\n and largest file size (default 75). This is only available when using\n the PIL and imageio plugins.\n \"\"\"\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n if arr.dtype == bool:\n warn('%s is a boolean image: setting True to 255 and False to 0. '\n 'To silence this warning, please convert the image using '\n 'img_as_ubyte.' % fname, stacklevel=2)\n arr = arr.astype('uint8') * 255\n if check_contrast and is_low_contrast(arr):\n warn('%s is a low contrast image' % fname)\n return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)\n\n\ndef imshow(arr, plugin=None, **plugin_args):\n \"\"\"Display an image.\n\n Parameters\n ----------\n arr : ndarray or str\n Image data or name of image file.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n if isinstance(arr, str):\n arr = call_plugin('imread', arr, plugin=plugin)\n return call_plugin('imshow', arr, plugin=plugin, **plugin_args)\n\n\ndef imshow_collection(ic, plugin=None, **plugin_args):\n \"\"\"Display a collection of images.\n\n Parameters\n ----------\n ic : ImageCollection\n Collection to display.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried until a suitable candidate is found.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imshow_collection', ic, plugin=plugin, **plugin_args)\n\n\ndef show():\n '''Display pending images.\n\n Launch the event loop of the current gui plugin, and display all\n pending images, queued via `imshow`. This is required when using\n `imshow` from non-interactive scripts.\n\n A call to `show` will block execution of code until all windows\n have been closed.\n\n Examples\n --------\n >>> import skimage.io as io\n\n >>> rng = np.random.default_rng()\n >>> for i in range(4):\n ... 
ax_im = io.imshow(rng.random((50, 50)))\n >>> io.show() # doctest: +SKIP\n\n '''\n return call_plugin('_app_show')\n", "path": "skimage/io/_io.py"}], "after_files": [{"content": "import pathlib\n\nimport numpy as np\n\nfrom .._shared.utils import warn\nfrom ..exposure import is_low_contrast\nfrom ..color.colorconv import rgb2gray, rgba2rgb\nfrom ..io.manage_plugins import call_plugin\nfrom .util import file_or_url_context\n\n\n__all__ = ['imread', 'imsave', 'imshow', 'show',\n 'imread_collection', 'imshow_collection']\n\n\ndef imread(fname, as_gray=False, plugin=None, **plugin_args):\n \"\"\"Load an image from file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. ``test.jpg`` or URL.\n as_gray : bool, optional\n If True, convert color images to gray-scale (64-bit floats).\n Images that are already in gray-scale format are not converted.\n plugin : str, optional\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found. If not given and fname is a tiff file, the\n tifffile plugin will be used.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n Returns\n -------\n img_array : ndarray\n The different color bands/channels are stored in the\n third dimension, such that a gray-image is MxN, an\n RGB-image MxNx3 and an RGBA-image MxNx4.\n\n \"\"\"\n if isinstance(fname, pathlib.Path):\n fname = str(fname.resolve())\n\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n\n with file_or_url_context(fname) as fname:\n img = call_plugin('imread', fname, plugin=plugin, **plugin_args)\n\n if not hasattr(img, 'ndim'):\n return img\n\n if img.ndim > 2:\n if img.shape[-1] not in (3, 4) and img.shape[-3] in (3, 4):\n img = np.swapaxes(img, -1, -3)\n img = np.swapaxes(img, -2, -3)\n\n if as_gray:\n if img.shape[2] == 4:\n img = rgba2rgb(img)\n img = rgb2gray(img)\n\n return img\n\n\ndef imread_collection(load_pattern, conserve_memory=True,\n plugin=None, **plugin_args):\n \"\"\"\n Load a collection of images.\n\n Parameters\n ----------\n load_pattern : str or list\n List of objects to load. These are usually filenames, but may\n vary depending on the currently active plugin. See the docstring\n for ``ImageCollection`` for the default behaviour of this parameter.\n conserve_memory : bool, optional\n If True, never keep more than one in memory at a specific\n time. Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n ic : ImageCollection\n Collection of images.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imread_collection', load_pattern, conserve_memory,\n plugin=plugin, **plugin_args)\n\n\ndef imsave(fname, arr, plugin=None, check_contrast=True, **plugin_args):\n \"\"\"Save an image to file.\n\n Parameters\n ----------\n fname : str\n Target filename.\n arr : ndarray of shape (M,N) or (M,N,3) or (M,N,4)\n Image data.\n plugin : str, optional\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found. 
If not given and fname is a tiff file, the\n tifffile plugin will be used.\n check_contrast : bool, optional\n Check for low contrast and print warning (default: True).\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n Notes\n -----\n When saving a JPEG, the compression ratio may be controlled using the\n ``quality`` keyword argument which is an integer with values in [1, 100]\n where 1 is worst quality and smallest file size, and 100 is best quality\n and largest file size (default 75). This is only available when using\n the PIL and imageio plugins.\n \"\"\"\n if isinstance(fname, pathlib.Path):\n fname = str(fname.resolve())\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n if arr.dtype == bool:\n warn('%s is a boolean image: setting True to 255 and False to 0. '\n 'To silence this warning, please convert the image using '\n 'img_as_ubyte.' % fname, stacklevel=2)\n arr = arr.astype('uint8') * 255\n if check_contrast and is_low_contrast(arr):\n warn('%s is a low contrast image' % fname)\n return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)\n\n\ndef imshow(arr, plugin=None, **plugin_args):\n \"\"\"Display an image.\n\n Parameters\n ----------\n arr : ndarray or str\n Image data or name of image file.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried (starting with imageio) until a suitable\n candidate is found.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n if isinstance(arr, str):\n arr = call_plugin('imread', arr, plugin=plugin)\n return call_plugin('imshow', arr, plugin=plugin, **plugin_args)\n\n\ndef imshow_collection(ic, plugin=None, **plugin_args):\n \"\"\"Display a collection of images.\n\n Parameters\n ----------\n ic : ImageCollection\n Collection to display.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried until a suitable candidate is found.\n\n Other Parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imshow_collection', ic, plugin=plugin, **plugin_args)\n\n\ndef show():\n '''Display pending images.\n\n Launch the event loop of the current gui plugin, and display all\n pending images, queued via `imshow`. This is required when using\n `imshow` from non-interactive scripts.\n\n A call to `show` will block execution of code until all windows\n have been closed.\n\n Examples\n --------\n >>> import skimage.io as io\n\n >>> rng = np.random.default_rng()\n >>> for i in range(4):\n ... ax_im = io.imshow(rng.random((50, 50)))\n >>> io.show() # doctest: +SKIP\n\n '''\n return call_plugin('_app_show')\n", "path": "skimage/io/_io.py"}]} | 2,409 | 135 |
gh_patches_debug_30833 | rasdani/github-patches | git_diff | numpy__numpy-13083 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failed to compile scipy with Anaconda gfortran.
The compilation fails with an error and prints a command it is trying to execute.
If I type the command at the end of the error log manually, it works.
I think this points to a quotation error. The additional Fortran flags taken from the environment should be appended to the command-line list only after a shell-style string split; appending the full set of flags as a single string produces exactly this failure.
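A minimal sketch of the splitting behaviour described above, using `shlex.split` as an illustrative stand-in rather than numpy.distutils' actual code; the patch later in this record instead routes the environment value through the existing `convert` hook and extends the flag list.

```python
import shlex

existing_flags = ["-Wall", "-g", "-ffixed-form", "-fno-second-underscore"]
envvar_contents = "-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC"

# Appending the raw string yields one giant argv entry, which gfortran
# then reports as a single unrecognized command line option.
broken = existing_flags + [envvar_contents]
print(broken[-1])
# '-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC'

# Splitting shell-style first gives one argv entry per flag.
fixed = existing_flags + shlex.split(envvar_contents)
print(fixed[-5:])
# ['-fopenmp', '-march=nocona', '-mtune=haswell', '-ftree-vectorize', '-fPIC']
```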
### Reproducing code example:
clone scipy from GitHub;
activate the Anaconda build environment.
python runtests.py
### Error message:
```
f951: Error: unrecognized command line option '-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe'
Running from scipy source directory.
/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/distutils/system_info.py:730: UserWarning: Specified path /usr/local/include/python3.6m is invalid.
return self.get_paths(self.section, key)
/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/distutils/system_info.py:730: UserWarning: Specified path /usr/include/suitesparse/python3.6m is invalid.
return self.get_paths(self.section, key)
error: Command "/home/yfeng1/anaconda3/install/bin/x86_64-conda_cos6-linux-gnu-gfortran -Wall -g -ffixed-form -fno-second-underscore -fPIC -fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -O3 -funroll-loops -I/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/core/include -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o build/temp.linux-x86_64-3.6/scipy/fftpack/src/dfftpack/dcosqb.o" failed with exit status 1
```
### Numpy/Python version information:
1.16.2.
The environment variables regarding fortran are
```
FORTRANFLAGS=-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe
GFORTRAN=/home/yfeng1/anaconda3/install/bin/x86_64-conda_cos6-linux-gnu-gfortran
DEBUG_FORTRANFLAGS=-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-all -fno-plt -Og -g -Wall -Wextra -fcheck=all -fbacktrace -fimplicit-none -fvar-tracking-assignments -ffunction-sections -pipe
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/distutils/fcompiler/environment.py`
Content:
```
1 from __future__ import division, absolute_import, print_function
2
3 import os
4 import warnings
5 from distutils.dist import Distribution
6
7 __metaclass__ = type
8
9 class EnvironmentConfig(object):
10 def __init__(self, distutils_section='ALL', **kw):
11 self._distutils_section = distutils_section
12 self._conf_keys = kw
13 self._conf = None
14 self._hook_handler = None
15
16 def dump_variable(self, name):
17 conf_desc = self._conf_keys[name]
18 hook, envvar, confvar, convert, append = conf_desc
19 if not convert:
20 convert = lambda x : x
21 print('%s.%s:' % (self._distutils_section, name))
22 v = self._hook_handler(name, hook)
23 print(' hook : %s' % (convert(v),))
24 if envvar:
25 v = os.environ.get(envvar, None)
26 print(' environ: %s' % (convert(v),))
27 if confvar and self._conf:
28 v = self._conf.get(confvar, (None, None))[1]
29 print(' config : %s' % (convert(v),))
30
31 def dump_variables(self):
32 for name in self._conf_keys:
33 self.dump_variable(name)
34
35 def __getattr__(self, name):
36 try:
37 conf_desc = self._conf_keys[name]
38 except KeyError:
39 raise AttributeError(name)
40 return self._get_var(name, conf_desc)
41
42 def get(self, name, default=None):
43 try:
44 conf_desc = self._conf_keys[name]
45 except KeyError:
46 return default
47 var = self._get_var(name, conf_desc)
48 if var is None:
49 var = default
50 return var
51
52 def _get_var(self, name, conf_desc):
53 hook, envvar, confvar, convert, append = conf_desc
54 var = self._hook_handler(name, hook)
55 if envvar is not None:
56 envvar_contents = os.environ.get(envvar)
57 if envvar_contents is not None:
58 if var and append:
59 if os.environ.get('NPY_DISTUTILS_APPEND_FLAGS', '0') == '1':
60 var = var + [envvar_contents]
61 else:
62 var = envvar_contents
63 if 'NPY_DISTUTILS_APPEND_FLAGS' not in os.environ.keys():
64 msg = "{} is used as is, not appended ".format(envvar) + \
65 "to flags already defined " + \
66 "by numpy.distutils! Use NPY_DISTUTILS_APPEND_FLAGS=1 " + \
67 "to obtain appending behavior instead (this " + \
68 "behavior will become default in a future release)."
69 warnings.warn(msg, UserWarning, stacklevel=3)
70 else:
71 var = envvar_contents
72 if confvar is not None and self._conf:
73 var = self._conf.get(confvar, (None, var))[1]
74 if convert is not None:
75 var = convert(var)
76 return var
77
78 def clone(self, hook_handler):
79 ec = self.__class__(distutils_section=self._distutils_section,
80 **self._conf_keys)
81 ec._hook_handler = hook_handler
82 return ec
83
84 def use_distribution(self, dist):
85 if isinstance(dist, Distribution):
86 self._conf = dist.get_option_dict(self._distutils_section)
87 else:
88 self._conf = dist
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpy/distutils/fcompiler/environment.py b/numpy/distutils/fcompiler/environment.py
--- a/numpy/distutils/fcompiler/environment.py
+++ b/numpy/distutils/fcompiler/environment.py
@@ -51,13 +51,16 @@
def _get_var(self, name, conf_desc):
hook, envvar, confvar, convert, append = conf_desc
+ if convert is None:
+ convert = lambda x: x
var = self._hook_handler(name, hook)
if envvar is not None:
envvar_contents = os.environ.get(envvar)
if envvar_contents is not None:
+ envvar_contents = convert(envvar_contents)
if var and append:
if os.environ.get('NPY_DISTUTILS_APPEND_FLAGS', '0') == '1':
- var = var + [envvar_contents]
+ var.extend(envvar_contents)
else:
var = envvar_contents
if 'NPY_DISTUTILS_APPEND_FLAGS' not in os.environ.keys():
@@ -70,11 +73,12 @@
else:
var = envvar_contents
if confvar is not None and self._conf:
- var = self._conf.get(confvar, (None, var))[1]
- if convert is not None:
- var = convert(var)
+ if confvar in self._conf:
+ source, confvar_contents = self._conf[confvar]
+ var = convert(confvar_contents)
return var
+
def clone(self, hook_handler):
ec = self.__class__(distutils_section=self._distutils_section,
**self._conf_keys)
| {"golden_diff": "diff --git a/numpy/distutils/fcompiler/environment.py b/numpy/distutils/fcompiler/environment.py\n--- a/numpy/distutils/fcompiler/environment.py\n+++ b/numpy/distutils/fcompiler/environment.py\n@@ -51,13 +51,16 @@\n \n def _get_var(self, name, conf_desc):\n hook, envvar, confvar, convert, append = conf_desc\n+ if convert is None:\n+ convert = lambda x: x\n var = self._hook_handler(name, hook)\n if envvar is not None:\n envvar_contents = os.environ.get(envvar)\n if envvar_contents is not None:\n+ envvar_contents = convert(envvar_contents)\n if var and append:\n if os.environ.get('NPY_DISTUTILS_APPEND_FLAGS', '0') == '1':\n- var = var + [envvar_contents]\n+ var.extend(envvar_contents)\n else:\n var = envvar_contents\n if 'NPY_DISTUTILS_APPEND_FLAGS' not in os.environ.keys():\n@@ -70,11 +73,12 @@\n else:\n var = envvar_contents\n if confvar is not None and self._conf:\n- var = self._conf.get(confvar, (None, var))[1]\n- if convert is not None:\n- var = convert(var)\n+ if confvar in self._conf:\n+ source, confvar_contents = self._conf[confvar]\n+ var = convert(confvar_contents)\n return var\n \n+\n def clone(self, hook_handler):\n ec = self.__class__(distutils_section=self._distutils_section,\n **self._conf_keys)\n", "issue": "Failed to compile scipy with Anaconda gfortran.\n<!-- Please describe the issue in detail here, and fill in the fields below -->\r\n\r\nThe compilation fails with an error and prints a command it is trying to execute. \r\n\r\nIf I type the command at the end of the error log manually, it works.\r\n\r\nI think this points to a quotation error. The additional fortran flags from the environment shall be added to the command line list after shell-style string split. It will produce this exact problem if the full list of fortran flags are added as a single string.\r\n\r\n### Reproducing code example:\r\n\r\nclone scipy from github;\r\n\r\nactivate Anaconda build enviroment.\r\npython runtests.py\r\n\r\n### Error message:\r\n\r\n```\r\nf951: Error: unrecognized command line option '-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe'\r\nRunning from scipy source directory.\r\n/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/distutils/system_info.py:730: UserWarning: Specified path /usr/local/include/python3.6m is invalid.\r\n return self.get_paths(self.section, key)\r\n/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/distutils/system_info.py:730: UserWarning: Specified path /usr/include/suitesparse/python3.6m is invalid.\r\n return self.get_paths(self.section, key)\r\nerror: Command \"/home/yfeng1/anaconda3/install/bin/x86_64-conda_cos6-linux-gnu-gfortran -Wall -g -ffixed-form -fno-second-underscore -fPIC -fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -O3 -funroll-loops -I/home/yfeng1/anaconda3/install/lib/python3.6/site-packages/numpy/core/include -c -c scipy/fftpack/src/dfftpack/dcosqb.f -o build/temp.linux-x86_64-3.6/scipy/fftpack/src/dfftpack/dcosqb.o\" failed with exit status 1\r\n```\r\n\r\n### Numpy/Python version information:\r\n\r\n1.16.2.\r\n\r\nThe environment variables regarding fortran are\r\n```\r\nFORTRANFLAGS=-fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe\r\nGFORTRAN=/home/yfeng1/anaconda3/install/bin/x86_64-conda_cos6-linux-gnu-gfortran\r\nDEBUG_FORTRANFLAGS=-fopenmp -march=nocona 
-mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -fopenmp -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-all -fno-plt -Og -g -Wall -Wextra -fcheck=all -fbacktrace -fimplicit-none -fvar-tracking-assignments -ffunction-sections -pipe\r\n```\n", "before_files": [{"content": "from __future__ import division, absolute_import, print_function\n\nimport os\nimport warnings\nfrom distutils.dist import Distribution\n\n__metaclass__ = type\n\nclass EnvironmentConfig(object):\n def __init__(self, distutils_section='ALL', **kw):\n self._distutils_section = distutils_section\n self._conf_keys = kw\n self._conf = None\n self._hook_handler = None\n\n def dump_variable(self, name):\n conf_desc = self._conf_keys[name]\n hook, envvar, confvar, convert, append = conf_desc\n if not convert:\n convert = lambda x : x\n print('%s.%s:' % (self._distutils_section, name))\n v = self._hook_handler(name, hook)\n print(' hook : %s' % (convert(v),))\n if envvar:\n v = os.environ.get(envvar, None)\n print(' environ: %s' % (convert(v),))\n if confvar and self._conf:\n v = self._conf.get(confvar, (None, None))[1]\n print(' config : %s' % (convert(v),))\n\n def dump_variables(self):\n for name in self._conf_keys:\n self.dump_variable(name)\n\n def __getattr__(self, name):\n try:\n conf_desc = self._conf_keys[name]\n except KeyError:\n raise AttributeError(name)\n return self._get_var(name, conf_desc)\n\n def get(self, name, default=None):\n try:\n conf_desc = self._conf_keys[name]\n except KeyError:\n return default\n var = self._get_var(name, conf_desc)\n if var is None:\n var = default\n return var\n\n def _get_var(self, name, conf_desc):\n hook, envvar, confvar, convert, append = conf_desc\n var = self._hook_handler(name, hook)\n if envvar is not None:\n envvar_contents = os.environ.get(envvar)\n if envvar_contents is not None:\n if var and append:\n if os.environ.get('NPY_DISTUTILS_APPEND_FLAGS', '0') == '1':\n var = var + [envvar_contents]\n else:\n var = envvar_contents\n if 'NPY_DISTUTILS_APPEND_FLAGS' not in os.environ.keys():\n msg = \"{} is used as is, not appended \".format(envvar) + \\\n \"to flags already defined \" + \\\n \"by numpy.distutils! 
Use NPY_DISTUTILS_APPEND_FLAGS=1 \" + \\\n \"to obtain appending behavior instead (this \" + \\\n \"behavior will become default in a future release).\"\n warnings.warn(msg, UserWarning, stacklevel=3)\n else:\n var = envvar_contents\n if confvar is not None and self._conf:\n var = self._conf.get(confvar, (None, var))[1]\n if convert is not None:\n var = convert(var)\n return var\n\n def clone(self, hook_handler):\n ec = self.__class__(distutils_section=self._distutils_section,\n **self._conf_keys)\n ec._hook_handler = hook_handler\n return ec\n\n def use_distribution(self, dist):\n if isinstance(dist, Distribution):\n self._conf = dist.get_option_dict(self._distutils_section)\n else:\n self._conf = dist\n", "path": "numpy/distutils/fcompiler/environment.py"}], "after_files": [{"content": "from __future__ import division, absolute_import, print_function\n\nimport os\nimport warnings\nfrom distutils.dist import Distribution\n\n__metaclass__ = type\n\nclass EnvironmentConfig(object):\n def __init__(self, distutils_section='ALL', **kw):\n self._distutils_section = distutils_section\n self._conf_keys = kw\n self._conf = None\n self._hook_handler = None\n\n def dump_variable(self, name):\n conf_desc = self._conf_keys[name]\n hook, envvar, confvar, convert, append = conf_desc\n if not convert:\n convert = lambda x : x\n print('%s.%s:' % (self._distutils_section, name))\n v = self._hook_handler(name, hook)\n print(' hook : %s' % (convert(v),))\n if envvar:\n v = os.environ.get(envvar, None)\n print(' environ: %s' % (convert(v),))\n if confvar and self._conf:\n v = self._conf.get(confvar, (None, None))[1]\n print(' config : %s' % (convert(v),))\n\n def dump_variables(self):\n for name in self._conf_keys:\n self.dump_variable(name)\n\n def __getattr__(self, name):\n try:\n conf_desc = self._conf_keys[name]\n except KeyError:\n raise AttributeError(name)\n return self._get_var(name, conf_desc)\n\n def get(self, name, default=None):\n try:\n conf_desc = self._conf_keys[name]\n except KeyError:\n return default\n var = self._get_var(name, conf_desc)\n if var is None:\n var = default\n return var\n\n def _get_var(self, name, conf_desc):\n hook, envvar, confvar, convert, append = conf_desc\n if convert is None:\n convert = lambda x: x\n var = self._hook_handler(name, hook)\n if envvar is not None:\n envvar_contents = os.environ.get(envvar)\n if envvar_contents is not None:\n envvar_contents = convert(envvar_contents)\n if var and append:\n if os.environ.get('NPY_DISTUTILS_APPEND_FLAGS', '0') == '1':\n var.extend(envvar_contents)\n else:\n var = envvar_contents\n if 'NPY_DISTUTILS_APPEND_FLAGS' not in os.environ.keys():\n msg = \"{} is used as is, not appended \".format(envvar) + \\\n \"to flags already defined \" + \\\n \"by numpy.distutils! 
Use NPY_DISTUTILS_APPEND_FLAGS=1 \" + \\\n \"to obtain appending behavior instead (this \" + \\\n \"behavior will become default in a future release).\"\n warnings.warn(msg, UserWarning, stacklevel=3)\n else:\n var = envvar_contents\n if confvar is not None and self._conf:\n if confvar in self._conf:\n source, confvar_contents = self._conf[confvar]\n var = convert(confvar_contents)\n return var\n\n\n def clone(self, hook_handler):\n ec = self.__class__(distutils_section=self._distutils_section,\n **self._conf_keys)\n ec._hook_handler = hook_handler\n return ec\n\n def use_distribution(self, dist):\n if isinstance(dist, Distribution):\n self._conf = dist.get_option_dict(self._distutils_section)\n else:\n self._conf = dist\n", "path": "numpy/distutils/fcompiler/environment.py"}]} | 1,899 | 365 |
gh_patches_debug_24492 | rasdani/github-patches | git_diff | scikit-hep__pyhf-186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Updated setup.py for pytorch > 0.4 dependency
# Description
I had Torch 0.3.1, which caused issues with some of the doctests because the distributions did not have `cdf` methods. I forced a PyTorch upgrade and things are fine now.
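A minimal check of the behaviour described above, assuming a torch >= 0.4 installation:

```python
import torch

# On the 0.3.x line Normal had no cdf(), per the report above; with
# torch >= 0.4 the method exists, so the doctests pass.
normal = torch.distributions.Normal(torch.tensor(0.0), torch.tensor(1.0))
print(hasattr(normal, "cdf"))           # True on torch >= 0.4
print(normal.cdf(torch.tensor(0.0)))    # tensor(0.5000)
```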
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 setup(
3 name = 'pyhf',
4 version = '0.0.8',
5 description = '(partial) pure python histfactory implementation',
6 url = '',
7 author = 'Lukas Heinrich',
8 author_email = '[email protected]',
9 packages = find_packages(),
10 include_package_data = True,
11 install_requires = [
12 'numpy>=1.14.3',
13 'scipy'
14 ],
15 extras_require = {
16 'xmlimport': [
17 'uproot',
18 ],
19 'torch': [
20 'torch'
21 ],
22 'mxnet':[
23 'mxnet',
24 ],
25 'develop': [
26 'pyflakes',
27 'pytest>=3.5.1',
28 'pytest-cov>=2.5.1',
29 'pytest-benchmark[histogram]',
30 'python-coveralls',
31 'matplotlib',
32 'jupyter',
33 'uproot',
34 'papermill',
35 'torch',
36 'tensorflow',
37 'mxnet>=1.0.0',
38 'graphviz',
39 'sphinx',
40 'sphinxcontrib-bibtex',
41 'sphinxcontrib-napoleon',
42 'sphinx_rtd_theme',
43 'nbsphinx',
44 'jsonpatch',
45 'jsonschema>=2.6.0'
46 ]
47 },
48 entry_points = {
49 },
50 dependency_links = [
51 ]
52 )
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
packages = find_packages(),
include_package_data = True,
install_requires = [
- 'numpy>=1.14.3',
+ 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us
'scipy'
],
extras_require = {
@@ -17,10 +17,18 @@
'uproot',
],
'torch': [
- 'torch'
+ 'torch>=0.4.0'
],
'mxnet':[
- 'mxnet',
+ 'mxnet>=1.0.0',
+ 'requests<2.19.0,>=2.18.4',
+ 'numpy<1.15.0,>=1.8.2',
+ 'requests<2.19.0,>=2.18.4',
+ ],
+ 'tensorflow':[
+ 'tensorflow==1.10.0',
+ 'numpy<=1.14.5,>=1.13.3',
+ 'setuptools<=39.1.0',
],
'develop': [
'pyflakes',
@@ -28,13 +36,11 @@
'pytest-cov>=2.5.1',
'pytest-benchmark[histogram]',
'python-coveralls',
+ 'coverage==4.0.3', # coveralls
'matplotlib',
'jupyter',
'uproot',
'papermill',
- 'torch',
- 'tensorflow',
- 'mxnet>=1.0.0',
'graphviz',
'sphinx',
'sphinxcontrib-bibtex',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n- 'numpy>=1.14.3',\n+ 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us\n 'scipy'\n ],\n extras_require = {\n@@ -17,10 +17,18 @@\n 'uproot',\n ],\n 'torch': [\n- 'torch'\n+ 'torch>=0.4.0'\n ],\n 'mxnet':[\n- 'mxnet',\n+ 'mxnet>=1.0.0',\n+ 'requests<2.19.0,>=2.18.4',\n+ 'numpy<1.15.0,>=1.8.2',\n+ 'requests<2.19.0,>=2.18.4',\n+ ],\n+ 'tensorflow':[\n+ 'tensorflow==1.10.0',\n+ 'numpy<=1.14.5,>=1.13.3',\n+ 'setuptools<=39.1.0',\n ],\n 'develop': [\n 'pyflakes',\n@@ -28,13 +36,11 @@\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n+ 'coverage==4.0.3', # coveralls\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n- 'torch',\n- 'tensorflow',\n- 'mxnet>=1.0.0',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n", "issue": "Updated setup.py for pytorch > 0.4 dependency\n# Description\r\n\r\nI had 0.3.1 for Torch and that caused issues with some of the doctesting as the distributions did not have `cdf` methods. I forced an upgrade pytorch and things are fine now.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.8',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'numpy>=1.14.3',\n 'scipy'\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch'\n ],\n 'mxnet':[\n 'mxnet',\n ],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'torch',\n 'tensorflow',\n 'mxnet>=1.0.0',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonpatch',\n 'jsonschema>=2.6.0'\n ]\n },\n entry_points = {\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.8',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us\n 'scipy'\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch>=0.4.0'\n ],\n 'mxnet':[\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n 'tensorflow':[\n 'tensorflow==1.10.0',\n 'numpy<=1.14.5,>=1.13.3',\n 'setuptools<=39.1.0',\n ],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n 'coverage==4.0.3', # coveralls\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonpatch',\n 'jsonschema>=2.6.0'\n ]\n },\n entry_points = {\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}]} | 722 | 415 |
gh_patches_debug_31962 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-1086 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
handle exceptions that occur inside of jinja
## Issue
### Issue description
Given SQL with invalid Jinja like:
```
{{ config(
materialized = "table",
schema = 'custom_schema' -- render this in a custom schema
) }}
```
dbt will throw a very unhelpful error that looks like:
```
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/parser/base_sql.py", line 95, in parse_sql_nodes
macros=macros)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/parser/base.py", line 105, in parse_node
capture_macros=True)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/clients/jinja.py", line 198, in get_rendered
return render_template(template, ctx, node)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/clients/jinja.py", line 185, in render_template
return template.render(ctx)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "<template>", line 1, in top-level template code
TypeError: exceptions must derive from BaseException
```
It will likely be difficult to pinpoint the exact error, but dbt should ideally throw some sort of human-readable parsing error that indicates the model in which the syntax error occurred.
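One possible hardening, sketched here without any claim that it is the fix dbt shipped (the diff later in this record instead repairs the `ParserMacroCapture` initialisation), is to broaden the exception handling around template rendering so unexpected errors surface as compiler errors that name the offending model. The sketch reuses the imports from the `dbt/clients/jinja.py` module quoted below:

```python
import jinja2.exceptions

import dbt.exceptions


def render_template(template, ctx, node=None):
    try:
        return template.render(ctx)
    except (jinja2.exceptions.TemplateSyntaxError,
            jinja2.exceptions.UndefinedError) as e:
        e.translated = False
        dbt.exceptions.raise_compiler_error(str(e), node)
    except Exception as e:  # e.g. the bare TypeError in the traceback above
        model = node.get('name') if node else '(unknown model)'
        dbt.exceptions.raise_compiler_error(
            'Unexpected error while rendering {}: {}'.format(model, e), node)
```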
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dbt/clients/jinja.py`
Content:
```
1 import codecs
2 import linecache
3 import os
4
5 import jinja2
6 import jinja2._compat
7 import jinja2.ext
8 import jinja2.nodes
9 import jinja2.parser
10 import jinja2.sandbox
11
12 import dbt.compat
13 import dbt.exceptions
14
15 from dbt.node_types import NodeType
16 from dbt.utils import AttrDict
17
18 from dbt.logger import GLOBAL_LOGGER as logger # noqa
19
20
21 class MacroFuzzParser(jinja2.parser.Parser):
22 def parse_macro(self):
23 node = jinja2.nodes.Macro(lineno=next(self.stream).lineno)
24
25 # modified to fuzz macros defined in the same file. this way
26 # dbt can understand the stack of macros being called.
27 # - @cmcarthur
28 node.name = dbt.utils.get_dbt_macro_name(
29 self.parse_assign_target(name_only=True).name)
30
31 self.parse_signature(node)
32 node.body = self.parse_statements(('name:endmacro',),
33 drop_needle=True)
34 return node
35
36
37 class MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):
38 def _parse(self, source, name, filename):
39 return MacroFuzzParser(
40 self, source, name,
41 jinja2._compat.encode_filename(filename)
42 ).parse()
43
44 def _compile(self, source, filename):
45 """Override jinja's compilation to stash the rendered source inside
46 the python linecache for debugging.
47 """
48 if filename == '<template>':
49 # make a better filename
50 filename = 'dbt-{}'.format(
51 codecs.encode(os.urandom(12), 'hex').decode('ascii')
52 )
53 # encode, though I don't think this matters
54 filename = jinja2._compat.encode_filename(filename)
55 # put ourselves in the cache using the 'lazycache' method
56 linecache.cache[filename] = (lambda: source,)
57
58 return super(MacroFuzzEnvironment, self)._compile(source, filename)
59
60
61 class TemplateCache(object):
62
63 def __init__(self):
64 self.file_cache = {}
65
66 def get_node_template(self, node):
67 key = (node['package_name'], node['original_file_path'])
68
69 if key in self.file_cache:
70 return self.file_cache[key]
71
72 template = get_template(
73 string=node.get('raw_sql'),
74 ctx={},
75 node=node
76 )
77 self.file_cache[key] = template
78
79 return template
80
81 def clear(self):
82 self.file_cache.clear()
83
84
85 template_cache = TemplateCache()
86
87
88 def macro_generator(node):
89 def apply_context(context):
90 def call(*args, **kwargs):
91 name = node.get('name')
92 template = template_cache.get_node_template(node)
93 module = template.make_module(context, False, context)
94
95 if node['resource_type'] == NodeType.Operation:
96 macro = module.__dict__[dbt.utils.get_dbt_operation_name(name)]
97 else:
98 macro = module.__dict__[dbt.utils.get_dbt_macro_name(name)]
99 module.__dict__.update(context)
100
101 try:
102 return macro(*args, **kwargs)
103 except dbt.exceptions.MacroReturn as e:
104 return e.value
105 except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:
106 dbt.exceptions.raise_compiler_error(str(e), node)
107 except dbt.exceptions.CompilationException as e:
108 e.stack.append(node)
109 raise e
110
111 return call
112 return apply_context
113
114
115 class MaterializationExtension(jinja2.ext.Extension):
116 tags = ['materialization']
117
118 def parse(self, parser):
119 node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
120 materialization_name = \
121 parser.parse_assign_target(name_only=True).name
122
123 adapter_name = 'default'
124 node.args = []
125 node.defaults = []
126
127 while parser.stream.skip_if('comma'):
128 target = parser.parse_assign_target(name_only=True)
129
130 if target.name == 'default':
131 pass
132
133 elif target.name == 'adapter':
134 parser.stream.expect('assign')
135 value = parser.parse_expression()
136 adapter_name = value.value
137
138 else:
139 dbt.exceptions.invalid_materialization_argument(
140 materialization_name, target.name)
141
142 node.name = dbt.utils.get_materialization_macro_name(
143 materialization_name, adapter_name)
144
145 node.body = parser.parse_statements(('name:endmaterialization',),
146 drop_needle=True)
147
148 return node
149
150
151 class OperationExtension(jinja2.ext.Extension):
152 tags = ['operation']
153
154 def parse(self, parser):
155 node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
156 operation_name = \
157 parser.parse_assign_target(name_only=True).name
158
159 node.args = []
160 node.defaults = []
161
162 while parser.stream.skip_if('comma'):
163 target = parser.parse_assign_target(name_only=True)
164
165 node.name = dbt.utils.get_operation_macro_name(operation_name)
166
167 node.body = parser.parse_statements(('name:endoperation',),
168 drop_needle=True)
169
170 return node
171
172
173 class DocumentationExtension(jinja2.ext.Extension):
174 tags = ['docs']
175
176 def parse(self, parser):
177 node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)
178 docs_name = parser.parse_assign_target(name_only=True).name
179
180 node.args = []
181 node.defaults = []
182 node.name = dbt.utils.get_docs_macro_name(docs_name)
183 node.body = parser.parse_statements(('name:enddocs',),
184 drop_needle=True)
185 return node
186
187
188 def _is_dunder_name(name):
189 return name.startswith('__') and name.endswith('__')
190
191
192 def create_macro_capture_env(node):
193
194 class ParserMacroCapture(jinja2.Undefined):
195 """
196 This class sets up the parser to capture macros.
197 """
198 def __init__(self, hint=None, obj=None, name=None,
199 exc=None):
200 super(jinja2.Undefined, self).__init__()
201 self.node = node
202 self.name = name
203 self.package_name = node.get('package_name')
204 # jinja uses these for safety, so we have to override them.
205 # see https://github.com/pallets/jinja/blob/master/jinja2/sandbox.py#L332-L339 # noqa
206 self.unsafe_callable = False
207 self.alters_data = False
208
209 def __deepcopy__(self, memo):
210 path = os.path.join(self.node.get('root_path'),
211 self.node.get('original_file_path'))
212
213 logger.debug(
214 'A ParserMacroCapture has been deecopy()d, invalid reference '
215 'to "{}" in node {}.{} (source path: {})'
216 .format(self.name, self.node.get('package_name'),
217 self.node.get('name'),
218 path))
219
220 dbt.exceptions.raise_compiler_error(
221 'dbt has detected at least one invalid reference in {}.{}. '
222 'Check logs for more information'
223 .format(self.node.get('package_name'), self.node.get('name'))
224 )
225
226 def __getattr__(self, name):
227 if name == 'name' or _is_dunder_name(name):
228 raise AttributeError(
229 "'{}' object has no attribute '{}'"
230 .format(type(self).__name__, name)
231 )
232
233 self.package_name = self.name
234 self.name = name
235
236 return self
237
238 def __call__(self, *args, **kwargs):
239 return True
240
241 return ParserMacroCapture
242
243
244 def get_environment(node=None, capture_macros=False):
245 args = {
246 'extensions': []
247 }
248
249 if capture_macros:
250 args['undefined'] = create_macro_capture_env(node)
251
252 args['extensions'].append(MaterializationExtension)
253 args['extensions'].append(OperationExtension)
254 args['extensions'].append(DocumentationExtension)
255
256 return MacroFuzzEnvironment(**args)
257
258
259 def parse(string):
260 try:
261 return get_environment().parse(dbt.compat.to_string(string))
262
263 except (jinja2.exceptions.TemplateSyntaxError,
264 jinja2.exceptions.UndefinedError) as e:
265 e.translated = False
266 dbt.exceptions.raise_compiler_error(str(e))
267
268
269 def get_template(string, ctx, node=None, capture_macros=False):
270 try:
271 env = get_environment(node, capture_macros)
272
273 template_source = dbt.compat.to_string(string)
274 return env.from_string(template_source, globals=ctx)
275
276 except (jinja2.exceptions.TemplateSyntaxError,
277 jinja2.exceptions.UndefinedError) as e:
278 e.translated = False
279 dbt.exceptions.raise_compiler_error(str(e), node)
280
281
282 def render_template(template, ctx, node=None):
283 try:
284 return template.render(ctx)
285
286 except (jinja2.exceptions.TemplateSyntaxError,
287 jinja2.exceptions.UndefinedError) as e:
288 e.translated = False
289 dbt.exceptions.raise_compiler_error(str(e), node)
290
291
292 def get_rendered(string, ctx, node=None,
293 capture_macros=False):
294 template = get_template(string, ctx, node,
295 capture_macros=capture_macros)
296
297 return render_template(template, ctx, node)
298
299
300 def undefined_error(msg):
301 raise jinja2.exceptions.UndefinedError(msg)
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dbt/clients/jinja.py b/dbt/clients/jinja.py
--- a/dbt/clients/jinja.py
+++ b/dbt/clients/jinja.py
@@ -195,9 +195,8 @@
"""
This class sets up the parser to capture macros.
"""
- def __init__(self, hint=None, obj=None, name=None,
- exc=None):
- super(jinja2.Undefined, self).__init__()
+ def __init__(self, hint=None, obj=None, name=None, exc=None):
+ super(ParserMacroCapture, self).__init__(hint=hint, name=name)
self.node = node
self.name = name
self.package_name = node.get('package_name')
@@ -211,18 +210,22 @@
self.node.get('original_file_path'))
logger.debug(
- 'A ParserMacroCapture has been deecopy()d, invalid reference '
- 'to "{}" in node {}.{} (source path: {})'
+ 'dbt encountered an undefined variable, "{}" in node {}.{} '
+ '(source path: {})'
.format(self.name, self.node.get('package_name'),
- self.node.get('name'),
- path))
+ self.node.get('name'), path))
+ # match jinja's message
dbt.exceptions.raise_compiler_error(
- 'dbt has detected at least one invalid reference in {}.{}. '
- 'Check logs for more information'
- .format(self.node.get('package_name'), self.node.get('name'))
+ "{!r} is undefined".format(self.name),
+ node=self.node
)
+ def __getitem__(self, name):
+ # Propagate the undefined value if a caller accesses this as if it
+ # were a dictionary
+ return self
+
def __getattr__(self, name):
if name == 'name' or _is_dunder_name(name):
raise AttributeError(
| {"golden_diff": "diff --git a/dbt/clients/jinja.py b/dbt/clients/jinja.py\n--- a/dbt/clients/jinja.py\n+++ b/dbt/clients/jinja.py\n@@ -195,9 +195,8 @@\n \"\"\"\n This class sets up the parser to capture macros.\n \"\"\"\n- def __init__(self, hint=None, obj=None, name=None,\n- exc=None):\n- super(jinja2.Undefined, self).__init__()\n+ def __init__(self, hint=None, obj=None, name=None, exc=None):\n+ super(ParserMacroCapture, self).__init__(hint=hint, name=name)\n self.node = node\n self.name = name\n self.package_name = node.get('package_name')\n@@ -211,18 +210,22 @@\n self.node.get('original_file_path'))\n \n logger.debug(\n- 'A ParserMacroCapture has been deecopy()d, invalid reference '\n- 'to \"{}\" in node {}.{} (source path: {})'\n+ 'dbt encountered an undefined variable, \"{}\" in node {}.{} '\n+ '(source path: {})'\n .format(self.name, self.node.get('package_name'),\n- self.node.get('name'),\n- path))\n+ self.node.get('name'), path))\n \n+ # match jinja's message\n dbt.exceptions.raise_compiler_error(\n- 'dbt has detected at least one invalid reference in {}.{}. '\n- 'Check logs for more information'\n- .format(self.node.get('package_name'), self.node.get('name'))\n+ \"{!r} is undefined\".format(self.name),\n+ node=self.node\n )\n \n+ def __getitem__(self, name):\n+ # Propagate the undefined value if a caller accesses this as if it\n+ # were a dictionary\n+ return self\n+\n def __getattr__(self, name):\n if name == 'name' or _is_dunder_name(name):\n raise AttributeError(\n", "issue": "handle exceptions that occur inside of jinja\n## Issue\r\n\r\n### Issue description\r\nGiven sql with invalid jinja like:\r\n```\r\n{{ config(\r\n materialized = \"table\",\r\n schema = 'custom_schema' -- render this in a custom schema\r\n) }}\r\n```\r\n\r\ndbt will throw a very unhelpful error that looks like:\r\n```\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/parser/base_sql.py\", line 95, in parse_sql_nodes\r\n macros=macros)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/parser/base.py\", line 105, in parse_node\r\n capture_macros=True)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/clients/jinja.py\", line 198, in get_rendered\r\n return render_template(template, ctx, node)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/dbt/clients/jinja.py\", line 185, in render_template\r\n return template.render(ctx)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/asyncsupport.py\", line 76, in render\r\n return original_render(self, *args, **kwargs)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/environment.py\", line 1008, in render\r\n return self.environment.handle_exception(exc_info, True)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/environment.py\", line 780, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/local/Cellar/dbt/0.10.2/libexec/lib/python3.7/site-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"<template>\", line 1, in top-level template code\r\nTypeError: exceptions must derive from BaseException\r\n```\r\n\r\nIt will likely be difficult to pinpoint the exact error, but it should ideally be able to throw some sort of human-readable parsing error, indicating the model where the syntax error occurred.\n", "before_files": [{"content": "import codecs\nimport linecache\nimport os\n\nimport 
jinja2\nimport jinja2._compat\nimport jinja2.ext\nimport jinja2.nodes\nimport jinja2.parser\nimport jinja2.sandbox\n\nimport dbt.compat\nimport dbt.exceptions\n\nfrom dbt.node_types import NodeType\nfrom dbt.utils import AttrDict\n\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\n\n\nclass MacroFuzzParser(jinja2.parser.Parser):\n def parse_macro(self):\n node = jinja2.nodes.Macro(lineno=next(self.stream).lineno)\n\n # modified to fuzz macros defined in the same file. this way\n # dbt can understand the stack of macros being called.\n # - @cmcarthur\n node.name = dbt.utils.get_dbt_macro_name(\n self.parse_assign_target(name_only=True).name)\n\n self.parse_signature(node)\n node.body = self.parse_statements(('name:endmacro',),\n drop_needle=True)\n return node\n\n\nclass MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):\n def _parse(self, source, name, filename):\n return MacroFuzzParser(\n self, source, name,\n jinja2._compat.encode_filename(filename)\n ).parse()\n\n def _compile(self, source, filename):\n \"\"\"Override jinja's compilation to stash the rendered source inside\n the python linecache for debugging.\n \"\"\"\n if filename == '<template>':\n # make a better filename\n filename = 'dbt-{}'.format(\n codecs.encode(os.urandom(12), 'hex').decode('ascii')\n )\n # encode, though I don't think this matters\n filename = jinja2._compat.encode_filename(filename)\n # put ourselves in the cache using the 'lazycache' method\n linecache.cache[filename] = (lambda: source,)\n\n return super(MacroFuzzEnvironment, self)._compile(source, filename)\n\n\nclass TemplateCache(object):\n\n def __init__(self):\n self.file_cache = {}\n\n def get_node_template(self, node):\n key = (node['package_name'], node['original_file_path'])\n\n if key in self.file_cache:\n return self.file_cache[key]\n\n template = get_template(\n string=node.get('raw_sql'),\n ctx={},\n node=node\n )\n self.file_cache[key] = template\n\n return template\n\n def clear(self):\n self.file_cache.clear()\n\n\ntemplate_cache = TemplateCache()\n\n\ndef macro_generator(node):\n def apply_context(context):\n def call(*args, **kwargs):\n name = node.get('name')\n template = template_cache.get_node_template(node)\n module = template.make_module(context, False, context)\n\n if node['resource_type'] == NodeType.Operation:\n macro = module.__dict__[dbt.utils.get_dbt_operation_name(name)]\n else:\n macro = module.__dict__[dbt.utils.get_dbt_macro_name(name)]\n module.__dict__.update(context)\n\n try:\n return macro(*args, **kwargs)\n except dbt.exceptions.MacroReturn as e:\n return e.value\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n dbt.exceptions.raise_compiler_error(str(e), node)\n except dbt.exceptions.CompilationException as e:\n e.stack.append(node)\n raise e\n\n return call\n return apply_context\n\n\nclass MaterializationExtension(jinja2.ext.Extension):\n tags = ['materialization']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n materialization_name = \\\n parser.parse_assign_target(name_only=True).name\n\n adapter_name = 'default'\n node.args = []\n node.defaults = []\n\n while parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n if target.name == 'default':\n pass\n\n elif target.name == 'adapter':\n parser.stream.expect('assign')\n value = parser.parse_expression()\n adapter_name = value.value\n\n else:\n dbt.exceptions.invalid_materialization_argument(\n materialization_name, target.name)\n\n node.name = 
dbt.utils.get_materialization_macro_name(\n materialization_name, adapter_name)\n\n node.body = parser.parse_statements(('name:endmaterialization',),\n drop_needle=True)\n\n return node\n\n\nclass OperationExtension(jinja2.ext.Extension):\n tags = ['operation']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n operation_name = \\\n parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n\n while parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n node.name = dbt.utils.get_operation_macro_name(operation_name)\n\n node.body = parser.parse_statements(('name:endoperation',),\n drop_needle=True)\n\n return node\n\n\nclass DocumentationExtension(jinja2.ext.Extension):\n tags = ['docs']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n docs_name = parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n node.name = dbt.utils.get_docs_macro_name(docs_name)\n node.body = parser.parse_statements(('name:enddocs',),\n drop_needle=True)\n return node\n\n\ndef _is_dunder_name(name):\n return name.startswith('__') and name.endswith('__')\n\n\ndef create_macro_capture_env(node):\n\n class ParserMacroCapture(jinja2.Undefined):\n \"\"\"\n This class sets up the parser to capture macros.\n \"\"\"\n def __init__(self, hint=None, obj=None, name=None,\n exc=None):\n super(jinja2.Undefined, self).__init__()\n self.node = node\n self.name = name\n self.package_name = node.get('package_name')\n # jinja uses these for safety, so we have to override them.\n # see https://github.com/pallets/jinja/blob/master/jinja2/sandbox.py#L332-L339 # noqa\n self.unsafe_callable = False\n self.alters_data = False\n\n def __deepcopy__(self, memo):\n path = os.path.join(self.node.get('root_path'),\n self.node.get('original_file_path'))\n\n logger.debug(\n 'A ParserMacroCapture has been deecopy()d, invalid reference '\n 'to \"{}\" in node {}.{} (source path: {})'\n .format(self.name, self.node.get('package_name'),\n self.node.get('name'),\n path))\n\n dbt.exceptions.raise_compiler_error(\n 'dbt has detected at least one invalid reference in {}.{}. 
'\n 'Check logs for more information'\n .format(self.node.get('package_name'), self.node.get('name'))\n )\n\n def __getattr__(self, name):\n if name == 'name' or _is_dunder_name(name):\n raise AttributeError(\n \"'{}' object has no attribute '{}'\"\n .format(type(self).__name__, name)\n )\n\n self.package_name = self.name\n self.name = name\n\n return self\n\n def __call__(self, *args, **kwargs):\n return True\n\n return ParserMacroCapture\n\n\ndef get_environment(node=None, capture_macros=False):\n args = {\n 'extensions': []\n }\n\n if capture_macros:\n args['undefined'] = create_macro_capture_env(node)\n\n args['extensions'].append(MaterializationExtension)\n args['extensions'].append(OperationExtension)\n args['extensions'].append(DocumentationExtension)\n\n return MacroFuzzEnvironment(**args)\n\n\ndef parse(string):\n try:\n return get_environment().parse(dbt.compat.to_string(string))\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e))\n\n\ndef get_template(string, ctx, node=None, capture_macros=False):\n try:\n env = get_environment(node, capture_macros)\n\n template_source = dbt.compat.to_string(string)\n return env.from_string(template_source, globals=ctx)\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e), node)\n\n\ndef render_template(template, ctx, node=None):\n try:\n return template.render(ctx)\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e), node)\n\n\ndef get_rendered(string, ctx, node=None,\n capture_macros=False):\n template = get_template(string, ctx, node,\n capture_macros=capture_macros)\n\n return render_template(template, ctx, node)\n\n\ndef undefined_error(msg):\n raise jinja2.exceptions.UndefinedError(msg)\n", "path": "dbt/clients/jinja.py"}], "after_files": [{"content": "import codecs\nimport linecache\nimport os\n\nimport jinja2\nimport jinja2._compat\nimport jinja2.ext\nimport jinja2.nodes\nimport jinja2.parser\nimport jinja2.sandbox\n\nimport dbt.compat\nimport dbt.exceptions\n\nfrom dbt.node_types import NodeType\nfrom dbt.utils import AttrDict\n\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\n\n\nclass MacroFuzzParser(jinja2.parser.Parser):\n def parse_macro(self):\n node = jinja2.nodes.Macro(lineno=next(self.stream).lineno)\n\n # modified to fuzz macros defined in the same file. 
this way\n # dbt can understand the stack of macros being called.\n # - @cmcarthur\n node.name = dbt.utils.get_dbt_macro_name(\n self.parse_assign_target(name_only=True).name)\n\n self.parse_signature(node)\n node.body = self.parse_statements(('name:endmacro',),\n drop_needle=True)\n return node\n\n\nclass MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):\n def _parse(self, source, name, filename):\n return MacroFuzzParser(\n self, source, name,\n jinja2._compat.encode_filename(filename)\n ).parse()\n\n def _compile(self, source, filename):\n \"\"\"Override jinja's compilation to stash the rendered source inside\n the python linecache for debugging.\n \"\"\"\n if filename == '<template>':\n # make a better filename\n filename = 'dbt-{}'.format(\n codecs.encode(os.urandom(12), 'hex').decode('ascii')\n )\n # encode, though I don't think this matters\n filename = jinja2._compat.encode_filename(filename)\n # put ourselves in the cache using the 'lazycache' method\n linecache.cache[filename] = (lambda: source,)\n\n return super(MacroFuzzEnvironment, self)._compile(source, filename)\n\n\nclass TemplateCache(object):\n\n def __init__(self):\n self.file_cache = {}\n\n def get_node_template(self, node):\n key = (node['package_name'], node['original_file_path'])\n\n if key in self.file_cache:\n return self.file_cache[key]\n\n template = get_template(\n string=node.get('raw_sql'),\n ctx={},\n node=node\n )\n self.file_cache[key] = template\n\n return template\n\n def clear(self):\n self.file_cache.clear()\n\n\ntemplate_cache = TemplateCache()\n\n\ndef macro_generator(node):\n def apply_context(context):\n def call(*args, **kwargs):\n name = node.get('name')\n template = template_cache.get_node_template(node)\n module = template.make_module(context, False, context)\n\n if node['resource_type'] == NodeType.Operation:\n macro = module.__dict__[dbt.utils.get_dbt_operation_name(name)]\n else:\n macro = module.__dict__[dbt.utils.get_dbt_macro_name(name)]\n module.__dict__.update(context)\n\n try:\n return macro(*args, **kwargs)\n except dbt.exceptions.MacroReturn as e:\n return e.value\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n dbt.exceptions.raise_compiler_error(str(e), node)\n except dbt.exceptions.CompilationException as e:\n e.stack.append(node)\n raise e\n\n return call\n return apply_context\n\n\nclass MaterializationExtension(jinja2.ext.Extension):\n tags = ['materialization']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n materialization_name = \\\n parser.parse_assign_target(name_only=True).name\n\n adapter_name = 'default'\n node.args = []\n node.defaults = []\n\n while parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n if target.name == 'default':\n pass\n\n elif target.name == 'adapter':\n parser.stream.expect('assign')\n value = parser.parse_expression()\n adapter_name = value.value\n\n else:\n dbt.exceptions.invalid_materialization_argument(\n materialization_name, target.name)\n\n node.name = dbt.utils.get_materialization_macro_name(\n materialization_name, adapter_name)\n\n node.body = parser.parse_statements(('name:endmaterialization',),\n drop_needle=True)\n\n return node\n\n\nclass OperationExtension(jinja2.ext.Extension):\n tags = ['operation']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n operation_name = \\\n parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n\n while 
parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n node.name = dbt.utils.get_operation_macro_name(operation_name)\n\n node.body = parser.parse_statements(('name:endoperation',),\n drop_needle=True)\n\n return node\n\n\nclass DocumentationExtension(jinja2.ext.Extension):\n tags = ['docs']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n docs_name = parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n node.name = dbt.utils.get_docs_macro_name(docs_name)\n node.body = parser.parse_statements(('name:enddocs',),\n drop_needle=True)\n return node\n\n\ndef _is_dunder_name(name):\n return name.startswith('__') and name.endswith('__')\n\n\ndef create_macro_capture_env(node):\n\n class ParserMacroCapture(jinja2.Undefined):\n \"\"\"\n This class sets up the parser to capture macros.\n \"\"\"\n def __init__(self, hint=None, obj=None, name=None, exc=None):\n super(ParserMacroCapture, self).__init__(hint=hint, name=name)\n self.node = node\n self.name = name\n self.package_name = node.get('package_name')\n # jinja uses these for safety, so we have to override them.\n # see https://github.com/pallets/jinja/blob/master/jinja2/sandbox.py#L332-L339 # noqa\n self.unsafe_callable = False\n self.alters_data = False\n\n def __deepcopy__(self, memo):\n path = os.path.join(self.node.get('root_path'),\n self.node.get('original_file_path'))\n\n logger.debug(\n 'dbt encountered an undefined variable, \"{}\" in node {}.{} '\n '(source path: {})'\n .format(self.name, self.node.get('package_name'),\n self.node.get('name'), path))\n\n # match jinja's message\n dbt.exceptions.raise_compiler_error(\n \"{!r} is undefined\".format(self.name),\n node=self.node\n )\n\n def __getitem__(self, name):\n # Propagate the undefined value if a caller accesses this as if it\n # were a dictionary\n return self\n\n def __getattr__(self, name):\n if name == 'name' or _is_dunder_name(name):\n raise AttributeError(\n \"'{}' object has no attribute '{}'\"\n .format(type(self).__name__, name)\n )\n\n self.package_name = self.name\n self.name = name\n\n return self\n\n def __call__(self, *args, **kwargs):\n return True\n\n return ParserMacroCapture\n\n\ndef get_environment(node=None, capture_macros=False):\n args = {\n 'extensions': []\n }\n\n if capture_macros:\n args['undefined'] = create_macro_capture_env(node)\n\n args['extensions'].append(MaterializationExtension)\n args['extensions'].append(OperationExtension)\n args['extensions'].append(DocumentationExtension)\n\n return MacroFuzzEnvironment(**args)\n\n\ndef parse(string):\n try:\n return get_environment().parse(dbt.compat.to_string(string))\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e))\n\n\ndef get_template(string, ctx, node=None, capture_macros=False):\n try:\n env = get_environment(node, capture_macros)\n\n template_source = dbt.compat.to_string(string)\n return env.from_string(template_source, globals=ctx)\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e), node)\n\n\ndef render_template(template, ctx, node=None):\n try:\n return template.render(ctx)\n\n except (jinja2.exceptions.TemplateSyntaxError,\n jinja2.exceptions.UndefinedError) as e:\n e.translated = False\n dbt.exceptions.raise_compiler_error(str(e), node)\n\n\ndef 
get_rendered(string, ctx, node=None,\n capture_macros=False):\n template = get_template(string, ctx, node,\n capture_macros=capture_macros)\n\n return render_template(template, ctx, node)\n\n\ndef undefined_error(msg):\n raise jinja2.exceptions.UndefinedError(msg)\n", "path": "dbt/clients/jinja.py"}]} | 3,594 | 440 |
gh_patches_debug_24803 | rasdani/github-patches | git_diff | facebookresearch__hydra-135 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when using strict mode and multirun
Modifying the `6_sweep` example to have `@hydra.main(config_path="conf/config.yaml", strict=True)` causes this error:
```python3
6_sweep(master*)$ python3 experiment.py -m
Traceback (most recent call last):
File "experiment.py", line 15, in <module>
experiment()
File "/private/home/bda/repos/hydra/hydra/main.py", line 62, in decorated_main
run_hydra(get_args(), task_function, config_path, strict)
File "/private/home/bda/repos/hydra/hydra/_internal/utils.py", line 57, in run_hydra
hydra.multirun(overrides=args.overrides)
File "/private/home/bda/repos/hydra/hydra/_internal/hydra.py", line 124, in multirun
return sweeper.sweep(arguments=task_overrides)
File "/private/home/bda/repos/hydra/hydra/plugins/step_sweeper.py", line 63, in sweep
results = self.launcher.launch(batch)
File "/private/home/bda/repos/hydra/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py", line 67, in launch
self.config.hydra.job.num_jobs = num_jobs
File "/private/home/bda/anaconda3/lib/python3.7/site-packages/omegaconf-1.3.0-py3.7.egg/omegaconf/dictconfig.py", line 59, in __setattr__
self.__setitem__(key, value)
File "/private/home/bda/anaconda3/lib/python3.7/site-packages/omegaconf-1.3.0-py3.7.egg/omegaconf/dictconfig.py", line 28, in __setitem__
raise KeyError("Accessing unknown key in a struct : {}".format(self.get_full_key(key)))
KeyError: 'Accessing unknown key in a struct : hydra.job.num_jobs'
```
---
Perhaps hydra needs to add the right placeholders in the config here for the plugins to modify/write into, or the plugin needs to have a temporary config that's not locked?
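
For illustration, a minimal sketch of the second idea: temporarily lifting struct mode around the write with OmegaConf's `open_dict`, which the launcher shown below already imports. The surrounding names (`self.config`, `num_jobs`) are taken from the traceback above, so treat this as a sketch rather than a confirmed fix:

```python
from omegaconf import open_dict

# With strict=True the config is in struct mode, so assigning an unknown
# key raises KeyError unless struct mode is lifted for the write.
with open_dict(self.config):
    self.config.hydra.job.num_jobs = num_jobs
```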
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/submitit/hydra_plugins/submitit/submitit_launcher.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import logging
3 import os
4
5 import hydra._internal.utils
6 import hydra.plugins.common.utils
7 from hydra.plugins import Launcher
8
9 from omegaconf import open_dict
10
11 # pylint: disable=C0103
12 log = logging.getLogger(__name__)
13
14
15 class SubmititLauncher(Launcher):
16 def __init__(self, queue, folder, queue_parameters, conda_file=None):
17 self.queue = queue
18 self.queue_parameters = queue_parameters
19 self.folder = folder
20 self.conda_file = conda_file
21 self.config = None
22 self.task_function = None
23 self.verbose = None
24 self.sweep_configs = None
25 self.config_loader = None
26
27 def setup(self, config, config_loader, task_function, verbose):
28 self.config = config
29 self.config_loader = config_loader
30 self.task_function = task_function
31 self.verbose = verbose
32
33 def launch_job(self, sweep_overrides, job_dir_key, job_num):
34 hydra.plugins.common.utils.configure_log(None, self.verbose)
35 hydra.plugins.common.utils.setup_globals()
36 sweep_config = self.config_loader.load_sweep_config(
37 self.config, sweep_overrides
38 )
39 with open_dict(sweep_config):
40 # Populate new job variables
41 if "SLURM_JOB_ID" in os.environ:
42 sweep_config.hydra.job.id = os.environ["SLURM_JOB_ID"]
43 elif "CHRONOS_JOB_ID" in os.environ:
44 sweep_config.hydra.job.id = os.environ["CHRONOS_JOB_ID"]
45 else:
46 sweep_config.hydra.job.id = "unknown"
47
48 sweep_config.hydra.job.num = job_num
49 sweep_config.hydra.job.override_dirname = hydra.plugins.common.utils.get_overrides_dirname(
50 sweep_config.hydra.overrides.task
51 )
52
53 return hydra.plugins.common.utils.run_job(
54 config=sweep_config,
55 task_function=self.task_function,
56 verbose=self.verbose,
57 job_dir_key=job_dir_key,
58 job_subdir_key="hydra.sweep.subdir",
59 )
60
61 def launch(self, job_overrides):
62 import submitit
63
64 num_jobs = len(job_overrides)
65 assert num_jobs > 0
66 self.config.hydra.job.num_jobs = num_jobs
67 if self.queue == "auto":
68 executor = submitit.AutoExecutor(
69 folder=self.folder, conda_file=self.conda_file
70 )
71 elif self.queue == "slurm":
72 executor = submitit.SlurmExecutor(folder=self.folder)
73 elif self.queue == "chronos":
74 executor = submitit.ChronosExecutor(
75 folder=self.folder, conda_file=self.conda_file
76 )
77 elif self.queue == "local":
78 executor = submitit.LocalExecutor(folder=self.folder)
79 else:
80 raise RuntimeError("Unsupported queue type {}".format(self.queue))
81
82 executor.update_parameters(**self.queue_parameters[self.queue])
83
84 log.info("Sweep output dir : {}".format(self.config.hydra.sweep.dir))
85 path_str = str(self.config.hydra.sweep.dir)
86 os.makedirs(path_str, exist_ok=True)
87 if self.config.hydra.sweep.mode is not None:
88 mode = int(str(self.config.hydra.sweep.mode), 8)
89 os.chmod(path_str, mode=mode)
90
91 jobs = []
92 for job_num in range(num_jobs):
93 sweep_override = list(job_overrides[job_num])
94 log.info(
95 "\t#{} : {}".format(
96 job_num,
97 " ".join(
98 hydra.plugins.common.utils.filter_overrides(sweep_override)
99 ),
100 )
101 )
102 job = executor.submit(
103 self.launch_job, sweep_override, "hydra.sweep.dir", job_num
104 )
105 jobs.append(job)
106
107 return [j.results() for j in jobs]
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py b/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py
--- a/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py
+++ b/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py
@@ -60,11 +60,14 @@
)
def launch(self, job_overrides):
+ # lazy import to ensurue plugin discovery remains fast
import submitit
num_jobs = len(job_overrides)
assert num_jobs > 0
- self.config.hydra.job.num_jobs = num_jobs
+ with open_dict(self.config):
+ self.config.hydra.job.num_jobs = num_jobs
+
if self.queue == "auto":
executor = submitit.AutoExecutor(
folder=self.folder, conda_file=self.conda_file, par_file=self.par_file
@@ -85,7 +88,7 @@
log.info("Sweep output dir : {}".format(self.config.hydra.sweep.dir))
path_str = str(self.config.hydra.sweep.dir)
os.makedirs(path_str, exist_ok=True)
- if self.config.hydra.sweep.mode is not None:
+ if "mode" in self.config.hydra.sweep:
mode = int(str(self.config.hydra.sweep.mode), 8)
os.chmod(path_str, mode=mode)
| {"golden_diff": "diff --git a/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py b/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py\n--- a/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py\n+++ b/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py\n@@ -60,11 +60,14 @@\n )\n \n def launch(self, job_overrides):\n+ # lazy import to ensurue plugin discovery remains fast\n import submitit\n \n num_jobs = len(job_overrides)\n assert num_jobs > 0\n- self.config.hydra.job.num_jobs = num_jobs\n+ with open_dict(self.config):\n+ self.config.hydra.job.num_jobs = num_jobs\n+\n if self.queue == \"auto\":\n executor = submitit.AutoExecutor(\n folder=self.folder, conda_file=self.conda_file, par_file=self.par_file\n@@ -85,7 +88,7 @@\n log.info(\"Sweep output dir : {}\".format(self.config.hydra.sweep.dir))\n path_str = str(self.config.hydra.sweep.dir)\n os.makedirs(path_str, exist_ok=True)\n- if self.config.hydra.sweep.mode is not None:\n+ if \"mode\" in self.config.hydra.sweep:\n mode = int(str(self.config.hydra.sweep.mode), 8)\n os.chmod(path_str, mode=mode)\n", "issue": "Error when using strict mode and multirun\nModifying the `6_sweep` example to have `@hydra.main(config_path=\"conf/config.yaml\", strict=True)` causes this error:\r\n\r\n```python3\r\n6_sweep(master*)$ python3 experiment.py -m\r\nTraceback (most recent call last):\r\n File \"experiment.py\", line 15, in <module>\r\n experiment()\r\n File \"/private/home/bda/repos/hydra/hydra/main.py\", line 62, in decorated_main\r\n run_hydra(get_args(), task_function, config_path, strict)\r\n File \"/private/home/bda/repos/hydra/hydra/_internal/utils.py\", line 57, in run_hydra\r\n hydra.multirun(overrides=args.overrides)\r\n File \"/private/home/bda/repos/hydra/hydra/_internal/hydra.py\", line 124, in multirun\r\n return sweeper.sweep(arguments=task_overrides)\r\n File \"/private/home/bda/repos/hydra/hydra/plugins/step_sweeper.py\", line 63, in sweep\r\n results = self.launcher.launch(batch)\r\n File \"/private/home/bda/repos/hydra/plugins/submitit/hydra_plugins/submitit/submitit_launcher.py\", line 67, in launch\r\n self.config.hydra.job.num_jobs = num_jobs\r\n File \"/private/home/bda/anaconda3/lib/python3.7/site-packages/omegaconf-1.3.0-py3.7.egg/omegaconf/dictconfig.py\", line 59, in __setattr__\r\n self.__setitem__(key, value)\r\n File \"/private/home/bda/anaconda3/lib/python3.7/site-packages/omegaconf-1.3.0-py3.7.egg/omegaconf/dictconfig.py\", line 28, in __setitem__\r\n raise KeyError(\"Accessing unknown key in a struct : {}\".format(self.get_full_key(key)))\r\nKeyError: 'Accessing unknown key in a struct : hydra.job.num_jobs'\r\n```\r\n\r\n---\r\n\r\nPerhaps hydra needs to add the right placeholders in the config here for the plugins to modify/write into, or the plugin needs to have a temporarily config that's not locked?\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport logging\nimport os\n\nimport hydra._internal.utils\nimport hydra.plugins.common.utils\nfrom hydra.plugins import Launcher\n\nfrom omegaconf import open_dict\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\nclass SubmititLauncher(Launcher):\n def __init__(self, queue, folder, queue_parameters, conda_file=None):\n self.queue = queue\n self.queue_parameters = queue_parameters\n self.folder = folder\n self.conda_file = conda_file\n self.config = None\n self.task_function = None\n self.verbose = None\n self.sweep_configs = None\n self.config_loader = None\n\n def setup(self, config, config_loader, task_function, verbose):\n self.config = config\n self.config_loader = config_loader\n self.task_function = task_function\n self.verbose = verbose\n\n def launch_job(self, sweep_overrides, job_dir_key, job_num):\n hydra.plugins.common.utils.configure_log(None, self.verbose)\n hydra.plugins.common.utils.setup_globals()\n sweep_config = self.config_loader.load_sweep_config(\n self.config, sweep_overrides\n )\n with open_dict(sweep_config):\n # Populate new job variables\n if \"SLURM_JOB_ID\" in os.environ:\n sweep_config.hydra.job.id = os.environ[\"SLURM_JOB_ID\"]\n elif \"CHRONOS_JOB_ID\" in os.environ:\n sweep_config.hydra.job.id = os.environ[\"CHRONOS_JOB_ID\"]\n else:\n sweep_config.hydra.job.id = \"unknown\"\n\n sweep_config.hydra.job.num = job_num\n sweep_config.hydra.job.override_dirname = hydra.plugins.common.utils.get_overrides_dirname(\n sweep_config.hydra.overrides.task\n )\n\n return hydra.plugins.common.utils.run_job(\n config=sweep_config,\n task_function=self.task_function,\n verbose=self.verbose,\n job_dir_key=job_dir_key,\n job_subdir_key=\"hydra.sweep.subdir\",\n )\n\n def launch(self, job_overrides):\n import submitit\n\n num_jobs = len(job_overrides)\n assert num_jobs > 0\n self.config.hydra.job.num_jobs = num_jobs\n if self.queue == \"auto\":\n executor = submitit.AutoExecutor(\n folder=self.folder, conda_file=self.conda_file\n )\n elif self.queue == \"slurm\":\n executor = submitit.SlurmExecutor(folder=self.folder)\n elif self.queue == \"chronos\":\n executor = submitit.ChronosExecutor(\n folder=self.folder, conda_file=self.conda_file\n )\n elif self.queue == \"local\":\n executor = submitit.LocalExecutor(folder=self.folder)\n else:\n raise RuntimeError(\"Unsupported queue type {}\".format(self.queue))\n\n executor.update_parameters(**self.queue_parameters[self.queue])\n\n log.info(\"Sweep output dir : {}\".format(self.config.hydra.sweep.dir))\n path_str = str(self.config.hydra.sweep.dir)\n os.makedirs(path_str, exist_ok=True)\n if self.config.hydra.sweep.mode is not None:\n mode = int(str(self.config.hydra.sweep.mode), 8)\n os.chmod(path_str, mode=mode)\n\n jobs = []\n for job_num in range(num_jobs):\n sweep_override = list(job_overrides[job_num])\n log.info(\n \"\\t#{} : {}\".format(\n job_num,\n \" \".join(\n hydra.plugins.common.utils.filter_overrides(sweep_override)\n ),\n )\n )\n job = executor.submit(\n self.launch_job, sweep_override, \"hydra.sweep.dir\", job_num\n )\n jobs.append(job)\n\n return [j.results() for j in jobs]\n", "path": "plugins/submitit/hydra_plugins/submitit/submitit_launcher.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nimport logging\nimport os\n\nimport hydra._internal.utils\nimport hydra.plugins.common.utils\nfrom hydra.plugins import Launcher\n\nfrom omegaconf import open_dict\n\n# pylint: disable=C0103\nlog = logging.getLogger(__name__)\n\n\nclass SubmititLauncher(Launcher):\n def __init__(self, queue, folder, queue_parameters, conda_file=None, par_file=None):\n self.queue = queue\n self.queue_parameters = queue_parameters\n self.folder = folder\n self.conda_file = conda_file\n self.par_file = par_file\n self.config = None\n self.task_function = None\n self.verbose = None\n self.sweep_configs = None\n self.config_loader = None\n\n def setup(self, config, config_loader, task_function, verbose):\n self.config = config\n self.config_loader = config_loader\n self.task_function = task_function\n self.verbose = verbose\n\n def launch_job(self, sweep_overrides, job_dir_key, job_num):\n hydra.plugins.common.utils.configure_log(None, self.verbose)\n hydra.plugins.common.utils.setup_globals()\n sweep_config = self.config_loader.load_sweep_config(\n self.config, sweep_overrides\n )\n with open_dict(sweep_config):\n # Populate new job variables\n if \"SLURM_JOB_ID\" in os.environ:\n sweep_config.hydra.job.id = os.environ[\"SLURM_JOB_ID\"]\n elif \"CHRONOS_JOB_ID\" in os.environ:\n sweep_config.hydra.job.id = os.environ[\"CHRONOS_JOB_ID\"]\n else:\n sweep_config.hydra.job.id = \"unknown\"\n\n sweep_config.hydra.job.num = job_num\n sweep_config.hydra.job.override_dirname = hydra.plugins.common.utils.get_overrides_dirname(\n sweep_config.hydra.overrides.task\n )\n\n return hydra.plugins.common.utils.run_job(\n config=sweep_config,\n task_function=self.task_function,\n verbose=self.verbose,\n job_dir_key=job_dir_key,\n job_subdir_key=\"hydra.sweep.subdir\",\n )\n\n def launch(self, job_overrides):\n # lazy import to ensurue plugin discovery remains fast\n import submitit\n\n num_jobs = len(job_overrides)\n assert num_jobs > 0\n with open_dict(self.config):\n self.config.hydra.job.num_jobs = num_jobs\n\n if self.queue == \"auto\":\n executor = submitit.AutoExecutor(\n folder=self.folder, conda_file=self.conda_file, par_file=self.par_file\n )\n elif self.queue == \"slurm\":\n executor = submitit.SlurmExecutor(folder=self.folder)\n elif self.queue == \"chronos\":\n executor = submitit.ChronosExecutor(\n folder=self.folder, conda_file=self.conda_file, par_file=self.par_file\n )\n elif self.queue == \"local\":\n executor = submitit.LocalExecutor(folder=self.folder)\n else:\n raise RuntimeError(\"Unsupported queue type {}\".format(self.queue))\n\n executor.update_parameters(**self.queue_parameters[self.queue])\n\n log.info(\"Sweep output dir : {}\".format(self.config.hydra.sweep.dir))\n path_str = str(self.config.hydra.sweep.dir)\n os.makedirs(path_str, exist_ok=True)\n if \"mode\" in self.config.hydra.sweep:\n mode = int(str(self.config.hydra.sweep.mode), 8)\n os.chmod(path_str, mode=mode)\n\n jobs = []\n for job_num in range(num_jobs):\n sweep_override = list(job_overrides[job_num])\n log.info(\n \"\\t#{} : {}\".format(\n job_num,\n \" \".join(\n hydra.plugins.common.utils.filter_overrides(sweep_override)\n ),\n )\n )\n job = executor.submit(\n self.launch_job, sweep_override, \"hydra.sweep.dir\", job_num\n )\n jobs.append(job)\n\n return [j.results() for j in jobs]\n", "path": "plugins/submitit/hydra_plugins/submitit/submitit_launcher.py"}]} | 1,801 | 317 |
gh_patches_debug_40617 | rasdani/github-patches | git_diff | rasterio__rasterio-290 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Read default of masked=None breaks for some nodatavals
Back in #71, there was some controversy over the decision to have RasterReader.read's default operation as `masked=None`, which is both the slowest and most complicated behavior, and also is _different_ from `masked=False`.
Here's one case that took a colleague and me hours to debug: it's allowed to have an image whose NoDataValue is outside the range of the dtype. If I want to read one of these, the default `read` behavior throws an error.
```
import numpy as np
import rasterio
data = np.array([[0, 0, 1], [1, 2, 0]])
kwargs = {
'driver': u'GTiff',
'dtype': 'uint8',
'nodata': -1.7e+308,
    'height': 2,
'width': 3,
'count': 1
}
with rasterio.drivers():
with rasterio.open('foo.tif', 'w', **kwargs) as dst:
dst.write_band(1, data.astype(rasterio.uint8))
with rasterio.open('foo.tif') as src:
mask = src.read_band(1)
# TypeError: Fill value -1.7e+308 overflows dtype uint8
```
Obviously it's impossible for any raster values to have been flagged as NoDataValues, so the workaround is to ignore the NoDataValue entirely by setting `masked=False` (the distinction was a bit lost at first).
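
For completeness, a hedged sketch of that workaround against the same 0.x-era API used above (`read_band` accepts a `masked` keyword):

```python
import rasterio

with rasterio.open('foo.tif') as src:
    # masked=False returns a plain ndarray and never consults the
    # NoDataValue, so the out-of-range fill value is never needed.
    band = src.read_band(1, masked=False)
```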
We probably don't want to change the default now, but here's what we can do:
- at the very least, print a warning suggesting that you may want `masked=False`
- I prefer tweaking behavior so that `masked=None` returns a masked array if at least one of the bands has a NoDataValue that is within the `dtype` bounds (currently it only checks `nodataval is not None`); a sketch of such a check follows this list.
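
A minimal sketch of such a range check, using only NumPy (the helper name is illustrative, not an existing rasterio function):

```python
import numpy as np

def nodata_in_dtype_range(nodataval, dtype):
    # True if nodataval can be represented by dtype, i.e. masking it is meaningful.
    kind = np.dtype(dtype).kind
    if kind in ('i', 'u'):
        info = np.iinfo(dtype)
    elif kind == 'f':
        info = np.finfo(dtype)
    else:
        return False
    return info.min <= nodataval <= info.max
```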
Finally we should clarify the docstring: "masked if any of the nodatavals are not `None`", but "nodatavals" is not defined or referenced elsewhere in the documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/merge.py`
Content:
```
1 # Merge command.
2
3 import logging
4 import math
5 import os.path
6 import sys
7 import warnings
8
9 import click
10 from cligj import files_inout_arg, format_opt
11
12 import rasterio
13 from rasterio.rio.cli import cli
14 from rasterio.transform import Affine
15
16
17 @cli.command(short_help="Merge a stack of raster datasets.")
18 @files_inout_arg
19 @format_opt
20 @click.option('--bounds', nargs=4, type=float, default=None,
21 help="Output bounds: left, bottom, right, top.")
22 @click.option('--res', nargs=2, type=float, default=None,
23 help="Output dataset resolution: pixel width, pixel height")
24 @click.option('--nodata', '-n', type=float, default=None,
25 help="Override nodata values defined in input datasets")
26 @click.pass_context
27 def merge(ctx, files, driver, bounds, res, nodata):
28 """Copy valid pixels from input files to an output file.
29
30 All files must have the same number of bands, data type, and
31 coordinate reference system.
32
33 Input files are merged in their listed order using the reverse
34 painter's algorithm. If the output file exists, its values will be
35 overwritten by input values.
36
37 Geospatial bounds and resolution of a new output file in the
38 units of the input file coordinate reference system may be provided
39 and are otherwise taken from the first input file.
40 """
41 import numpy as np
42
43 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
44 logger = logging.getLogger('rio')
45
46 try:
47 with rasterio.drivers(CPL_DEBUG=verbosity>2):
48 output = files[-1]
49 files = files[:-1]
50
51 with rasterio.open(files[0]) as first:
52 first_res = first.res
53 kwargs = first.meta
54 kwargs.pop('affine')
55 nodataval = first.nodatavals[0]
56 dtype = first.dtypes[0]
57
58 if os.path.exists(output):
59 # TODO: prompt user to update existing file (-i option) like:
60 # overwrite b.tif? (y/n [n]) n
61 # not overwritten
62 dst = rasterio.open(output, 'r+')
63 nodataval = dst.nodatavals[0]
64 dtype = dst.dtypes[0]
65 dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)
66 else:
67 # Create new output file.
68 # Extent from option or extent of all inputs.
69 if not bounds:
70 # scan input files.
71 xs = []
72 ys = []
73 for f in files:
74 with rasterio.open(f) as src:
75 left, bottom, right, top = src.bounds
76 xs.extend([left, right])
77 ys.extend([bottom, top])
78 bounds = min(xs), min(ys), max(xs), max(ys)
79 output_transform = Affine.translation(bounds[0], bounds[3])
80
81 # Resolution/pixel size.
82 if not res:
83 res = first_res
84 output_transform *= Affine.scale(res[0], -res[1])
85
86 # Dataset shape.
87 output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))
88 output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))
89
90 kwargs['driver'] == driver
91 kwargs['transform'] = output_transform
92 kwargs['width'] = output_width
93 kwargs['height'] = output_height
94
95 dst = rasterio.open(output, 'w', **kwargs)
96 dest = np.zeros((first.count, output_height, output_width),
97 dtype=dtype)
98
99 if nodata is not None:
100 nodataval = nodata
101
102 if nodataval is not None:
103 # Only fill if the nodataval is within dtype's range.
104 inrange = False
105 if np.dtype(dtype).kind in ('i', 'u'):
106 info = np.iinfo(dtype)
107 inrange = (info.min <= nodataval <= info.max)
108 elif np.dtype(dtype).kind == 'f':
109 info = np.finfo(dtype)
110 inrange = (info.min <= nodataval <= info.max)
111 if inrange:
112 dest.fill(nodataval)
113 else:
114 warnings.warn(
115 "Input file's nodata value, %s, is beyond the valid "
116 "range of its data type, %s. Consider overriding it "
117 "using the --nodata option for better results." % (
118 nodataval, dtype))
119 else:
120 nodataval = 0
121
122 for fname in reversed(files):
123 with rasterio.open(fname) as src:
124 # Real World (tm) use of boundless reads.
125 # This approach uses the maximum amount of memory to solve
126 # the problem. Making it more efficient is a TODO.
127 window = src.window(*dst.bounds)
128 data = np.zeros_like(dest)
129 data = src.read(
130 out=data,
131 window=window,
132 boundless=True,
133 masked=True)
134 np.copyto(dest, data,
135 where=np.logical_and(
136 dest==nodataval, data.mask==False))
137
138 if dst.mode == 'r+':
139 data = dst.read(masked=True)
140 np.copyto(dest, data,
141 where=np.logical_and(
142 dest==nodataval, data.mask==False))
143
144 dst.write(dest)
145 dst.close()
146
147 sys.exit(0)
148 except Exception:
149 logger.exception("Failed. Exception caught")
150 sys.exit(1)
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py
--- a/rasterio/rio/merge.py
+++ b/rasterio/rio/merge.py
@@ -92,10 +92,16 @@
kwargs['width'] = output_width
kwargs['height'] = output_height
+ logger.debug("Kwargs: %r", kwargs)
+ logger.debug("bounds: %r", bounds)
+ logger.debug("Res: %r", res)
+
dst = rasterio.open(output, 'w', **kwargs)
dest = np.zeros((first.count, output_height, output_width),
dtype=dtype)
+ logger.debug("In merge, dest shape: %r", dest.shape)
+
if nodata is not None:
nodataval = nodata
@@ -119,27 +125,54 @@
else:
nodataval = 0
+ dst_w, dst_s, dst_e, dst_n = dst.bounds
+
for fname in reversed(files):
with rasterio.open(fname) as src:
# Real World (tm) use of boundless reads.
# This approach uses the maximum amount of memory to solve
# the problem. Making it more efficient is a TODO.
- window = src.window(*dst.bounds)
- data = np.zeros_like(dest)
- data = src.read(
- out=data,
- window=window,
- boundless=True,
+
+ # 1. Compute spatial intersection of destination
+ # and source.
+ src_w, src_s, src_e, src_n = src.bounds
+
+ int_w = src_w if src_w > dst_w else dst_w
+ int_s = src_s if src_s > dst_s else dst_s
+ int_e = src_e if src_e < dst_e else dst_e
+ int_n = src_n if src_n < dst_n else dst_n
+
+ # 2. Compute the source window.
+ src_window = src.window(int_w, int_s, int_e, int_n)
+
+ # 3. Compute the destination window.
+ dst_window = dst.window(int_w, int_s, int_e, int_n)
+
+ # 4. Initialize temp array.
+ temp = np.zeros(
+ (first.count,) + tuple(b - a for a, b in dst_window),
+ dtype=dtype)
+
+ temp = src.read(
+ out=temp,
+ window=src_window,
+ boundless=False,
masked=True)
- np.copyto(dest, data,
+
+ # 5. Copy elements of temp into dest.
+ roff, coff = dst.index(int_w, int_n)
+ h, w = temp.shape[-2:]
+
+ region = dest[:,roff:roff+h,coff:coff+w]
+ np.copyto(region, temp,
where=np.logical_and(
- dest==nodataval, data.mask==False))
+ region==nodataval, temp.mask==False))
if dst.mode == 'r+':
- data = dst.read(masked=True)
- np.copyto(dest, data,
+ temp = dst.read(masked=True)
+ np.copyto(dest, temp,
where=np.logical_and(
- dest==nodataval, data.mask==False))
+ dest==nodataval, temp.mask==False))
dst.write(dest)
dst.close()
| {"golden_diff": "diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py\n--- a/rasterio/rio/merge.py\n+++ b/rasterio/rio/merge.py\n@@ -92,10 +92,16 @@\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n \n+ logger.debug(\"Kwargs: %r\", kwargs)\n+ logger.debug(\"bounds: %r\", bounds)\n+ logger.debug(\"Res: %r\", res)\n+\n dst = rasterio.open(output, 'w', **kwargs)\n dest = np.zeros((first.count, output_height, output_width),\n dtype=dtype)\n \n+ logger.debug(\"In merge, dest shape: %r\", dest.shape)\n+\n if nodata is not None:\n nodataval = nodata\n \n@@ -119,27 +125,54 @@\n else:\n nodataval = 0\n \n+ dst_w, dst_s, dst_e, dst_n = dst.bounds\n+\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n # Real World (tm) use of boundless reads.\n # This approach uses the maximum amount of memory to solve\n # the problem. Making it more efficient is a TODO.\n- window = src.window(*dst.bounds)\n- data = np.zeros_like(dest)\n- data = src.read(\n- out=data,\n- window=window,\n- boundless=True,\n+\n+ # 1. Compute spatial intersection of destination\n+ # and source.\n+ src_w, src_s, src_e, src_n = src.bounds\n+\n+ int_w = src_w if src_w > dst_w else dst_w\n+ int_s = src_s if src_s > dst_s else dst_s\n+ int_e = src_e if src_e < dst_e else dst_e\n+ int_n = src_n if src_n < dst_n else dst_n\n+\n+ # 2. Compute the source window.\n+ src_window = src.window(int_w, int_s, int_e, int_n)\n+\n+ # 3. Compute the destination window.\n+ dst_window = dst.window(int_w, int_s, int_e, int_n)\n+\n+ # 4. Initialize temp array.\n+ temp = np.zeros(\n+ (first.count,) + tuple(b - a for a, b in dst_window),\n+ dtype=dtype)\n+\n+ temp = src.read(\n+ out=temp,\n+ window=src_window,\n+ boundless=False,\n masked=True)\n- np.copyto(dest, data,\n+\n+ # 5. Copy elements of temp into dest.\n+ roff, coff = dst.index(int_w, int_n)\n+ h, w = temp.shape[-2:]\n+\n+ region = dest[:,roff:roff+h,coff:coff+w]\n+ np.copyto(region, temp,\n where=np.logical_and(\n- dest==nodataval, data.mask==False))\n+ region==nodataval, temp.mask==False))\n \n if dst.mode == 'r+':\n- data = dst.read(masked=True)\n- np.copyto(dest, data,\n+ temp = dst.read(masked=True)\n+ np.copyto(dest, temp,\n where=np.logical_and(\n- dest==nodataval, data.mask==False))\n+ dest==nodataval, temp.mask==False))\n \n dst.write(dest)\n dst.close()\n", "issue": "Read default of masked=None breaks for some nodatavals\nBack in #71, there was some controversy over the decision to have RasterReader.read's default operation as `masked=None`, which is both the slowest and most complicated behavior, and also is _different_ from `masked=False`.\n\nHere's one case that took a colleage and I hours to debug: It's allowed have an image whose NoDataValue is outside the range of the dtype. 
If I want to read one of these, the default `read` behavior throws an error.\n\n```\nimport numpy as np\ndata = np.array([[0, 0, 1], [1, 2, 0]])\nkwargs = {\n 'driver': u'GTiff',\n 'dtype': 'uint8',\n 'nodata': -1.7e+308,\n 'height': 3,\n 'width': 3,\n 'count': 1\n}\nwith rasterio.drivers():\n with rasterio.open('foo.tif', 'w', **kwargs) as dst:\n dst.write_band(1, data.astype(rasterio.uint8))\n\n with rasterio.open('foo.tif') as src:\n mask = src.read_band(1)\n# TypeError: Fill value -1.7e+308 overflows dtype uint8\n```\n\nObviously it's impossible for any raster values to have been flagged as NoDataValues, so the workaround is to ignore the NoDataValue entirely by setting `masked=False` (the distinction was a bit lost at first).\n\nWe probably don't want to change the default now, but here's what we can do:\n- at the very least, print a warning suggesting that you may want `masked=False`\n- I prefer tweaking behavior so that `masked=None` returns a masked array if at least one of the bands has a NoDataValue that is within the `dtype` bounds. (currently it only checks `nodataval is not None`)\n\nFinally we should clarify the docstring: \"masked if any of the nodatavals are not `None`\", but \"nodatavals\" is not defined or referenced elsewhere in the documentation.\n\n", "before_files": [{"content": "# Merge command.\n\nimport logging\nimport math\nimport os.path\nimport sys\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nimport rasterio\nfrom rasterio.rio.cli import cli\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\n@format_opt\[email protected]('--bounds', nargs=4, type=float, default=None,\n help=\"Output bounds: left, bottom, right, top.\")\[email protected]('--res', nargs=2, type=float, default=None,\n help=\"Output dataset resolution: pixel width, pixel height\")\[email protected]('--nodata', '-n', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]_context\ndef merge(ctx, files, driver, bounds, res, nodata):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output = files[-1]\n files = files[:-1]\n\n with rasterio.open(files[0]) as first:\n first_res = first.res\n kwargs = first.meta\n kwargs.pop('affine')\n nodataval = first.nodatavals[0]\n dtype = first.dtypes[0]\n\n if os.path.exists(output):\n # TODO: prompt user to update existing file (-i option) like:\n # overwrite b.tif? 
(y/n [n]) n\n # not overwritten\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0]\n dtype = dst.dtypes[0]\n dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)\n else:\n # Create new output file.\n # Extent from option or extent of all inputs.\n if not bounds:\n # scan input files.\n xs = []\n ys = []\n for f in files:\n with rasterio.open(f) as src:\n left, bottom, right, top = src.bounds\n xs.extend([left, right])\n ys.extend([bottom, top])\n bounds = min(xs), min(ys), max(xs), max(ys)\n output_transform = Affine.translation(bounds[0], bounds[3])\n\n # Resolution/pixel size.\n if not res:\n res = first_res\n output_transform *= Affine.scale(res[0], -res[1])\n\n # Dataset shape.\n output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))\n output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))\n\n kwargs['driver'] == driver\n kwargs['transform'] = output_transform\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n\n dst = rasterio.open(output, 'w', **kwargs)\n dest = np.zeros((first.count, output_height, output_width),\n dtype=dtype)\n\n if nodata is not None:\n nodataval = nodata\n\n if nodataval is not None:\n # Only fill if the nodataval is within dtype's range.\n inrange = False\n if np.dtype(dtype).kind in ('i', 'u'):\n info = np.iinfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n elif np.dtype(dtype).kind == 'f':\n info = np.finfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n if inrange:\n dest.fill(nodataval)\n else:\n warnings.warn(\n \"Input file's nodata value, %s, is beyond the valid \"\n \"range of its data type, %s. Consider overriding it \"\n \"using the --nodata option for better results.\" % (\n nodataval, dtype))\n else:\n nodataval = 0\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n # Real World (tm) use of boundless reads.\n # This approach uses the maximum amount of memory to solve\n # the problem. Making it more efficient is a TODO.\n window = src.window(*dst.bounds)\n data = np.zeros_like(dest)\n data = src.read(\n out=data,\n window=window,\n boundless=True,\n masked=True)\n np.copyto(dest, data,\n where=np.logical_and(\n dest==nodataval, data.mask==False))\n\n if dst.mode == 'r+':\n data = dst.read(masked=True)\n np.copyto(dest, data,\n where=np.logical_and(\n dest==nodataval, data.mask==False))\n\n dst.write(dest)\n dst.close()\n\n sys.exit(0)\n except Exception:\n logger.exception(\"Failed. 
Exception caught\")\n sys.exit(1)\n", "path": "rasterio/rio/merge.py"}], "after_files": [{"content": "# Merge command.\n\nimport logging\nimport math\nimport os.path\nimport sys\nimport warnings\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nimport rasterio\nfrom rasterio.rio.cli import cli\nfrom rasterio.transform import Affine\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\n@format_opt\[email protected]('--bounds', nargs=4, type=float, default=None,\n help=\"Output bounds: left, bottom, right, top.\")\[email protected]('--res', nargs=2, type=float, default=None,\n help=\"Output dataset resolution: pixel width, pixel height\")\[email protected]('--nodata', '-n', type=float, default=None,\n help=\"Override nodata values defined in input datasets\")\[email protected]_context\ndef merge(ctx, files, driver, bounds, res, nodata):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output = files[-1]\n files = files[:-1]\n\n with rasterio.open(files[0]) as first:\n first_res = first.res\n kwargs = first.meta\n kwargs.pop('affine')\n nodataval = first.nodatavals[0]\n dtype = first.dtypes[0]\n\n if os.path.exists(output):\n # TODO: prompt user to update existing file (-i option) like:\n # overwrite b.tif? 
(y/n [n]) n\n # not overwritten\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0]\n dtype = dst.dtypes[0]\n dest = np.zeros((dst.count,) + dst.shape, dtype=dtype)\n else:\n # Create new output file.\n # Extent from option or extent of all inputs.\n if not bounds:\n # scan input files.\n xs = []\n ys = []\n for f in files:\n with rasterio.open(f) as src:\n left, bottom, right, top = src.bounds\n xs.extend([left, right])\n ys.extend([bottom, top])\n bounds = min(xs), min(ys), max(xs), max(ys)\n output_transform = Affine.translation(bounds[0], bounds[3])\n\n # Resolution/pixel size.\n if not res:\n res = first_res\n output_transform *= Affine.scale(res[0], -res[1])\n\n # Dataset shape.\n output_width = int(math.ceil((bounds[2]-bounds[0])/res[0]))\n output_height = int(math.ceil((bounds[3]-bounds[1])/res[1]))\n\n kwargs['driver'] == driver\n kwargs['transform'] = output_transform\n kwargs['width'] = output_width\n kwargs['height'] = output_height\n\n logger.debug(\"Kwargs: %r\", kwargs)\n logger.debug(\"bounds: %r\", bounds)\n logger.debug(\"Res: %r\", res)\n\n dst = rasterio.open(output, 'w', **kwargs)\n dest = np.zeros((first.count, output_height, output_width),\n dtype=dtype)\n\n logger.debug(\"In merge, dest shape: %r\", dest.shape)\n\n if nodata is not None:\n nodataval = nodata\n\n if nodataval is not None:\n # Only fill if the nodataval is within dtype's range.\n inrange = False\n if np.dtype(dtype).kind in ('i', 'u'):\n info = np.iinfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n elif np.dtype(dtype).kind == 'f':\n info = np.finfo(dtype)\n inrange = (info.min <= nodataval <= info.max)\n if inrange:\n dest.fill(nodataval)\n else:\n warnings.warn(\n \"Input file's nodata value, %s, is beyond the valid \"\n \"range of its data type, %s. Consider overriding it \"\n \"using the --nodata option for better results.\" % (\n nodataval, dtype))\n else:\n nodataval = 0\n\n dst_w, dst_s, dst_e, dst_n = dst.bounds\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n # Real World (tm) use of boundless reads.\n # This approach uses the maximum amount of memory to solve\n # the problem. Making it more efficient is a TODO.\n\n # 1. Compute spatial intersection of destination\n # and source.\n src_w, src_s, src_e, src_n = src.bounds\n\n int_w = src_w if src_w > dst_w else dst_w\n int_s = src_s if src_s > dst_s else dst_s\n int_e = src_e if src_e < dst_e else dst_e\n int_n = src_n if src_n < dst_n else dst_n\n\n # 2. Compute the source window.\n src_window = src.window(int_w, int_s, int_e, int_n)\n\n # 3. Compute the destination window.\n dst_window = dst.window(int_w, int_s, int_e, int_n)\n\n # 4. Initialize temp array.\n temp = np.zeros(\n (first.count,) + tuple(b - a for a, b in dst_window),\n dtype=dtype)\n\n temp = src.read(\n out=temp,\n window=src_window,\n boundless=False,\n masked=True)\n\n # 5. Copy elements of temp into dest.\n roff, coff = dst.index(int_w, int_n)\n h, w = temp.shape[-2:]\n\n region = dest[:,roff:roff+h,coff:coff+w]\n np.copyto(region, temp,\n where=np.logical_and(\n region==nodataval, temp.mask==False))\n\n if dst.mode == 'r+':\n temp = dst.read(masked=True)\n np.copyto(dest, temp,\n where=np.logical_and(\n dest==nodataval, temp.mask==False))\n\n dst.write(dest)\n dst.close()\n\n sys.exit(0)\n except Exception:\n logger.exception(\"Failed. Exception caught\")\n sys.exit(1)\n", "path": "rasterio/rio/merge.py"}]} | 2,260 | 769 |
gh_patches_debug_29392 | rasdani/github-patches | git_diff | goauthentik__authentik-7588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User write stages do not trigger a Model Updated event when writing to an existing user
**Describe the bug**
User write stages do not trigger a "Model updated" event for a user that already exists if the stage comes after an email stage with "activate pending user on success".
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new enrollment flow
2. Add a prompt stage to enter username, email, and name
3. Do a user write; make sure it always creates a new user and stores the user as inactive
4. Add an email stage, and enable "Activate pending user on success"
5. Add a prompt stage to set the password
6. Do another user write, make sure it never creates a user
7. Do a log in
During this process there is no Model Updated event sent in the second user write.
However, when you remove the email stage, it will trigger this event.
It also triggers the correct event for the user write in a user settings flow, for example, so the problem seems to be related to the activation stage.
**Expected behavior**
A user write should trigger a Model Updated event when modifying an existing user, just as it triggers a Model Created event when a new user is created.
**Version and Deployment (please complete the following information):**
- authentik version: 2023.1.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/events/middleware.py`
Content:
```
1 """Events middleware"""
2 from functools import partial
3 from threading import Thread
4 from typing import Any, Callable, Optional
5
6 from django.conf import settings
7 from django.contrib.sessions.models import Session
8 from django.core.exceptions import SuspiciousOperation
9 from django.db.models import Model
10 from django.db.models.signals import m2m_changed, post_save, pre_delete
11 from django.http import HttpRequest, HttpResponse
12 from guardian.models import UserObjectPermission
13
14 from authentik.core.models import (
15 AuthenticatedSession,
16 Group,
17 PropertyMapping,
18 Provider,
19 Source,
20 User,
21 UserSourceConnection,
22 )
23 from authentik.events.models import Event, EventAction, Notification
24 from authentik.events.utils import model_to_dict
25 from authentik.flows.models import FlowToken, Stage
26 from authentik.lib.sentry import before_send
27 from authentik.lib.utils.errors import exception_to_string
28 from authentik.outposts.models import OutpostServiceConnection
29 from authentik.policies.models import Policy, PolicyBindingModel
30 from authentik.providers.oauth2.models import AccessToken, AuthorizationCode, RefreshToken
31 from authentik.providers.scim.models import SCIMGroup, SCIMUser
32 from authentik.stages.authenticator_static.models import StaticToken
33
34 IGNORED_MODELS = (
35 Event,
36 Notification,
37 UserObjectPermission,
38 AuthenticatedSession,
39 StaticToken,
40 Session,
41 FlowToken,
42 Provider,
43 Source,
44 PropertyMapping,
45 UserSourceConnection,
46 Stage,
47 OutpostServiceConnection,
48 Policy,
49 PolicyBindingModel,
50 AuthorizationCode,
51 AccessToken,
52 RefreshToken,
53 SCIMUser,
54 SCIMGroup,
55 )
56
57
58 def should_log_model(model: Model) -> bool:
59 """Return true if operation on `model` should be logged"""
60 if model.__module__.startswith("silk"):
61 return False
62 return model.__class__ not in IGNORED_MODELS
63
64
65 def should_log_m2m(model: Model) -> bool:
66 """Return true if m2m operation should be logged"""
67 if model.__class__ in [User, Group]:
68 return True
69 return False
70
71
72 class EventNewThread(Thread):
73 """Create Event in background thread"""
74
75 action: str
76 request: HttpRequest
77 kwargs: dict[str, Any]
78 user: Optional[User] = None
79
80 def __init__(self, action: str, request: HttpRequest, user: Optional[User] = None, **kwargs):
81 super().__init__()
82 self.action = action
83 self.request = request
84 self.user = user
85 self.kwargs = kwargs
86
87 def run(self):
88 Event.new(self.action, **self.kwargs).from_http(self.request, user=self.user)
89
90
91 class AuditMiddleware:
92 """Register handlers for duration of request-response that log creation/update/deletion
93 of models"""
94
95 get_response: Callable[[HttpRequest], HttpResponse]
96
97 def __init__(self, get_response: Callable[[HttpRequest], HttpResponse]):
98 self.get_response = get_response
99
100 def connect(self, request: HttpRequest):
101 """Connect signal for automatic logging"""
102 if not hasattr(request, "user"):
103 return
104 if not getattr(request.user, "is_authenticated", False):
105 return
106 if not hasattr(request, "request_id"):
107 return
108 post_save_handler = partial(self.post_save_handler, user=request.user, request=request)
109 pre_delete_handler = partial(self.pre_delete_handler, user=request.user, request=request)
110 m2m_changed_handler = partial(self.m2m_changed_handler, user=request.user, request=request)
111 post_save.connect(
112 post_save_handler,
113 dispatch_uid=request.request_id,
114 weak=False,
115 )
116 pre_delete.connect(
117 pre_delete_handler,
118 dispatch_uid=request.request_id,
119 weak=False,
120 )
121 m2m_changed.connect(
122 m2m_changed_handler,
123 dispatch_uid=request.request_id,
124 weak=False,
125 )
126
127 def disconnect(self, request: HttpRequest):
128 """Disconnect signals"""
129 if not hasattr(request, "request_id"):
130 return
131 post_save.disconnect(dispatch_uid=request.request_id)
132 pre_delete.disconnect(dispatch_uid=request.request_id)
133 m2m_changed.disconnect(dispatch_uid=request.request_id)
134
135 def __call__(self, request: HttpRequest) -> HttpResponse:
136 self.connect(request)
137
138 response = self.get_response(request)
139
140 self.disconnect(request)
141 return response
142
143 def process_exception(self, request: HttpRequest, exception: Exception):
144 """Disconnect handlers in case of exception"""
145 self.disconnect(request)
146
147 if settings.DEBUG:
148 return
149 # Special case for SuspiciousOperation, we have a special event action for that
150 if isinstance(exception, SuspiciousOperation):
151 thread = EventNewThread(
152 EventAction.SUSPICIOUS_REQUEST,
153 request,
154 message=str(exception),
155 )
156 thread.run()
157 elif before_send({}, {"exc_info": (None, exception, None)}) is not None:
158 thread = EventNewThread(
159 EventAction.SYSTEM_EXCEPTION,
160 request,
161 message=exception_to_string(exception),
162 )
163 thread.run()
164
165 @staticmethod
166 def post_save_handler(
167 user: User, request: HttpRequest, sender, instance: Model, created: bool, **_
168 ):
169 """Signal handler for all object's post_save"""
170 if not should_log_model(instance):
171 return
172
173 action = EventAction.MODEL_CREATED if created else EventAction.MODEL_UPDATED
174 EventNewThread(action, request, user=user, model=model_to_dict(instance)).run()
175
176 @staticmethod
177 def pre_delete_handler(user: User, request: HttpRequest, sender, instance: Model, **_):
178 """Signal handler for all object's pre_delete"""
179 if not should_log_model(instance): # pragma: no cover
180 return
181
182 EventNewThread(
183 EventAction.MODEL_DELETED,
184 request,
185 user=user,
186 model=model_to_dict(instance),
187 ).run()
188
189 @staticmethod
190 def m2m_changed_handler(
191 user: User, request: HttpRequest, sender, instance: Model, action: str, **_
192 ):
193 """Signal handler for all object's m2m_changed"""
194 if action not in ["pre_add", "pre_remove", "post_clear"]:
195 return
196 if not should_log_m2m(instance):
197 return
198
199 EventNewThread(
200 EventAction.MODEL_UPDATED,
201 request,
202 user=user,
203 model=model_to_dict(instance),
204 ).run()
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/events/middleware.py b/authentik/events/middleware.py
--- a/authentik/events/middleware.py
+++ b/authentik/events/middleware.py
@@ -93,21 +93,30 @@
of models"""
get_response: Callable[[HttpRequest], HttpResponse]
+ anonymous_user: User = None
def __init__(self, get_response: Callable[[HttpRequest], HttpResponse]):
self.get_response = get_response
+ def _ensure_fallback_user(self):
+ """Defer fetching anonymous user until we have to"""
+ if self.anonymous_user:
+ return
+ from guardian.shortcuts import get_anonymous_user
+
+ self.anonymous_user = get_anonymous_user()
+
def connect(self, request: HttpRequest):
"""Connect signal for automatic logging"""
- if not hasattr(request, "user"):
- return
- if not getattr(request.user, "is_authenticated", False):
- return
+ self._ensure_fallback_user()
+ user = getattr(request, "user", self.anonymous_user)
+ if not user.is_authenticated:
+ user = self.anonymous_user
if not hasattr(request, "request_id"):
return
- post_save_handler = partial(self.post_save_handler, user=request.user, request=request)
- pre_delete_handler = partial(self.pre_delete_handler, user=request.user, request=request)
- m2m_changed_handler = partial(self.m2m_changed_handler, user=request.user, request=request)
+ post_save_handler = partial(self.post_save_handler, user=user, request=request)
+ pre_delete_handler = partial(self.pre_delete_handler, user=user, request=request)
+ m2m_changed_handler = partial(self.m2m_changed_handler, user=user, request=request)
post_save.connect(
post_save_handler,
dispatch_uid=request.request_id,
| {"golden_diff": "diff --git a/authentik/events/middleware.py b/authentik/events/middleware.py\n--- a/authentik/events/middleware.py\n+++ b/authentik/events/middleware.py\n@@ -93,21 +93,30 @@\n of models\"\"\"\n \n get_response: Callable[[HttpRequest], HttpResponse]\n+ anonymous_user: User = None\n \n def __init__(self, get_response: Callable[[HttpRequest], HttpResponse]):\n self.get_response = get_response\n \n+ def _ensure_fallback_user(self):\n+ \"\"\"Defer fetching anonymous user until we have to\"\"\"\n+ if self.anonymous_user:\n+ return\n+ from guardian.shortcuts import get_anonymous_user\n+\n+ self.anonymous_user = get_anonymous_user()\n+\n def connect(self, request: HttpRequest):\n \"\"\"Connect signal for automatic logging\"\"\"\n- if not hasattr(request, \"user\"):\n- return\n- if not getattr(request.user, \"is_authenticated\", False):\n- return\n+ self._ensure_fallback_user()\n+ user = getattr(request, \"user\", self.anonymous_user)\n+ if not user.is_authenticated:\n+ user = self.anonymous_user\n if not hasattr(request, \"request_id\"):\n return\n- post_save_handler = partial(self.post_save_handler, user=request.user, request=request)\n- pre_delete_handler = partial(self.pre_delete_handler, user=request.user, request=request)\n- m2m_changed_handler = partial(self.m2m_changed_handler, user=request.user, request=request)\n+ post_save_handler = partial(self.post_save_handler, user=user, request=request)\n+ pre_delete_handler = partial(self.pre_delete_handler, user=user, request=request)\n+ m2m_changed_handler = partial(self.m2m_changed_handler, user=user, request=request)\n post_save.connect(\n post_save_handler,\n dispatch_uid=request.request_id,\n", "issue": "User write stages do not trigger a Model Updated event when writing to an existing user\n**Describe the bug**\r\nUser write stages do not trigger a \"Model updated\" event when the user is existing if the stage is after an email stage with \"activate pending user on success\".\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Create a new enrollment flow\r\n2. Add a prompt stage to enter username email and name\r\n3. Do a user write, make sure it always creates a new user and store the user as inactive\r\n4. Add an email stage, and enable \"Activate pending user on success\"\r\n5. Add a prompt stage to set the password\r\n6. Do another user write, make sure it never creates a user\r\n7. 
Do a log in\r\n\r\nDuring this process there is no Model Updated event sent in the second user write.\r\nHowever, when you remove the email stage, it will trigger this event.\r\nIt also triggers the correct event in the user-write on a user settings flow for example, so it seems to be related to the activation stage.\r\n\r\n**Expected behavior**\r\nA user write should trigger a Model Updated event when modifying an existing user, as it also triggers a Model Created event when a new user is written to.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2023.1.2\r\n\n", "before_files": [{"content": "\"\"\"Events middleware\"\"\"\nfrom functools import partial\nfrom threading import Thread\nfrom typing import Any, Callable, Optional\n\nfrom django.conf import settings\nfrom django.contrib.sessions.models import Session\nfrom django.core.exceptions import SuspiciousOperation\nfrom django.db.models import Model\nfrom django.db.models.signals import m2m_changed, post_save, pre_delete\nfrom django.http import HttpRequest, HttpResponse\nfrom guardian.models import UserObjectPermission\n\nfrom authentik.core.models import (\n AuthenticatedSession,\n Group,\n PropertyMapping,\n Provider,\n Source,\n User,\n UserSourceConnection,\n)\nfrom authentik.events.models import Event, EventAction, Notification\nfrom authentik.events.utils import model_to_dict\nfrom authentik.flows.models import FlowToken, Stage\nfrom authentik.lib.sentry import before_send\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.outposts.models import OutpostServiceConnection\nfrom authentik.policies.models import Policy, PolicyBindingModel\nfrom authentik.providers.oauth2.models import AccessToken, AuthorizationCode, RefreshToken\nfrom authentik.providers.scim.models import SCIMGroup, SCIMUser\nfrom authentik.stages.authenticator_static.models import StaticToken\n\nIGNORED_MODELS = (\n Event,\n Notification,\n UserObjectPermission,\n AuthenticatedSession,\n StaticToken,\n Session,\n FlowToken,\n Provider,\n Source,\n PropertyMapping,\n UserSourceConnection,\n Stage,\n OutpostServiceConnection,\n Policy,\n PolicyBindingModel,\n AuthorizationCode,\n AccessToken,\n RefreshToken,\n SCIMUser,\n SCIMGroup,\n)\n\n\ndef should_log_model(model: Model) -> bool:\n \"\"\"Return true if operation on `model` should be logged\"\"\"\n if model.__module__.startswith(\"silk\"):\n return False\n return model.__class__ not in IGNORED_MODELS\n\n\ndef should_log_m2m(model: Model) -> bool:\n \"\"\"Return true if m2m operation should be logged\"\"\"\n if model.__class__ in [User, Group]:\n return True\n return False\n\n\nclass EventNewThread(Thread):\n \"\"\"Create Event in background thread\"\"\"\n\n action: str\n request: HttpRequest\n kwargs: dict[str, Any]\n user: Optional[User] = None\n\n def __init__(self, action: str, request: HttpRequest, user: Optional[User] = None, **kwargs):\n super().__init__()\n self.action = action\n self.request = request\n self.user = user\n self.kwargs = kwargs\n\n def run(self):\n Event.new(self.action, **self.kwargs).from_http(self.request, user=self.user)\n\n\nclass AuditMiddleware:\n \"\"\"Register handlers for duration of request-response that log creation/update/deletion\n of models\"\"\"\n\n get_response: Callable[[HttpRequest], HttpResponse]\n\n def __init__(self, get_response: Callable[[HttpRequest], HttpResponse]):\n self.get_response = get_response\n\n def connect(self, request: HttpRequest):\n \"\"\"Connect signal for automatic 
logging\"\"\"\n if not hasattr(request, \"user\"):\n return\n if not getattr(request.user, \"is_authenticated\", False):\n return\n if not hasattr(request, \"request_id\"):\n return\n post_save_handler = partial(self.post_save_handler, user=request.user, request=request)\n pre_delete_handler = partial(self.pre_delete_handler, user=request.user, request=request)\n m2m_changed_handler = partial(self.m2m_changed_handler, user=request.user, request=request)\n post_save.connect(\n post_save_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n pre_delete.connect(\n pre_delete_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n m2m_changed.connect(\n m2m_changed_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n\n def disconnect(self, request: HttpRequest):\n \"\"\"Disconnect signals\"\"\"\n if not hasattr(request, \"request_id\"):\n return\n post_save.disconnect(dispatch_uid=request.request_id)\n pre_delete.disconnect(dispatch_uid=request.request_id)\n m2m_changed.disconnect(dispatch_uid=request.request_id)\n\n def __call__(self, request: HttpRequest) -> HttpResponse:\n self.connect(request)\n\n response = self.get_response(request)\n\n self.disconnect(request)\n return response\n\n def process_exception(self, request: HttpRequest, exception: Exception):\n \"\"\"Disconnect handlers in case of exception\"\"\"\n self.disconnect(request)\n\n if settings.DEBUG:\n return\n # Special case for SuspiciousOperation, we have a special event action for that\n if isinstance(exception, SuspiciousOperation):\n thread = EventNewThread(\n EventAction.SUSPICIOUS_REQUEST,\n request,\n message=str(exception),\n )\n thread.run()\n elif before_send({}, {\"exc_info\": (None, exception, None)}) is not None:\n thread = EventNewThread(\n EventAction.SYSTEM_EXCEPTION,\n request,\n message=exception_to_string(exception),\n )\n thread.run()\n\n @staticmethod\n def post_save_handler(\n user: User, request: HttpRequest, sender, instance: Model, created: bool, **_\n ):\n \"\"\"Signal handler for all object's post_save\"\"\"\n if not should_log_model(instance):\n return\n\n action = EventAction.MODEL_CREATED if created else EventAction.MODEL_UPDATED\n EventNewThread(action, request, user=user, model=model_to_dict(instance)).run()\n\n @staticmethod\n def pre_delete_handler(user: User, request: HttpRequest, sender, instance: Model, **_):\n \"\"\"Signal handler for all object's pre_delete\"\"\"\n if not should_log_model(instance): # pragma: no cover\n return\n\n EventNewThread(\n EventAction.MODEL_DELETED,\n request,\n user=user,\n model=model_to_dict(instance),\n ).run()\n\n @staticmethod\n def m2m_changed_handler(\n user: User, request: HttpRequest, sender, instance: Model, action: str, **_\n ):\n \"\"\"Signal handler for all object's m2m_changed\"\"\"\n if action not in [\"pre_add\", \"pre_remove\", \"post_clear\"]:\n return\n if not should_log_m2m(instance):\n return\n\n EventNewThread(\n EventAction.MODEL_UPDATED,\n request,\n user=user,\n model=model_to_dict(instance),\n ).run()\n", "path": "authentik/events/middleware.py"}], "after_files": [{"content": "\"\"\"Events middleware\"\"\"\nfrom functools import partial\nfrom threading import Thread\nfrom typing import Any, Callable, Optional\n\nfrom django.conf import settings\nfrom django.contrib.sessions.models import Session\nfrom django.core.exceptions import SuspiciousOperation\nfrom django.db.models import Model\nfrom django.db.models.signals import m2m_changed, post_save, pre_delete\nfrom django.http import HttpRequest, 
HttpResponse\nfrom guardian.models import UserObjectPermission\n\nfrom authentik.core.models import (\n AuthenticatedSession,\n Group,\n PropertyMapping,\n Provider,\n Source,\n User,\n UserSourceConnection,\n)\nfrom authentik.events.models import Event, EventAction, Notification\nfrom authentik.events.utils import model_to_dict\nfrom authentik.flows.models import FlowToken, Stage\nfrom authentik.lib.sentry import before_send\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.outposts.models import OutpostServiceConnection\nfrom authentik.policies.models import Policy, PolicyBindingModel\nfrom authentik.providers.oauth2.models import AccessToken, AuthorizationCode, RefreshToken\nfrom authentik.providers.scim.models import SCIMGroup, SCIMUser\nfrom authentik.stages.authenticator_static.models import StaticToken\n\nIGNORED_MODELS = (\n Event,\n Notification,\n UserObjectPermission,\n AuthenticatedSession,\n StaticToken,\n Session,\n FlowToken,\n Provider,\n Source,\n PropertyMapping,\n UserSourceConnection,\n Stage,\n OutpostServiceConnection,\n Policy,\n PolicyBindingModel,\n AuthorizationCode,\n AccessToken,\n RefreshToken,\n SCIMUser,\n SCIMGroup,\n)\n\n\ndef should_log_model(model: Model) -> bool:\n \"\"\"Return true if operation on `model` should be logged\"\"\"\n if model.__module__.startswith(\"silk\"):\n return False\n return model.__class__ not in IGNORED_MODELS\n\n\ndef should_log_m2m(model: Model) -> bool:\n \"\"\"Return true if m2m operation should be logged\"\"\"\n if model.__class__ in [User, Group]:\n return True\n return False\n\n\nclass EventNewThread(Thread):\n \"\"\"Create Event in background thread\"\"\"\n\n action: str\n request: HttpRequest\n kwargs: dict[str, Any]\n user: Optional[User] = None\n\n def __init__(self, action: str, request: HttpRequest, user: Optional[User] = None, **kwargs):\n super().__init__()\n self.action = action\n self.request = request\n self.user = user\n self.kwargs = kwargs\n\n def run(self):\n Event.new(self.action, **self.kwargs).from_http(self.request, user=self.user)\n\n\nclass AuditMiddleware:\n \"\"\"Register handlers for duration of request-response that log creation/update/deletion\n of models\"\"\"\n\n get_response: Callable[[HttpRequest], HttpResponse]\n anonymous_user: User = None\n\n def __init__(self, get_response: Callable[[HttpRequest], HttpResponse]):\n self.get_response = get_response\n\n def _ensure_fallback_user(self):\n \"\"\"Defer fetching anonymous user until we have to\"\"\"\n if self.anonymous_user:\n return\n from guardian.shortcuts import get_anonymous_user\n\n self.anonymous_user = get_anonymous_user()\n\n def connect(self, request: HttpRequest):\n \"\"\"Connect signal for automatic logging\"\"\"\n self._ensure_fallback_user()\n user = getattr(request, \"user\", self.anonymous_user)\n if not user.is_authenticated:\n user = self.anonymous_user\n if not hasattr(request, \"request_id\"):\n return\n post_save_handler = partial(self.post_save_handler, user=user, request=request)\n pre_delete_handler = partial(self.pre_delete_handler, user=user, request=request)\n m2m_changed_handler = partial(self.m2m_changed_handler, user=user, request=request)\n post_save.connect(\n post_save_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n pre_delete.connect(\n pre_delete_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n m2m_changed.connect(\n m2m_changed_handler,\n dispatch_uid=request.request_id,\n weak=False,\n )\n\n def disconnect(self, request: HttpRequest):\n \"\"\"Disconnect 
signals\"\"\"\n if not hasattr(request, \"request_id\"):\n return\n post_save.disconnect(dispatch_uid=request.request_id)\n pre_delete.disconnect(dispatch_uid=request.request_id)\n m2m_changed.disconnect(dispatch_uid=request.request_id)\n\n def __call__(self, request: HttpRequest) -> HttpResponse:\n self.connect(request)\n\n response = self.get_response(request)\n\n self.disconnect(request)\n return response\n\n def process_exception(self, request: HttpRequest, exception: Exception):\n \"\"\"Disconnect handlers in case of exception\"\"\"\n self.disconnect(request)\n\n if settings.DEBUG:\n return\n # Special case for SuspiciousOperation, we have a special event action for that\n if isinstance(exception, SuspiciousOperation):\n thread = EventNewThread(\n EventAction.SUSPICIOUS_REQUEST,\n request,\n message=str(exception),\n )\n thread.run()\n elif before_send({}, {\"exc_info\": (None, exception, None)}) is not None:\n thread = EventNewThread(\n EventAction.SYSTEM_EXCEPTION,\n request,\n message=exception_to_string(exception),\n )\n thread.run()\n\n @staticmethod\n def post_save_handler(\n user: User, request: HttpRequest, sender, instance: Model, created: bool, **_\n ):\n \"\"\"Signal handler for all object's post_save\"\"\"\n if not should_log_model(instance):\n return\n\n action = EventAction.MODEL_CREATED if created else EventAction.MODEL_UPDATED\n EventNewThread(action, request, user=user, model=model_to_dict(instance)).run()\n\n @staticmethod\n def pre_delete_handler(user: User, request: HttpRequest, sender, instance: Model, **_):\n \"\"\"Signal handler for all object's pre_delete\"\"\"\n if not should_log_model(instance): # pragma: no cover\n return\n\n EventNewThread(\n EventAction.MODEL_DELETED,\n request,\n user=user,\n model=model_to_dict(instance),\n ).run()\n\n @staticmethod\n def m2m_changed_handler(\n user: User, request: HttpRequest, sender, instance: Model, action: str, **_\n ):\n \"\"\"Signal handler for all object's m2m_changed\"\"\"\n if action not in [\"pre_add\", \"pre_remove\", \"post_clear\"]:\n return\n if not should_log_m2m(instance):\n return\n\n EventNewThread(\n EventAction.MODEL_UPDATED,\n request,\n user=user,\n model=model_to_dict(instance),\n ).run()\n", "path": "authentik/events/middleware.py"}]} | 2,413 | 401 |
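A minimal sketch of the pattern applied in the diff above: the audit middleware lazily resolves guardian's anonymous user once and falls back to it whenever the request carries no authenticated user, so model events fired during unauthenticated flows (such as email activation) are still recorded. Only the names shown in the diff are taken from the project; the trimmed-down class below is an illustrative fragment, not the full implementation.

```python
from functools import partial

from django.db.models.signals import post_save
from django.http import HttpRequest


class AuditMiddleware:
    """Sketch of the fallback-user logic; handlers and disconnect logic omitted."""

    anonymous_user = None  # cached fallback actor

    def _ensure_fallback_user(self):
        # Defer the database lookup until the first request that needs it.
        if self.anonymous_user:
            return
        from guardian.shortcuts import get_anonymous_user

        self.anonymous_user = get_anonymous_user()

    def connect(self, request: HttpRequest):
        self._ensure_fallback_user()
        user = getattr(request, "user", self.anonymous_user)
        if not user.is_authenticated:
            user = self.anonymous_user  # attribute events to the anonymous user
        if not hasattr(request, "request_id"):
            return
        post_save.connect(
            partial(self.post_save_handler, user=user, request=request),
            dispatch_uid=request.request_id,
            weak=False,
        )
```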
gh_patches_debug_6542 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-3071 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update to use new version of Hologram
As an extension of https://github.com/fishtown-analytics/hologram/issues/40 -- support Mashumaro in Hologram -- this change pulls in version 0.0.13 of Hologram.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 def read(fname):
23 return open(os.path.join(os.path.dirname(__file__), fname)).read()
24
25
26 package_name = "dbt-core"
27 package_version = "0.19.0"
28 description = """dbt (data build tool) is a command line tool that helps \
29 analysts and engineers transform data in their warehouse more effectively"""
30
31
32 setup(
33 name=package_name,
34 version=package_version,
35 description=description,
36 long_description=description,
37 author="Fishtown Analytics",
38 author_email="[email protected]",
39 url="https://github.com/fishtown-analytics/dbt",
40 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
41 package_data={
42 'dbt': [
43 'include/index.html',
44 'include/global_project/dbt_project.yml',
45 'include/global_project/docs/*.md',
46 'include/global_project/macros/*.sql',
47 'include/global_project/macros/**/*.sql',
48 'include/global_project/macros/**/**/*.sql',
49 'py.typed',
50 ]
51 },
52 test_suite='test',
53 entry_points={
54 'console_scripts': [
55 'dbt = dbt.main:main',
56 ],
57 },
58 scripts=[
59 'scripts/dbt',
60 ],
61 install_requires=[
62 'Jinja2==2.11.2',
63 'PyYAML>=3.11',
64 'sqlparse>=0.2.3,<0.4',
65 'networkx>=2.3,<3',
66 'minimal-snowplow-tracker==0.0.2',
67 'colorama>=0.3.9,<0.4.4',
68 'agate>=1.6,<2',
69 'isodate>=0.6,<0.7',
70 'json-rpc>=1.12,<2',
71 'werkzeug>=0.15,<2.0',
72 'dataclasses==0.6;python_version<"3.7"',
73 # 'hologram==0.0.12', # must be updated prior to release
74 'logbook>=1.5,<1.6',
75 'typing-extensions>=3.7.4,<3.8',
76 # the following are all to match snowflake-connector-python
77 'requests>=2.18.0,<2.24.0',
78 'idna<2.10',
79 'cffi>=1.9,<1.15',
80 ],
81 zip_safe=False,
82 classifiers=[
83 'Development Status :: 5 - Production/Stable',
84
85 'License :: OSI Approved :: Apache Software License',
86
87 'Operating System :: Microsoft :: Windows',
88 'Operating System :: MacOS :: MacOS X',
89 'Operating System :: POSIX :: Linux',
90
91 'Programming Language :: Python :: 3.6',
92 'Programming Language :: Python :: 3.7',
93 'Programming Language :: Python :: 3.8',
94 'Programming Language :: Python :: 3.9',
95 ],
96 python_requires=">=3.6.3",
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -70,7 +70,7 @@
'json-rpc>=1.12,<2',
'werkzeug>=0.15,<2.0',
'dataclasses==0.6;python_version<"3.7"',
- # 'hologram==0.0.12', # must be updated prior to release
+ 'hologram==0.0.13',
'logbook>=1.5,<1.6',
'typing-extensions>=3.7.4,<3.8',
# the following are all to match snowflake-connector-python
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -70,7 +70,7 @@\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<2.0',\n 'dataclasses==0.6;python_version<\"3.7\"',\n- # 'hologram==0.0.12', # must be updated prior to release\n+ 'hologram==0.0.13',\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n", "issue": "Update to use new version of Hologram\nAs an extension of https://github.com/fishtown-analytics/hologram/issues/40 -- support Mashumaro in Hologram -- makes changes to pull in version 0.0.13 of Hologram. \n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"0.19.0\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/index.html',\n 'include/global_project/dbt_project.yml',\n 'include/global_project/docs/*.md',\n 'include/global_project/macros/*.sql',\n 'include/global_project/macros/**/*.sql',\n 'include/global_project/macros/**/**/*.sql',\n 'py.typed',\n ]\n },\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.2',\n 'PyYAML>=3.11',\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n 'colorama>=0.3.9,<0.4.4',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<2.0',\n 'dataclasses==0.6;python_version<\"3.7\"',\n # 'hologram==0.0.12', # must be updated prior to release\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n 'requests>=2.18.0,<2.24.0',\n 'idna<2.10',\n 'cffi>=1.9,<1.15',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n 
sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"0.19.0\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/index.html',\n 'include/global_project/dbt_project.yml',\n 'include/global_project/docs/*.md',\n 'include/global_project/macros/*.sql',\n 'include/global_project/macros/**/*.sql',\n 'include/global_project/macros/**/**/*.sql',\n 'py.typed',\n ]\n },\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.2',\n 'PyYAML>=3.11',\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n 'colorama>=0.3.9,<0.4.4',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<2.0',\n 'dataclasses==0.6;python_version<\"3.7\"',\n 'hologram==0.0.13',\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n 'requests>=2.18.0,<2.24.0',\n 'idna<2.10',\n 'cffi>=1.9,<1.15',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}]} | 1,296 | 159 |
gh_patches_debug_35137 | rasdani/github-patches | git_diff | LibraryOfCongress__concordia-354 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MetricsModelMixin is missing from several models
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `concordia/models.py`
Content:
```
1 from logging import getLogger
2
3 from django.conf import settings
4 from django.contrib.auth.models import User
5 from django.contrib.postgres.fields import JSONField
6 from django.core.validators import RegexValidator
7 from django.db import models
8 from django.urls import reverse
9 from django.utils import timezone
10 from django_prometheus_metrics.models import MetricsModelMixin
11
12
13 logger = getLogger(__name__)
14
15 metadata_default = dict
16
17 User._meta.get_field("email").__dict__["_unique"] = True
18
19
20 class UserProfile(MetricsModelMixin("userprofile"), models.Model):
21 user = models.ForeignKey(User, on_delete=models.CASCADE)
22
23
24 class Status:
25 # FIXME: determine whether this is actually universally applicable to all of
26 # our models or should be split into subsets
27 EDIT = "Edit"
28 SUBMITTED = "Submitted"
29 COMPLETED = "Completed"
30 INACTIVE = "Inactive"
31 ACTIVE = "Active"
32
33 DEFAULT = EDIT
34 CHOICES = (
35 (EDIT, "Open for Edit"),
36 (SUBMITTED, "Submitted for Review"),
37 (COMPLETED, "Transcription Completed"),
38 (INACTIVE, "Inactive"),
39 (ACTIVE, "Active"),
40 )
41
42 #: Convenience lookup dictionary for CHOICES:
43 CHOICE_MAP = dict(CHOICES)
44
45
46 class MediaType:
47 IMAGE = "IMG"
48 AUDIO = "AUD"
49 VIDEO = "VID"
50
51 CHOICES = ((IMAGE, "Image"), (AUDIO, "Audio"), (VIDEO, "Video"))
52
53
54 class PublicationManager(models.Manager):
55 def published(self):
56 return self.get_queryset().filter(published=True)
57
58 def unpublished(self):
59 return self.get_queryset().filter(published=False)
60
61
62 class Campaign(MetricsModelMixin("campaign"), models.Model):
63 objects = PublicationManager()
64
65 published = models.BooleanField(default=False, blank=True)
66 status = models.CharField(
67 max_length=10, choices=Status.CHOICES, default=Status.DEFAULT
68 )
69
70 title = models.CharField(max_length=80)
71 slug = models.SlugField(max_length=80, unique=True)
72 description = models.TextField(blank=True)
73
74 start_date = models.DateTimeField(null=True, blank=True)
75 end_date = models.DateTimeField(null=True, blank=True)
76
77 metadata = JSONField(default=metadata_default, blank=True, null=True)
78
79 def __str__(self):
80 return self.title
81
82 def get_absolute_url(self):
83 # FIXME: change this with https://github.com/LibraryOfCongress/concordia/issues/242
84 return reverse("transcriptions:campaign", args=(self.slug,))
85
86
87 class Project(models.Model):
88 objects = PublicationManager()
89
90 campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)
91 title = models.CharField(max_length=80)
92 slug = models.SlugField(max_length=80)
93
94 category = models.CharField(max_length=12, blank=True)
95 metadata = JSONField(default=metadata_default, blank=True, null=True)
96 status = models.CharField(
97 max_length=10, choices=Status.CHOICES, default=Status.DEFAULT
98 )
99 published = models.BooleanField(default=False, blank=True)
100
101 class Meta:
102 unique_together = (("slug", "campaign"),)
103 ordering = ["title"]
104
105 def __str__(self):
106 return self.title
107
108 def get_absolute_url(self):
109 return reverse(
110 "transcriptions:project-detail",
111 kwargs={"campaign_slug": self.campaign.slug, "slug": self.slug},
112 )
113
114
115 class Item(models.Model):
116 objects = PublicationManager()
117
118 project = models.ForeignKey(
119 Project, on_delete=models.CASCADE, blank=True, null=True
120 )
121
122 published = models.BooleanField(default=False, blank=True)
123
124 title = models.CharField(max_length=300)
125 item_url = models.URLField(max_length=255)
126 item_id = models.CharField(
127 max_length=100, help_text="Unique item ID assigned by the upstream source"
128 )
129 description = models.TextField(blank=True)
130 metadata = JSONField(
131 default=metadata_default,
132 blank=True,
133 null=True,
134 help_text="Raw metadata returned by the remote API",
135 )
136 thumbnail_url = models.URLField(max_length=255, blank=True, null=True)
137 status = models.CharField(
138 max_length=10, choices=Status.CHOICES, default=Status.DEFAULT
139 )
140
141 class Meta:
142 unique_together = (("item_id", "project"),)
143
144 def __str__(self):
145 return f"{self.item_id}: {self.title}"
146
147 def get_absolute_url(self):
148 return reverse(
149 "transcriptions:item",
150 kwargs={
151 "campaign_slug": self.project.campaign.slug,
152 "project_slug": self.project.slug,
153 "item_id": self.item_id,
154 },
155 )
156
157
158 class Asset(models.Model):
159 item = models.ForeignKey(Item, on_delete=models.CASCADE)
160
161 title = models.CharField(max_length=100)
162 slug = models.SlugField(max_length=100)
163
164 description = models.TextField(blank=True)
165 # TODO: do we really need this given that we import in lock-step sequence
166 # numbers with a fixed extension?
167 media_url = models.TextField("Path component of the URL", max_length=255)
168 media_type = models.CharField(
169 max_length=4, choices=MediaType.CHOICES, db_index=True
170 )
171 sequence = models.PositiveIntegerField(default=1)
172
173 # The original ID of the image resource on loc.gov
174 resource_id = models.CharField(max_length=100, blank=True, null=True)
175 # The URL used to download this image from loc.gov
176 download_url = models.CharField(max_length=255, blank=True, null=True)
177
178 metadata = JSONField(default=metadata_default, blank=True, null=True)
179 status = models.CharField(
180 max_length=10, choices=Status.CHOICES, default=Status.DEFAULT
181 )
182
183 class Meta:
184 unique_together = (("slug", "item"),)
185 ordering = ["title", "sequence"]
186
187 def __str__(self):
188 return self.title
189
190 def get_absolute_url(self):
191 return reverse(
192 "transcriptions:asset-detail",
193 kwargs={
194 "campaign_slug": self.item.project.campaign.slug,
195 "project_slug": self.item.project.slug,
196 "item_id": self.item.item_id,
197 "slug": self.slug,
198 },
199 )
200
201
202 class Tag(models.Model):
203 TAG_VALIDATOR = RegexValidator(r"^[- _'\w]{1,50}$")
204 value = models.CharField(max_length=50, validators=[TAG_VALIDATOR])
205
206 def __str__(self):
207 return self.value
208
209
210 class UserAssetTagCollection(models.Model):
211 asset = models.ForeignKey(Asset, on_delete=models.CASCADE)
212
213 user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
214
215 tags = models.ManyToManyField(Tag, blank=True)
216 created_on = models.DateTimeField(auto_now_add=True)
217 updated_on = models.DateTimeField(auto_now=True)
218
219 def __str__(self):
220 return "{} - {}".format(self.asset, self.user)
221
222
223 class Transcription(models.Model):
224 asset = models.ForeignKey(Asset, on_delete=models.CASCADE)
225
226 user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
227
228 text = models.TextField(blank=True)
229 status = models.CharField(
230 max_length=10, choices=Status.CHOICES, default=Status.DEFAULT
231 )
232
233 created_on = models.DateTimeField(auto_now_add=True)
234 updated_on = models.DateTimeField(auto_now=True)
235
236 def __str__(self):
237 return str(self.asset)
238
239
240 class AssetTranscriptionReservation(models.Model):
241 """
242 Records a user's reservation to transcribe a particular asset
243 """
244
245 asset = models.OneToOneField(Asset, on_delete=models.CASCADE)
246 user = models.ForeignKey(User, on_delete=models.CASCADE)
247
248 created_on = models.DateTimeField(editable=False, auto_now_add=True)
249 updated_on = models.DateTimeField(auto_now=True)
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/concordia/models.py b/concordia/models.py
--- a/concordia/models.py
+++ b/concordia/models.py
@@ -84,7 +84,7 @@
return reverse("transcriptions:campaign", args=(self.slug,))
-class Project(models.Model):
+class Project(MetricsModelMixin("project"), models.Model):
objects = PublicationManager()
campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)
@@ -112,7 +112,7 @@
)
-class Item(models.Model):
+class Item(MetricsModelMixin("item"), models.Model):
objects = PublicationManager()
project = models.ForeignKey(
@@ -155,7 +155,7 @@
)
-class Asset(models.Model):
+class Asset(MetricsModelMixin("asset"), models.Model):
item = models.ForeignKey(Item, on_delete=models.CASCADE)
title = models.CharField(max_length=100)
@@ -199,7 +199,7 @@
)
-class Tag(models.Model):
+class Tag(MetricsModelMixin("tag"), models.Model):
TAG_VALIDATOR = RegexValidator(r"^[- _'\w]{1,50}$")
value = models.CharField(max_length=50, validators=[TAG_VALIDATOR])
@@ -207,7 +207,9 @@
return self.value
-class UserAssetTagCollection(models.Model):
+class UserAssetTagCollection(
+ MetricsModelMixin("user_asset_tag_collection"), models.Model
+):
asset = models.ForeignKey(Asset, on_delete=models.CASCADE)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
@@ -220,7 +222,7 @@
return "{} - {}".format(self.asset, self.user)
-class Transcription(models.Model):
+class Transcription(MetricsModelMixin("transcription"), models.Model):
asset = models.ForeignKey(Asset, on_delete=models.CASCADE)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
| {"golden_diff": "diff --git a/concordia/models.py b/concordia/models.py\n--- a/concordia/models.py\n+++ b/concordia/models.py\n@@ -84,7 +84,7 @@\n return reverse(\"transcriptions:campaign\", args=(self.slug,))\n \n \n-class Project(models.Model):\n+class Project(MetricsModelMixin(\"project\"), models.Model):\n objects = PublicationManager()\n \n campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)\n@@ -112,7 +112,7 @@\n )\n \n \n-class Item(models.Model):\n+class Item(MetricsModelMixin(\"item\"), models.Model):\n objects = PublicationManager()\n \n project = models.ForeignKey(\n@@ -155,7 +155,7 @@\n )\n \n \n-class Asset(models.Model):\n+class Asset(MetricsModelMixin(\"asset\"), models.Model):\n item = models.ForeignKey(Item, on_delete=models.CASCADE)\n \n title = models.CharField(max_length=100)\n@@ -199,7 +199,7 @@\n )\n \n \n-class Tag(models.Model):\n+class Tag(MetricsModelMixin(\"tag\"), models.Model):\n TAG_VALIDATOR = RegexValidator(r\"^[- _'\\w]{1,50}$\")\n value = models.CharField(max_length=50, validators=[TAG_VALIDATOR])\n \n@@ -207,7 +207,9 @@\n return self.value\n \n \n-class UserAssetTagCollection(models.Model):\n+class UserAssetTagCollection(\n+ MetricsModelMixin(\"user_asset_tag_collection\"), models.Model\n+):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n \n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n@@ -220,7 +222,7 @@\n return \"{} - {}\".format(self.asset, self.user)\n \n \n-class Transcription(models.Model):\n+class Transcription(MetricsModelMixin(\"transcription\"), models.Model):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n \n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n", "issue": "MetricsModelMixin is missing from several models\n\n", "before_files": [{"content": "from logging import getLogger\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.contrib.postgres.fields import JSONField\nfrom django.core.validators import RegexValidator\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django_prometheus_metrics.models import MetricsModelMixin\n\n\nlogger = getLogger(__name__)\n\nmetadata_default = dict\n\nUser._meta.get_field(\"email\").__dict__[\"_unique\"] = True\n\n\nclass UserProfile(MetricsModelMixin(\"userprofile\"), models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n\n\nclass Status:\n # FIXME: determine whether this is actually universally applicable to all of\n # our models or should be split into subsets\n EDIT = \"Edit\"\n SUBMITTED = \"Submitted\"\n COMPLETED = \"Completed\"\n INACTIVE = \"Inactive\"\n ACTIVE = \"Active\"\n\n DEFAULT = EDIT\n CHOICES = (\n (EDIT, \"Open for Edit\"),\n (SUBMITTED, \"Submitted for Review\"),\n (COMPLETED, \"Transcription Completed\"),\n (INACTIVE, \"Inactive\"),\n (ACTIVE, \"Active\"),\n )\n\n #: Convenience lookup dictionary for CHOICES:\n CHOICE_MAP = dict(CHOICES)\n\n\nclass MediaType:\n IMAGE = \"IMG\"\n AUDIO = \"AUD\"\n VIDEO = \"VID\"\n\n CHOICES = ((IMAGE, \"Image\"), (AUDIO, \"Audio\"), (VIDEO, \"Video\"))\n\n\nclass PublicationManager(models.Manager):\n def published(self):\n return self.get_queryset().filter(published=True)\n\n def unpublished(self):\n return self.get_queryset().filter(published=False)\n\n\nclass Campaign(MetricsModelMixin(\"campaign\"), models.Model):\n objects = PublicationManager()\n\n published = models.BooleanField(default=False, blank=True)\n status = 
models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n title = models.CharField(max_length=80)\n slug = models.SlugField(max_length=80, unique=True)\n description = models.TextField(blank=True)\n\n start_date = models.DateTimeField(null=True, blank=True)\n end_date = models.DateTimeField(null=True, blank=True)\n\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n # FIXME: change this with https://github.com/LibraryOfCongress/concordia/issues/242\n return reverse(\"transcriptions:campaign\", args=(self.slug,))\n\n\nclass Project(models.Model):\n objects = PublicationManager()\n\n campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)\n title = models.CharField(max_length=80)\n slug = models.SlugField(max_length=80)\n\n category = models.CharField(max_length=12, blank=True)\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n published = models.BooleanField(default=False, blank=True)\n\n class Meta:\n unique_together = ((\"slug\", \"campaign\"),)\n ordering = [\"title\"]\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:project-detail\",\n kwargs={\"campaign_slug\": self.campaign.slug, \"slug\": self.slug},\n )\n\n\nclass Item(models.Model):\n objects = PublicationManager()\n\n project = models.ForeignKey(\n Project, on_delete=models.CASCADE, blank=True, null=True\n )\n\n published = models.BooleanField(default=False, blank=True)\n\n title = models.CharField(max_length=300)\n item_url = models.URLField(max_length=255)\n item_id = models.CharField(\n max_length=100, help_text=\"Unique item ID assigned by the upstream source\"\n )\n description = models.TextField(blank=True)\n metadata = JSONField(\n default=metadata_default,\n blank=True,\n null=True,\n help_text=\"Raw metadata returned by the remote API\",\n )\n thumbnail_url = models.URLField(max_length=255, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n class Meta:\n unique_together = ((\"item_id\", \"project\"),)\n\n def __str__(self):\n return f\"{self.item_id}: {self.title}\"\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:item\",\n kwargs={\n \"campaign_slug\": self.project.campaign.slug,\n \"project_slug\": self.project.slug,\n \"item_id\": self.item_id,\n },\n )\n\n\nclass Asset(models.Model):\n item = models.ForeignKey(Item, on_delete=models.CASCADE)\n\n title = models.CharField(max_length=100)\n slug = models.SlugField(max_length=100)\n\n description = models.TextField(blank=True)\n # TODO: do we really need this given that we import in lock-step sequence\n # numbers with a fixed extension?\n media_url = models.TextField(\"Path component of the URL\", max_length=255)\n media_type = models.CharField(\n max_length=4, choices=MediaType.CHOICES, db_index=True\n )\n sequence = models.PositiveIntegerField(default=1)\n\n # The original ID of the image resource on loc.gov\n resource_id = models.CharField(max_length=100, blank=True, null=True)\n # The URL used to download this image from loc.gov\n download_url = models.CharField(max_length=255, blank=True, null=True)\n\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, 
default=Status.DEFAULT\n )\n\n class Meta:\n unique_together = ((\"slug\", \"item\"),)\n ordering = [\"title\", \"sequence\"]\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:asset-detail\",\n kwargs={\n \"campaign_slug\": self.item.project.campaign.slug,\n \"project_slug\": self.item.project.slug,\n \"item_id\": self.item.item_id,\n \"slug\": self.slug,\n },\n )\n\n\nclass Tag(models.Model):\n TAG_VALIDATOR = RegexValidator(r\"^[- _'\\w]{1,50}$\")\n value = models.CharField(max_length=50, validators=[TAG_VALIDATOR])\n\n def __str__(self):\n return self.value\n\n\nclass UserAssetTagCollection(models.Model):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n\n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n\n tags = models.ManyToManyField(Tag, blank=True)\n created_on = models.DateTimeField(auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return \"{} - {}\".format(self.asset, self.user)\n\n\nclass Transcription(models.Model):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n\n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n\n text = models.TextField(blank=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n created_on = models.DateTimeField(auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return str(self.asset)\n\n\nclass AssetTranscriptionReservation(models.Model):\n \"\"\"\n Records a user's reservation to transcribe a particular asset\n \"\"\"\n\n asset = models.OneToOneField(Asset, on_delete=models.CASCADE)\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n\n created_on = models.DateTimeField(editable=False, auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n", "path": "concordia/models.py"}], "after_files": [{"content": "from logging import getLogger\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.contrib.postgres.fields import JSONField\nfrom django.core.validators import RegexValidator\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django_prometheus_metrics.models import MetricsModelMixin\n\n\nlogger = getLogger(__name__)\n\nmetadata_default = dict\n\nUser._meta.get_field(\"email\").__dict__[\"_unique\"] = True\n\n\nclass UserProfile(MetricsModelMixin(\"userprofile\"), models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n\n\nclass Status:\n # FIXME: determine whether this is actually universally applicable to all of\n # our models or should be split into subsets\n EDIT = \"Edit\"\n SUBMITTED = \"Submitted\"\n COMPLETED = \"Completed\"\n INACTIVE = \"Inactive\"\n ACTIVE = \"Active\"\n\n DEFAULT = EDIT\n CHOICES = (\n (EDIT, \"Open for Edit\"),\n (SUBMITTED, \"Submitted for Review\"),\n (COMPLETED, \"Transcription Completed\"),\n (INACTIVE, \"Inactive\"),\n (ACTIVE, \"Active\"),\n )\n\n #: Convenience lookup dictionary for CHOICES:\n CHOICE_MAP = dict(CHOICES)\n\n\nclass MediaType:\n IMAGE = \"IMG\"\n AUDIO = \"AUD\"\n VIDEO = \"VID\"\n\n CHOICES = ((IMAGE, \"Image\"), (AUDIO, \"Audio\"), (VIDEO, \"Video\"))\n\n\nclass PublicationManager(models.Manager):\n def published(self):\n return self.get_queryset().filter(published=True)\n\n def unpublished(self):\n return self.get_queryset().filter(published=False)\n\n\nclass 
Campaign(MetricsModelMixin(\"campaign\"), models.Model):\n objects = PublicationManager()\n\n published = models.BooleanField(default=False, blank=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n title = models.CharField(max_length=80)\n slug = models.SlugField(max_length=80, unique=True)\n description = models.TextField(blank=True)\n\n start_date = models.DateTimeField(null=True, blank=True)\n end_date = models.DateTimeField(null=True, blank=True)\n\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n # FIXME: change this with https://github.com/LibraryOfCongress/concordia/issues/242\n return reverse(\"transcriptions:campaign\", args=(self.slug,))\n\n\nclass Project(MetricsModelMixin(\"project\"), models.Model):\n objects = PublicationManager()\n\n campaign = models.ForeignKey(Campaign, on_delete=models.CASCADE)\n title = models.CharField(max_length=80)\n slug = models.SlugField(max_length=80)\n\n category = models.CharField(max_length=12, blank=True)\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n published = models.BooleanField(default=False, blank=True)\n\n class Meta:\n unique_together = ((\"slug\", \"campaign\"),)\n ordering = [\"title\"]\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:project-detail\",\n kwargs={\"campaign_slug\": self.campaign.slug, \"slug\": self.slug},\n )\n\n\nclass Item(MetricsModelMixin(\"item\"), models.Model):\n objects = PublicationManager()\n\n project = models.ForeignKey(\n Project, on_delete=models.CASCADE, blank=True, null=True\n )\n\n published = models.BooleanField(default=False, blank=True)\n\n title = models.CharField(max_length=300)\n item_url = models.URLField(max_length=255)\n item_id = models.CharField(\n max_length=100, help_text=\"Unique item ID assigned by the upstream source\"\n )\n description = models.TextField(blank=True)\n metadata = JSONField(\n default=metadata_default,\n blank=True,\n null=True,\n help_text=\"Raw metadata returned by the remote API\",\n )\n thumbnail_url = models.URLField(max_length=255, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n class Meta:\n unique_together = ((\"item_id\", \"project\"),)\n\n def __str__(self):\n return f\"{self.item_id}: {self.title}\"\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:item\",\n kwargs={\n \"campaign_slug\": self.project.campaign.slug,\n \"project_slug\": self.project.slug,\n \"item_id\": self.item_id,\n },\n )\n\n\nclass Asset(MetricsModelMixin(\"asset\"), models.Model):\n item = models.ForeignKey(Item, on_delete=models.CASCADE)\n\n title = models.CharField(max_length=100)\n slug = models.SlugField(max_length=100)\n\n description = models.TextField(blank=True)\n # TODO: do we really need this given that we import in lock-step sequence\n # numbers with a fixed extension?\n media_url = models.TextField(\"Path component of the URL\", max_length=255)\n media_type = models.CharField(\n max_length=4, choices=MediaType.CHOICES, db_index=True\n )\n sequence = models.PositiveIntegerField(default=1)\n\n # The original ID of the image resource on loc.gov\n resource_id = models.CharField(max_length=100, blank=True, null=True)\n # The URL used to download this 
image from loc.gov\n download_url = models.CharField(max_length=255, blank=True, null=True)\n\n metadata = JSONField(default=metadata_default, blank=True, null=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n class Meta:\n unique_together = ((\"slug\", \"item\"),)\n ordering = [\"title\", \"sequence\"]\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\n \"transcriptions:asset-detail\",\n kwargs={\n \"campaign_slug\": self.item.project.campaign.slug,\n \"project_slug\": self.item.project.slug,\n \"item_id\": self.item.item_id,\n \"slug\": self.slug,\n },\n )\n\n\nclass Tag(MetricsModelMixin(\"tag\"), models.Model):\n TAG_VALIDATOR = RegexValidator(r\"^[- _'\\w]{1,50}$\")\n value = models.CharField(max_length=50, validators=[TAG_VALIDATOR])\n\n def __str__(self):\n return self.value\n\n\nclass UserAssetTagCollection(\n MetricsModelMixin(\"user_asset_tag_collection\"), models.Model\n):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n\n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n\n tags = models.ManyToManyField(Tag, blank=True)\n created_on = models.DateTimeField(auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return \"{} - {}\".format(self.asset, self.user)\n\n\nclass Transcription(MetricsModelMixin(\"transcription\"), models.Model):\n asset = models.ForeignKey(Asset, on_delete=models.CASCADE)\n\n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n\n text = models.TextField(blank=True)\n status = models.CharField(\n max_length=10, choices=Status.CHOICES, default=Status.DEFAULT\n )\n\n created_on = models.DateTimeField(auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return str(self.asset)\n\n\nclass AssetTranscriptionReservation(models.Model):\n \"\"\"\n Records a user's reservation to transcribe a particular asset\n \"\"\"\n\n asset = models.OneToOneField(Asset, on_delete=models.CASCADE)\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n\n created_on = models.DateTimeField(editable=False, auto_now_add=True)\n updated_on = models.DateTimeField(auto_now=True)\n", "path": "concordia/models.py"}]} | 2,628 | 435 |
gh_patches_debug_25528 | rasdani/github-patches | git_diff | scrapy__scrapy-2464 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
String value for order of Scrapy component
If Scrapy component order is defined as a string, it leads to undefined behaviour on Python 2 and to the following errors on Python 3:
```
File "/usr/local/lib/python3.5/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/usr/local/lib/python3.5/site-packages/scrapy/middleware.py", line 29, in from_settings
mwlist = cls._get_mwlist_from_settings(settings)
File "/usr/local/lib/python3.5/site-packages/scrapy/core/spidermw.py", line 21, in _get_mwlist_from_settings
return build_component_list(settings.getwithbase('SPIDER_MIDDLEWARES'))
File "/usr/local/lib/python3.5/site-packages/scrapy/utils/conf.py", line 47, in build_component_list
return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]
builtins.TypeError: unorderable types: str() < int()
```
My guess is that either 1) the order of a Scrapy component should be stated as an integer (or `None`) and there should be a check somewhere, or 2) the sorting logic should be fixed.
--- END ISSUE ---
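For context, the traceback above comes down to Python 3 refusing to order heterogeneous types: `sorted(six.iteritems(compdict), key=itemgetter(1))` fails as soon as any component's order value is a string. A minimal, hypothetical sketch of the kind of up-front validation that avoids this (the function name here is illustrative, not Scrapy's actual API; the project's real patch appears further below):
```python
import numbers

def validate_component_orders(compdict):
    # Illustrative only: reject order values that are neither None nor real numbers,
    # so a misconfigured string order fails fast with a clear message instead of a
    # TypeError raised later inside sorted().
    for name, value in compdict.items():
        if value is not None and not isinstance(value, numbers.Real):
            raise ValueError(
                "Invalid value {!r} for component {!r}: expected a real number "
                "or None".format(value, name)
            )

validate_component_orders({"myproject.middlewares.Foo": 543})      # passes
# validate_component_orders({"myproject.middlewares.Foo": "543"})  # raises ValueError
```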
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/conf.py`
Content:
```
1 import os
2 import sys
3 from operator import itemgetter
4
5 import six
6 from six.moves.configparser import SafeConfigParser
7
8 from scrapy.settings import BaseSettings
9 from scrapy.utils.deprecate import update_classpath
10 from scrapy.utils.python import without_none_values
11
12
13 def build_component_list(compdict, custom=None, convert=update_classpath):
14 """Compose a component list from a { class: order } dictionary."""
15
16 def _check_components(complist):
17 if len({convert(c) for c in complist}) != len(complist):
18 raise ValueError('Some paths in {!r} convert to the same object, '
19 'please update your settings'.format(complist))
20
21 def _map_keys(compdict):
22 if isinstance(compdict, BaseSettings):
23 compbs = BaseSettings()
24 for k, v in six.iteritems(compdict):
25 prio = compdict.getpriority(k)
26 if compbs.getpriority(convert(k)) == prio:
27 raise ValueError('Some paths in {!r} convert to the same '
28 'object, please update your settings'
29 ''.format(list(compdict.keys())))
30 else:
31 compbs.set(convert(k), v, priority=prio)
32 return compbs
33 else:
34 _check_components(compdict)
35 return {convert(k): v for k, v in six.iteritems(compdict)}
36
37 # BEGIN Backwards compatibility for old (base, custom) call signature
38 if isinstance(custom, (list, tuple)):
39 _check_components(custom)
40 return type(custom)(convert(c) for c in custom)
41
42 if custom is not None:
43 compdict.update(custom)
44 # END Backwards compatibility
45
46 compdict = without_none_values(_map_keys(compdict))
47 return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]
48
49
50 def arglist_to_dict(arglist):
51 """Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a
52 dict
53 """
54 return dict(x.split('=', 1) for x in arglist)
55
56
57 def closest_scrapy_cfg(path='.', prevpath=None):
58 """Return the path to the closest scrapy.cfg file by traversing the current
59 directory and its parents
60 """
61 if path == prevpath:
62 return ''
63 path = os.path.abspath(path)
64 cfgfile = os.path.join(path, 'scrapy.cfg')
65 if os.path.exists(cfgfile):
66 return cfgfile
67 return closest_scrapy_cfg(os.path.dirname(path), path)
68
69
70 def init_env(project='default', set_syspath=True):
71 """Initialize environment to use command-line tool from inside a project
72 dir. This sets the Scrapy settings module and modifies the Python path to
73 be able to locate the project module.
74 """
75 cfg = get_config()
76 if cfg.has_option('settings', project):
77 os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)
78 closest = closest_scrapy_cfg()
79 if closest:
80 projdir = os.path.dirname(closest)
81 if set_syspath and projdir not in sys.path:
82 sys.path.append(projdir)
83
84
85 def get_config(use_closest=True):
86 """Get Scrapy config file as a SafeConfigParser"""
87 sources = get_sources(use_closest)
88 cfg = SafeConfigParser()
89 cfg.read(sources)
90 return cfg
91
92
93 def get_sources(use_closest=True):
94 xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \
95 os.path.expanduser('~/.config')
96 sources = ['/etc/scrapy.cfg', r'c:\scrapy\scrapy.cfg',
97 xdg_config_home + '/scrapy.cfg',
98 os.path.expanduser('~/.scrapy.cfg')]
99 if use_closest:
100 sources.append(closest_scrapy_cfg())
101 return sources
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py
--- a/scrapy/utils/conf.py
+++ b/scrapy/utils/conf.py
@@ -1,5 +1,6 @@
import os
import sys
+import numbers
from operator import itemgetter
import six
@@ -34,6 +35,13 @@
_check_components(compdict)
return {convert(k): v for k, v in six.iteritems(compdict)}
+ def _validate_values(compdict):
+ """Fail if a value in the components dict is not a real number or None."""
+ for name, value in six.iteritems(compdict):
+ if value is not None and not isinstance(value, numbers.Real):
+ raise ValueError('Invalid value {} for component {}, please provide ' \
+ 'a real number or None instead'.format(value, name))
+
# BEGIN Backwards compatibility for old (base, custom) call signature
if isinstance(custom, (list, tuple)):
_check_components(custom)
@@ -43,6 +51,7 @@
compdict.update(custom)
# END Backwards compatibility
+ _validate_values(compdict)
compdict = without_none_values(_map_keys(compdict))
return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]
| {"golden_diff": "diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py\n--- a/scrapy/utils/conf.py\n+++ b/scrapy/utils/conf.py\n@@ -1,5 +1,6 @@\n import os\n import sys\n+import numbers\n from operator import itemgetter\n \n import six\n@@ -34,6 +35,13 @@\n _check_components(compdict)\n return {convert(k): v for k, v in six.iteritems(compdict)}\n \n+ def _validate_values(compdict):\n+ \"\"\"Fail if a value in the components dict is not a real number or None.\"\"\"\n+ for name, value in six.iteritems(compdict):\n+ if value is not None and not isinstance(value, numbers.Real):\n+ raise ValueError('Invalid value {} for component {}, please provide ' \\\n+ 'a real number or None instead'.format(value, name))\n+\n # BEGIN Backwards compatibility for old (base, custom) call signature\n if isinstance(custom, (list, tuple)):\n _check_components(custom)\n@@ -43,6 +51,7 @@\n compdict.update(custom)\n # END Backwards compatibility\n \n+ _validate_values(compdict)\n compdict = without_none_values(_map_keys(compdict))\n return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]\n", "issue": "String value for order of Scrapy component\nIf Scrapy component order is defined as a string, it leads to undefined behaviour on Python 2 and to the following errors on Python 3:\r\n```\r\nFile \"/usr/local/lib/python3.5/site-packages/scrapy/middleware.py\", line 58, in from_crawler\r\n return cls.from_settings(crawler.settings, crawler)\r\nFile \"/usr/local/lib/python3.5/site-packages/scrapy/middleware.py\", line 29, in from_settings\r\n mwlist = cls._get_mwlist_from_settings(settings)\r\nFile \"/usr/local/lib/python3.5/site-packages/scrapy/core/spidermw.py\", line 21, in _get_mwlist_from_settings\r\n return build_component_list(settings.getwithbase('SPIDER_MIDDLEWARES'))\r\nFile \"/usr/local/lib/python3.5/site-packages/scrapy/utils/conf.py\", line 47, in build_component_list\r\n return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]\r\nbuiltins.TypeError: unorderable types: str() < int()\r\n```\r\n\r\nMy guess that 1) order of a Scrapy component should be stated as of integer type (or `None`) and there should be a check somewhere, 2) or the sorting logic should be fixed.\n", "before_files": [{"content": "import os\nimport sys\nfrom operator import itemgetter\n\nimport six\nfrom six.moves.configparser import SafeConfigParser\n\nfrom scrapy.settings import BaseSettings\nfrom scrapy.utils.deprecate import update_classpath\nfrom scrapy.utils.python import without_none_values\n\n\ndef build_component_list(compdict, custom=None, convert=update_classpath):\n \"\"\"Compose a component list from a { class: order } dictionary.\"\"\"\n\n def _check_components(complist):\n if len({convert(c) for c in complist}) != len(complist):\n raise ValueError('Some paths in {!r} convert to the same object, '\n 'please update your settings'.format(complist))\n\n def _map_keys(compdict):\n if isinstance(compdict, BaseSettings):\n compbs = BaseSettings()\n for k, v in six.iteritems(compdict):\n prio = compdict.getpriority(k)\n if compbs.getpriority(convert(k)) == prio:\n raise ValueError('Some paths in {!r} convert to the same '\n 'object, please update your settings'\n ''.format(list(compdict.keys())))\n else:\n compbs.set(convert(k), v, priority=prio)\n return compbs\n else:\n _check_components(compdict)\n return {convert(k): v for k, v in six.iteritems(compdict)}\n\n # BEGIN Backwards compatibility for old (base, custom) call signature\n if isinstance(custom, (list, tuple)):\n 
_check_components(custom)\n return type(custom)(convert(c) for c in custom)\n\n if custom is not None:\n compdict.update(custom)\n # END Backwards compatibility\n\n compdict = without_none_values(_map_keys(compdict))\n return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]\n\n\ndef arglist_to_dict(arglist):\n \"\"\"Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a\n dict\n \"\"\"\n return dict(x.split('=', 1) for x in arglist)\n\n\ndef closest_scrapy_cfg(path='.', prevpath=None):\n \"\"\"Return the path to the closest scrapy.cfg file by traversing the current\n directory and its parents\n \"\"\"\n if path == prevpath:\n return ''\n path = os.path.abspath(path)\n cfgfile = os.path.join(path, 'scrapy.cfg')\n if os.path.exists(cfgfile):\n return cfgfile\n return closest_scrapy_cfg(os.path.dirname(path), path)\n\n\ndef init_env(project='default', set_syspath=True):\n \"\"\"Initialize environment to use command-line tool from inside a project\n dir. This sets the Scrapy settings module and modifies the Python path to\n be able to locate the project module.\n \"\"\"\n cfg = get_config()\n if cfg.has_option('settings', project):\n os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)\n closest = closest_scrapy_cfg()\n if closest:\n projdir = os.path.dirname(closest)\n if set_syspath and projdir not in sys.path:\n sys.path.append(projdir)\n\n\ndef get_config(use_closest=True):\n \"\"\"Get Scrapy config file as a SafeConfigParser\"\"\"\n sources = get_sources(use_closest)\n cfg = SafeConfigParser()\n cfg.read(sources)\n return cfg\n\n\ndef get_sources(use_closest=True):\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \\\n os.path.expanduser('~/.config')\n sources = ['/etc/scrapy.cfg', r'c:\\scrapy\\scrapy.cfg',\n xdg_config_home + '/scrapy.cfg',\n os.path.expanduser('~/.scrapy.cfg')]\n if use_closest:\n sources.append(closest_scrapy_cfg())\n return sources\n", "path": "scrapy/utils/conf.py"}], "after_files": [{"content": "import os\nimport sys\nimport numbers\nfrom operator import itemgetter\n\nimport six\nfrom six.moves.configparser import SafeConfigParser\n\nfrom scrapy.settings import BaseSettings\nfrom scrapy.utils.deprecate import update_classpath\nfrom scrapy.utils.python import without_none_values\n\n\ndef build_component_list(compdict, custom=None, convert=update_classpath):\n \"\"\"Compose a component list from a { class: order } dictionary.\"\"\"\n\n def _check_components(complist):\n if len({convert(c) for c in complist}) != len(complist):\n raise ValueError('Some paths in {!r} convert to the same object, '\n 'please update your settings'.format(complist))\n\n def _map_keys(compdict):\n if isinstance(compdict, BaseSettings):\n compbs = BaseSettings()\n for k, v in six.iteritems(compdict):\n prio = compdict.getpriority(k)\n if compbs.getpriority(convert(k)) == prio:\n raise ValueError('Some paths in {!r} convert to the same '\n 'object, please update your settings'\n ''.format(list(compdict.keys())))\n else:\n compbs.set(convert(k), v, priority=prio)\n return compbs\n else:\n _check_components(compdict)\n return {convert(k): v for k, v in six.iteritems(compdict)}\n\n def _validate_values(compdict):\n \"\"\"Fail if a value in the components dict is not a real number or None.\"\"\"\n for name, value in six.iteritems(compdict):\n if value is not None and not isinstance(value, numbers.Real):\n raise ValueError('Invalid value {} for component {}, please provide ' \\\n 'a real number or None instead'.format(value, name))\n\n # BEGIN 
Backwards compatibility for old (base, custom) call signature\n if isinstance(custom, (list, tuple)):\n _check_components(custom)\n return type(custom)(convert(c) for c in custom)\n\n if custom is not None:\n compdict.update(custom)\n # END Backwards compatibility\n\n _validate_values(compdict)\n compdict = without_none_values(_map_keys(compdict))\n return [k for k, v in sorted(six.iteritems(compdict), key=itemgetter(1))]\n\n\ndef arglist_to_dict(arglist):\n \"\"\"Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a\n dict\n \"\"\"\n return dict(x.split('=', 1) for x in arglist)\n\n\ndef closest_scrapy_cfg(path='.', prevpath=None):\n \"\"\"Return the path to the closest scrapy.cfg file by traversing the current\n directory and its parents\n \"\"\"\n if path == prevpath:\n return ''\n path = os.path.abspath(path)\n cfgfile = os.path.join(path, 'scrapy.cfg')\n if os.path.exists(cfgfile):\n return cfgfile\n return closest_scrapy_cfg(os.path.dirname(path), path)\n\n\ndef init_env(project='default', set_syspath=True):\n \"\"\"Initialize environment to use command-line tool from inside a project\n dir. This sets the Scrapy settings module and modifies the Python path to\n be able to locate the project module.\n \"\"\"\n cfg = get_config()\n if cfg.has_option('settings', project):\n os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)\n closest = closest_scrapy_cfg()\n if closest:\n projdir = os.path.dirname(closest)\n if set_syspath and projdir not in sys.path:\n sys.path.append(projdir)\n\n\ndef get_config(use_closest=True):\n \"\"\"Get Scrapy config file as a SafeConfigParser\"\"\"\n sources = get_sources(use_closest)\n cfg = SafeConfigParser()\n cfg.read(sources)\n return cfg\n\n\ndef get_sources(use_closest=True):\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \\\n os.path.expanduser('~/.config')\n sources = ['/etc/scrapy.cfg', r'c:\\scrapy\\scrapy.cfg',\n xdg_config_home + '/scrapy.cfg',\n os.path.expanduser('~/.scrapy.cfg')]\n if use_closest:\n sources.append(closest_scrapy_cfg())\n return sources\n", "path": "scrapy/utils/conf.py"}]} | 1,544 | 285 |
gh_patches_debug_860 | rasdani/github-patches | git_diff | modin-project__modin-2173 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[OmniSci] Add float32 dtype support
Looks like our calcite serializer doesn't support the float32 type.
--- END ISSUE ---
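For context, `CalciteSerializer.serialize_dtype` (shown below) resolves dtype names through the fixed `dtype_strings` mapping, so a dtype whose name is missing from that dict — here `float32` — fails the lookup with a `KeyError`. A standalone sketch of the failing pattern and the one-line mapping that resolves it (simplified stand-ins for the Modin class; `FLOAT` is the Calcite type name used by the actual patch further below):
```python
import numpy as np

# Simplified stand-in for CalciteSerializer.dtype_strings before the fix.
DTYPE_STRINGS = {
    "int8": "TINYINT",
    "int16": "SMALLINT",
    "int32": "INTEGER",
    "int64": "BIGINT",
    "bool": "BOOLEAN",
    "float64": "DOUBLE",
}

def serialize_dtype(dtype):
    # np.dtype("float32").name == "float32" is not a key above,
    # so this lookup raises KeyError until the mapping is extended.
    return {"type": DTYPE_STRINGS[dtype.name], "nullable": True}

DTYPE_STRINGS["float32"] = "FLOAT"  # the essence of the fix
print(serialize_dtype(np.dtype("float32")))  # {'type': 'FLOAT', 'nullable': True}
```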
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 from .expr import (
15 BaseExpr,
16 LiteralExpr,
17 OpExpr,
18 AggregateExpr,
19 )
20 from .calcite_algebra import (
21 CalciteBaseNode,
22 CalciteInputRefExpr,
23 CalciteInputIdxExpr,
24 CalciteScanNode,
25 CalciteProjectionNode,
26 CalciteFilterNode,
27 CalciteAggregateNode,
28 CalciteCollation,
29 CalciteSortNode,
30 CalciteJoinNode,
31 CalciteUnionNode,
32 )
33 import json
34 import numpy as np
35
36
37 class CalciteSerializer:
38 dtype_strings = {
39 "int8": "TINYINT",
40 "int16": "SMALLINT",
41 "int32": "INTEGER",
42 "int64": "BIGINT",
43 "bool": "BOOLEAN",
44 "float64": "DOUBLE",
45 }
46
47 def serialize(self, plan):
48 return json.dumps({"rels": [self.serialize_item(node) for node in plan]})
49
50 def expect_one_of(self, val, *types):
51 for t in types:
52 if isinstance(val, t):
53 return
54 raise TypeError("Can not serialize {}".format(type(val).__name__))
55
56 def serialize_item(self, item):
57 if isinstance(item, CalciteBaseNode):
58 return self.serialize_node(item)
59 elif isinstance(item, BaseExpr):
60 return self.serialize_expr(item)
61 elif isinstance(item, CalciteCollation):
62 return self.serialize_obj(item)
63 elif isinstance(item, list):
64 return [self.serialize_item(v) for v in item]
65
66 self.expect_one_of(item, str, int)
67 return item
68
69 def serialize_node(self, node):
70 # We need to setup context for proper references
71 # serialization
72 if isinstance(
73 node,
74 (
75 CalciteScanNode,
76 CalciteProjectionNode,
77 CalciteFilterNode,
78 CalciteAggregateNode,
79 CalciteSortNode,
80 CalciteJoinNode,
81 CalciteUnionNode,
82 ),
83 ):
84 return self.serialize_obj(node)
85 else:
86 raise NotImplementedError(
87 "Can not serialize {}".format(type(node).__name__)
88 )
89
90 def serialize_obj(self, obj):
91 res = {}
92 for k, v in obj.__dict__.items():
93 if k[0] != "_":
94 res[k] = self.serialize_item(v)
95 return res
96
97 def serialize_typed_obj(self, obj):
98 res = self.serialize_obj(obj)
99 res["type"] = self.serialize_dtype(obj._dtype)
100 return res
101
102 def serialize_expr(self, expr):
103 if isinstance(expr, LiteralExpr):
104 return self.serialize_literal(expr)
105 elif isinstance(expr, CalciteInputRefExpr):
106 return self.serialize_obj(expr)
107 elif isinstance(expr, CalciteInputIdxExpr):
108 return self.serialize_input_idx(expr)
109 elif isinstance(expr, OpExpr):
110 return self.serialize_typed_obj(expr)
111 elif isinstance(expr, AggregateExpr):
112 return self.serialize_typed_obj(expr)
113 else:
114 raise NotImplementedError(
115 "Can not serialize {}".format(type(expr).__name__)
116 )
117
118 def serialize_literal(self, literal):
119 if literal.val is None:
120 return {
121 "literal": None,
122 "type": "BIGINT",
123 "target_type": "BIGINT",
124 "scale": 0,
125 "precision": 19,
126 "type_scale": 0,
127 "type_precision": 19,
128 }
129 if type(literal.val) is str:
130 return {
131 "literal": literal.val,
132 "type": "CHAR",
133 "target_type": "CHAR",
134 "scale": -2147483648,
135 "precision": len(literal.val),
136 "type_scale": -2147483648,
137 "type_precision": len(literal.val),
138 }
139 if type(literal.val) in (int, np.int8, np.int16, np.int32, np.int64):
140 target_type, precision = self.opts_for_int_type(type(literal.val))
141 return {
142 "literal": int(literal.val),
143 "type": "DECIMAL",
144 "target_type": target_type,
145 "scale": 0,
146 "precision": len(str(literal.val)),
147 "type_scale": 0,
148 "type_precision": precision,
149 }
150 if type(literal.val) in (float, np.float64):
151 str_val = f"{literal.val:f}"
152 precision = len(str_val) - 1
153 scale = precision - str_val.index(".")
154 return {
155 "literal": int(str_val.replace(".", "")),
156 "type": "DECIMAL",
157 "target_type": "DOUBLE",
158 "scale": scale,
159 "precision": precision,
160 "type_scale": -2147483648,
161 "type_precision": 15,
162 }
163 if type(literal.val) is bool:
164 return {
165 "literal": literal.val,
166 "type": "BOOLEAN",
167 "target_type": "BOOLEAN",
168 "scale": -2147483648,
169 "precision": 1,
170 "type_scale": -2147483648,
171 "type_precision": 1,
172 }
173 raise NotImplementedError(f"Can not serialize {type(literal.val).__name__}")
174
175 def opts_for_int_type(self, int_type):
176 if int_type is np.int8:
177 return "TINYINT", 3
178 if int_type is np.int16:
179 return "SMALLINT", 5
180 if int_type is np.int32:
181 return "INTEGER", 10
182 if int_type in (np.int64, int):
183 return "BIGINT", 19
184 raise NotImplementedError(f"Unsupported integer type {int_type.__name__}")
185
186 def serialize_dtype(self, dtype):
187 return {"type": type(self).dtype_strings[dtype.name], "nullable": True}
188
189 def serialize_input_idx(self, expr):
190 return expr.input
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
--- a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
+++ b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
@@ -41,6 +41,7 @@
"int32": "INTEGER",
"int64": "BIGINT",
"bool": "BOOLEAN",
+ "float32": "FLOAT",
"float64": "DOUBLE",
}
| {"golden_diff": "diff --git a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py\n--- a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py\n+++ b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py\n@@ -41,6 +41,7 @@\n \"int32\": \"INTEGER\",\n \"int64\": \"BIGINT\",\n \"bool\": \"BOOLEAN\",\n+ \"float32\": \"FLOAT\",\n \"float64\": \"DOUBLE\",\n }\n", "issue": "[OmniSci] Add float32 dtype support\nLooks like our calcite serializer doesn't support float32 type.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .expr import (\n BaseExpr,\n LiteralExpr,\n OpExpr,\n AggregateExpr,\n)\nfrom .calcite_algebra import (\n CalciteBaseNode,\n CalciteInputRefExpr,\n CalciteInputIdxExpr,\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteCollation,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n)\nimport json\nimport numpy as np\n\n\nclass CalciteSerializer:\n dtype_strings = {\n \"int8\": \"TINYINT\",\n \"int16\": \"SMALLINT\",\n \"int32\": \"INTEGER\",\n \"int64\": \"BIGINT\",\n \"bool\": \"BOOLEAN\",\n \"float64\": \"DOUBLE\",\n }\n\n def serialize(self, plan):\n return json.dumps({\"rels\": [self.serialize_item(node) for node in plan]})\n\n def expect_one_of(self, val, *types):\n for t in types:\n if isinstance(val, t):\n return\n raise TypeError(\"Can not serialize {}\".format(type(val).__name__))\n\n def serialize_item(self, item):\n if isinstance(item, CalciteBaseNode):\n return self.serialize_node(item)\n elif isinstance(item, BaseExpr):\n return self.serialize_expr(item)\n elif isinstance(item, CalciteCollation):\n return self.serialize_obj(item)\n elif isinstance(item, list):\n return [self.serialize_item(v) for v in item]\n\n self.expect_one_of(item, str, int)\n return item\n\n def serialize_node(self, node):\n # We need to setup context for proper references\n # serialization\n if isinstance(\n node,\n (\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n ),\n ):\n return self.serialize_obj(node)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(node).__name__)\n )\n\n def serialize_obj(self, obj):\n res = {}\n for k, v in obj.__dict__.items():\n if k[0] != \"_\":\n res[k] = self.serialize_item(v)\n return res\n\n def serialize_typed_obj(self, obj):\n res = self.serialize_obj(obj)\n res[\"type\"] = self.serialize_dtype(obj._dtype)\n return res\n\n def serialize_expr(self, expr):\n if isinstance(expr, LiteralExpr):\n return self.serialize_literal(expr)\n elif isinstance(expr, CalciteInputRefExpr):\n return 
self.serialize_obj(expr)\n elif isinstance(expr, CalciteInputIdxExpr):\n return self.serialize_input_idx(expr)\n elif isinstance(expr, OpExpr):\n return self.serialize_typed_obj(expr)\n elif isinstance(expr, AggregateExpr):\n return self.serialize_typed_obj(expr)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(expr).__name__)\n )\n\n def serialize_literal(self, literal):\n if literal.val is None:\n return {\n \"literal\": None,\n \"type\": \"BIGINT\",\n \"target_type\": \"BIGINT\",\n \"scale\": 0,\n \"precision\": 19,\n \"type_scale\": 0,\n \"type_precision\": 19,\n }\n if type(literal.val) is str:\n return {\n \"literal\": literal.val,\n \"type\": \"CHAR\",\n \"target_type\": \"CHAR\",\n \"scale\": -2147483648,\n \"precision\": len(literal.val),\n \"type_scale\": -2147483648,\n \"type_precision\": len(literal.val),\n }\n if type(literal.val) in (int, np.int8, np.int16, np.int32, np.int64):\n target_type, precision = self.opts_for_int_type(type(literal.val))\n return {\n \"literal\": int(literal.val),\n \"type\": \"DECIMAL\",\n \"target_type\": target_type,\n \"scale\": 0,\n \"precision\": len(str(literal.val)),\n \"type_scale\": 0,\n \"type_precision\": precision,\n }\n if type(literal.val) in (float, np.float64):\n str_val = f\"{literal.val:f}\"\n precision = len(str_val) - 1\n scale = precision - str_val.index(\".\")\n return {\n \"literal\": int(str_val.replace(\".\", \"\")),\n \"type\": \"DECIMAL\",\n \"target_type\": \"DOUBLE\",\n \"scale\": scale,\n \"precision\": precision,\n \"type_scale\": -2147483648,\n \"type_precision\": 15,\n }\n if type(literal.val) is bool:\n return {\n \"literal\": literal.val,\n \"type\": \"BOOLEAN\",\n \"target_type\": \"BOOLEAN\",\n \"scale\": -2147483648,\n \"precision\": 1,\n \"type_scale\": -2147483648,\n \"type_precision\": 1,\n }\n raise NotImplementedError(f\"Can not serialize {type(literal.val).__name__}\")\n\n def opts_for_int_type(self, int_type):\n if int_type is np.int8:\n return \"TINYINT\", 3\n if int_type is np.int16:\n return \"SMALLINT\", 5\n if int_type is np.int32:\n return \"INTEGER\", 10\n if int_type in (np.int64, int):\n return \"BIGINT\", 19\n raise NotImplementedError(f\"Unsupported integer type {int_type.__name__}\")\n\n def serialize_dtype(self, dtype):\n return {\"type\": type(self).dtype_strings[dtype.name], \"nullable\": True}\n\n def serialize_input_idx(self, expr):\n return expr.input\n", "path": "modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .expr import (\n BaseExpr,\n LiteralExpr,\n OpExpr,\n AggregateExpr,\n)\nfrom .calcite_algebra import (\n CalciteBaseNode,\n CalciteInputRefExpr,\n CalciteInputIdxExpr,\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteCollation,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n)\nimport json\nimport numpy as np\n\n\nclass CalciteSerializer:\n dtype_strings = {\n \"int8\": \"TINYINT\",\n \"int16\": \"SMALLINT\",\n \"int32\": \"INTEGER\",\n \"int64\": \"BIGINT\",\n \"bool\": \"BOOLEAN\",\n \"float32\": \"FLOAT\",\n \"float64\": \"DOUBLE\",\n }\n\n def serialize(self, plan):\n return json.dumps({\"rels\": [self.serialize_item(node) for node in plan]})\n\n def expect_one_of(self, val, *types):\n for t in types:\n if isinstance(val, t):\n return\n raise TypeError(\"Can not serialize {}\".format(type(val).__name__))\n\n def serialize_item(self, item):\n if isinstance(item, CalciteBaseNode):\n return self.serialize_node(item)\n elif isinstance(item, BaseExpr):\n return self.serialize_expr(item)\n elif isinstance(item, CalciteCollation):\n return self.serialize_obj(item)\n elif isinstance(item, list):\n return [self.serialize_item(v) for v in item]\n\n self.expect_one_of(item, str, int)\n return item\n\n def serialize_node(self, node):\n # We need to setup context for proper references\n # serialization\n if isinstance(\n node,\n (\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n ),\n ):\n return self.serialize_obj(node)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(node).__name__)\n )\n\n def serialize_obj(self, obj):\n res = {}\n for k, v in obj.__dict__.items():\n if k[0] != \"_\":\n res[k] = self.serialize_item(v)\n return res\n\n def serialize_typed_obj(self, obj):\n res = self.serialize_obj(obj)\n res[\"type\"] = self.serialize_dtype(obj._dtype)\n return res\n\n def serialize_expr(self, expr):\n if isinstance(expr, LiteralExpr):\n return self.serialize_literal(expr)\n elif isinstance(expr, CalciteInputRefExpr):\n return self.serialize_obj(expr)\n elif isinstance(expr, CalciteInputIdxExpr):\n return self.serialize_input_idx(expr)\n elif isinstance(expr, OpExpr):\n return self.serialize_typed_obj(expr)\n elif isinstance(expr, AggregateExpr):\n return self.serialize_typed_obj(expr)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(expr).__name__)\n )\n\n def serialize_literal(self, literal):\n if literal.val is None:\n return {\n \"literal\": None,\n \"type\": \"BIGINT\",\n \"target_type\": \"BIGINT\",\n \"scale\": 0,\n \"precision\": 19,\n \"type_scale\": 0,\n \"type_precision\": 19,\n }\n if type(literal.val) is str:\n return {\n \"literal\": literal.val,\n \"type\": \"CHAR\",\n \"target_type\": \"CHAR\",\n \"scale\": -2147483648,\n \"precision\": len(literal.val),\n \"type_scale\": -2147483648,\n \"type_precision\": len(literal.val),\n }\n if type(literal.val) in (int, np.int8, np.int16, np.int32, np.int64):\n target_type, precision = self.opts_for_int_type(type(literal.val))\n return {\n \"literal\": int(literal.val),\n \"type\": \"DECIMAL\",\n \"target_type\": target_type,\n \"scale\": 0,\n \"precision\": len(str(literal.val)),\n \"type_scale\": 0,\n \"type_precision\": precision,\n }\n if type(literal.val) in (float, np.float64):\n str_val = f\"{literal.val:f}\"\n 
precision = len(str_val) - 1\n scale = precision - str_val.index(\".\")\n return {\n \"literal\": int(str_val.replace(\".\", \"\")),\n \"type\": \"DECIMAL\",\n \"target_type\": \"DOUBLE\",\n \"scale\": scale,\n \"precision\": precision,\n \"type_scale\": -2147483648,\n \"type_precision\": 15,\n }\n if type(literal.val) is bool:\n return {\n \"literal\": literal.val,\n \"type\": \"BOOLEAN\",\n \"target_type\": \"BOOLEAN\",\n \"scale\": -2147483648,\n \"precision\": 1,\n \"type_scale\": -2147483648,\n \"type_precision\": 1,\n }\n raise NotImplementedError(f\"Can not serialize {type(literal.val).__name__}\")\n\n def opts_for_int_type(self, int_type):\n if int_type is np.int8:\n return \"TINYINT\", 3\n if int_type is np.int16:\n return \"SMALLINT\", 5\n if int_type is np.int32:\n return \"INTEGER\", 10\n if int_type in (np.int64, int):\n return \"BIGINT\", 19\n raise NotImplementedError(f\"Unsupported integer type {int_type.__name__}\")\n\n def serialize_dtype(self, dtype):\n return {\"type\": type(self).dtype_strings[dtype.name], \"nullable\": True}\n\n def serialize_input_idx(self, expr):\n return expr.input\n", "path": "modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py"}]} | 2,219 | 147 |
gh_patches_debug_7393 | rasdani/github-patches | git_diff | ivy-llc__ivy-19179 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
column_stack
--- END ISSUE ---
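For context, the request is to expose `column_stack` in Ivy's JAX frontend. NumPy's `column_stack` turns 1-D inputs into column vectors and concatenates everything along axis 1 (the mirror image of `row_stack`, which is already present in the file below). A small NumPy-only reference of the expected behaviour — illustrative only; the frontend implementation itself appears in the patch further below:
```python
import numpy as np

def column_stack_reference(arrays):
    # Reference semantics: 1-D arrays become (n, 1) columns, then concatenate on axis 1.
    cols = [a.reshape(-1, 1) if a.ndim == 1 else a for a in arrays]
    return np.concatenate(cols, axis=1)

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(column_stack_reference([a, b]))   # [[1 4]
                                        #  [2 5]
                                        #  [3 6]]
print(np.column_stack((a, b)))          # identical result from NumPy itself
```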
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/jax/numpy/manipulations.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 handle_jax_dtype,
6 )
7 from ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs
8
9
10 @to_ivy_arrays_and_back
11 def clip(a, a_min=None, a_max=None, out=None):
12 ivy.utils.assertions.check_all_or_any_fn(
13 a_min,
14 a_max,
15 fn=ivy.exists,
16 type="any",
17 limit=[1, 2],
18 message="at most one of a_min or a_max can be None",
19 )
20 a = ivy.array(a)
21 if a_min is None:
22 a, a_max = promote_types_of_jax_inputs(a, a_max)
23 return ivy.minimum(a, a_max, out=out)
24 if a_max is None:
25 a, a_min = promote_types_of_jax_inputs(a, a_min)
26 return ivy.maximum(a, a_min, out=out)
27 return ivy.clip(a, a_min, a_max, out=out)
28
29
30 @handle_jax_dtype
31 @to_ivy_arrays_and_back
32 def concatenate(arrays, axis=0, dtype=None):
33 ret = ivy.concat(arrays, axis=axis)
34 if dtype:
35 ret = ivy.array(ret, dtype=dtype)
36 return ret
37
38
39 @to_ivy_arrays_and_back
40 def repeat(a, repeats, axis=None, *, total_repeat_length=None):
41 return ivy.repeat(a, repeats, axis=axis)
42
43
44 @to_ivy_arrays_and_back
45 def reshape(a, newshape, order="C"):
46 return ivy.reshape(a, shape=newshape, order=order)
47
48
49 @to_ivy_arrays_and_back
50 def ravel(a, order="C"):
51 return ivy.reshape(a, shape=(-1,), order=order)
52
53
54 @to_ivy_arrays_and_back
55 def resize(a, new_shape):
56 a = ivy.array(a)
57 resized_a = ivy.reshape(a, new_shape)
58 return resized_a
59
60
61 @to_ivy_arrays_and_back
62 def moveaxis(a, source, destination):
63 return ivy.moveaxis(a, source, destination)
64
65
66 @to_ivy_arrays_and_back
67 def flipud(m):
68 return ivy.flipud(m, out=None)
69
70
71 @to_ivy_arrays_and_back
72 def transpose(a, axes=None):
73 if ivy.isscalar(a):
74 return ivy.array(a)
75 elif a.ndim == 1:
76 return a
77 if not axes:
78 axes = list(range(len(a.shape)))[::-1]
79 if type(axes) is int:
80 axes = [axes]
81 if (len(a.shape) == 0 and not axes) or (len(a.shape) == 1 and axes[0] == 0):
82 return a
83 return ivy.permute_dims(a, axes, out=None)
84
85
86 @to_ivy_arrays_and_back
87 def flip(m, axis=None):
88 return ivy.flip(m, axis=axis)
89
90
91 @to_ivy_arrays_and_back
92 def fliplr(m):
93 return ivy.fliplr(m)
94
95
96 @to_ivy_arrays_and_back
97 def expand_dims(a, axis):
98 return ivy.expand_dims(a, axis=axis)
99
100
101 @to_ivy_arrays_and_back
102 def stack(arrays, axis=0, out=None, dtype=None):
103 if dtype:
104 return ivy.astype(
105 ivy.stack(arrays, axis=axis, out=out), ivy.as_ivy_dtype(dtype)
106 )
107 return ivy.stack(arrays, axis=axis, out=out)
108
109
110 @to_ivy_arrays_and_back
111 def take(
112 a,
113 indices,
114 axis=None,
115 out=None,
116 mode=None,
117 unique_indices=False,
118 indices_are_sorted=False,
119 fill_value=None,
120 ):
121 return ivy.gather(a, indices, axis=axis, out=out)
122
123
124 @to_ivy_arrays_and_back
125 def broadcast_arrays(*args):
126 return ivy.broadcast_arrays(*args)
127
128
129 @to_ivy_arrays_and_back
130 def broadcast_shapes(*shapes):
131 return ivy.broadcast_shapes(*shapes)
132
133
134 @to_ivy_arrays_and_back
135 def broadcast_to(array, shape):
136 return ivy.broadcast_to(array, shape)
137
138
139 @to_ivy_arrays_and_back
140 def append(arr, values, axis=None):
141 if axis is None:
142 return ivy.concat((ivy.flatten(arr), ivy.flatten(values)), axis=0)
143 else:
144 return ivy.concat((arr, values), axis=axis)
145
146
147 @to_ivy_arrays_and_back
148 def swapaxes(a, axis1, axis2):
149 return ivy.swapaxes(a, axis1, axis2)
150
151
152 @to_ivy_arrays_and_back
153 def atleast_3d(*arys):
154 return ivy.atleast_3d(*arys)
155
156
157 @to_ivy_arrays_and_back
158 def atleast_1d(*arys):
159 return ivy.atleast_1d(*arys)
160
161
162 @to_ivy_arrays_and_back
163 def atleast_2d(*arys):
164 return ivy.atleast_2d(*arys)
165
166
167 @to_ivy_arrays_and_back
168 def tril(m, k=0):
169 return ivy.tril(m, k=k)
170
171
172 @to_ivy_arrays_and_back
173 def block(arr):
174 # TODO: reimplement block
175 raise ivy.utils.exceptions.IvyNotImplementedError()
176
177
178 @to_ivy_arrays_and_back
179 def squeeze(a, axis=None):
180 return ivy.squeeze(a, axis=axis)
181
182
183 @to_ivy_arrays_and_back
184 def rot90(m, k=1, axes=(0, 1)):
185 return ivy.rot90(m, k=k, axes=axes)
186
187
188 @to_ivy_arrays_and_back
189 def split(ary, indices_or_sections, axis=0):
190 if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
191 indices_or_sections = (
192 ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[axis]])
193 .astype(ivy.int8)
194 .to_list()
195 )
196 return ivy.split(
197 ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=False
198 )
199
200
201 @to_ivy_arrays_and_back
202 def array_split(ary, indices_or_sections, axis=0):
203 return ivy.split(
204 ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=True
205 )
206
207
208 @to_ivy_arrays_and_back
209 def tile(A, reps):
210 return ivy.tile(A, reps)
211
212
213 @to_ivy_arrays_and_back
214 def dsplit(ary, indices_or_sections):
215 if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
216 indices_or_sections = (
217 ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[2]])
218 .astype(ivy.int8)
219 .to_list()
220 )
221 return ivy.dsplit(ary, indices_or_sections)
222
223
224 @to_ivy_arrays_and_back
225 def dstack(tup, dtype=None):
226 return ivy.dstack(tup)
227
228
229 @to_ivy_arrays_and_back
230 def vsplit(ary, indices_or_sections):
231 if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
232 indices_or_sections = (
233 ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])
234 .astype(ivy.int8)
235 .to_list()
236 )
237 return ivy.vsplit(ary, indices_or_sections)
238
239
240 @to_ivy_arrays_and_back
241 def hsplit(ary, indices_or_sections):
242 if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
243 if ary.ndim == 1:
244 indices_or_sections = (
245 ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])
246 .astype(ivy.int8)
247 .to_list()
248 )
249 else:
250 indices_or_sections = (
251 ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[1]])
252 .astype(ivy.int8)
253 .to_list()
254 )
255 return ivy.hsplit(ary, indices_or_sections)
256
257
258 @to_ivy_arrays_and_back
259 def roll(a, shift, axis=None):
260 return ivy.roll(a, shift, axis=axis)
261
262
263 @to_ivy_arrays_and_back
264 def row_stack(tup):
265 if len(ivy.shape(tup[0])) == 1:
266 xs = []
267 for t in tup:
268 xs += [ivy.reshape(t, (1, ivy.shape(t)[0]))]
269 return ivy.concat(xs, axis=0)
270 return ivy.concat(tup, axis=0)
271
272
273 @to_ivy_arrays_and_back
274 def pad(array, pad_width, mode="constant", **kwargs):
275 return ivy.pad(array, pad_width, mode=mode, **kwargs)
276
277
278 def hamming(M):
279 if M <= 1:
280 return ivy.ones([M], dtype=ivy.float64)
281 n = ivy.arange(M)
282 ret = 0.54 - 0.46 * ivy.cos(2.0 * ivy.pi * n / (M - 1))
283 return ret
284
285
286 @to_ivy_arrays_and_back
287 def hanning(M):
288 if M <= 1:
289 return ivy.ones([M], dtype=ivy.float64)
290 n = ivy.arange(M)
291 ret = 0.5 * (1 - ivy.cos(2.0 * ivy.pi * n / (M - 1)))
292 return ret
293
294
295 @to_ivy_arrays_and_back
296 def kaiser(M, beta):
297 if M <= 1:
298 return ivy.ones([M], dtype=ivy.float64)
299 n = ivy.arange(M)
300 alpha = 0.5 * (M - 1)
301 ret = ivy.i0(beta * ivy.sqrt(1 - ((n - alpha) / alpha) ** 2)) / ivy.i0(beta)
302 return ret
303
304
305 @handle_jax_dtype
306 @to_ivy_arrays_and_back
307 def tri(N, M=None, k=0, dtype="float64"):
308 if M is None:
309 M = N
310 ones = ivy.ones((N, M), dtype=dtype)
311 return ivy.tril(ones, k=k)
312
313
314 @to_ivy_arrays_and_back
315 def blackman(M):
316 if M < 1:
317 return ivy.array([])
318 if M == 1:
319 return ivy.ones((1,))
320 n = ivy.arange(0, M)
321 alpha = 0.16
322 a0 = (1 - alpha) / 2
323 a1 = 1 / 2
324 a2 = alpha / 2
325 ret = (
326 a0
327 - a1 * ivy.cos(2 * ivy.pi * n / (M - 1))
328 + a2 * ivy.cos(4 * ivy.pi * n / (M - 1))
329 )
330 return ret
331
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/jax/numpy/manipulations.py b/ivy/functional/frontends/jax/numpy/manipulations.py
--- a/ivy/functional/frontends/jax/numpy/manipulations.py
+++ b/ivy/functional/frontends/jax/numpy/manipulations.py
@@ -270,6 +270,16 @@
return ivy.concat(tup, axis=0)
+@to_ivy_arrays_and_back
+def column_stack(tup):
+ if len(ivy.shape(tup[0])) == 1:
+ ys = []
+ for t in tup:
+ ys += [ivy.reshape(t, (ivy.shape(t)[0], 1))]
+ return ivy.concat(ys, axis=1)
+ return ivy.concat(tup, axis=1)
+
+
@to_ivy_arrays_and_back
def pad(array, pad_width, mode="constant", **kwargs):
return ivy.pad(array, pad_width, mode=mode, **kwargs)
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/manipulations.py b/ivy/functional/frontends/jax/numpy/manipulations.py\n--- a/ivy/functional/frontends/jax/numpy/manipulations.py\n+++ b/ivy/functional/frontends/jax/numpy/manipulations.py\n@@ -270,6 +270,16 @@\n return ivy.concat(tup, axis=0)\n \n \n+@to_ivy_arrays_and_back\n+def column_stack(tup):\n+ if len(ivy.shape(tup[0])) == 1:\n+ ys = []\n+ for t in tup:\n+ ys += [ivy.reshape(t, (ivy.shape(t)[0], 1))]\n+ return ivy.concat(ys, axis=1)\n+ return ivy.concat(tup, axis=1)\n+\n+\n @to_ivy_arrays_and_back\n def pad(array, pad_width, mode=\"constant\", **kwargs):\n return ivy.pad(array, pad_width, mode=mode, **kwargs)\n", "issue": "column_stack\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n\n\n@to_ivy_arrays_and_back\ndef clip(a, a_min=None, a_max=None, out=None):\n ivy.utils.assertions.check_all_or_any_fn(\n a_min,\n a_max,\n fn=ivy.exists,\n type=\"any\",\n limit=[1, 2],\n message=\"at most one of a_min or a_max can be None\",\n )\n a = ivy.array(a)\n if a_min is None:\n a, a_max = promote_types_of_jax_inputs(a, a_max)\n return ivy.minimum(a, a_max, out=out)\n if a_max is None:\n a, a_min = promote_types_of_jax_inputs(a, a_min)\n return ivy.maximum(a, a_min, out=out)\n return ivy.clip(a, a_min, a_max, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef concatenate(arrays, axis=0, dtype=None):\n ret = ivy.concat(arrays, axis=axis)\n if dtype:\n ret = ivy.array(ret, dtype=dtype)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef repeat(a, repeats, axis=None, *, total_repeat_length=None):\n return ivy.repeat(a, repeats, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef reshape(a, newshape, order=\"C\"):\n return ivy.reshape(a, shape=newshape, order=order)\n\n\n@to_ivy_arrays_and_back\ndef ravel(a, order=\"C\"):\n return ivy.reshape(a, shape=(-1,), order=order)\n\n\n@to_ivy_arrays_and_back\ndef resize(a, new_shape):\n a = ivy.array(a)\n resized_a = ivy.reshape(a, new_shape)\n return resized_a\n\n\n@to_ivy_arrays_and_back\ndef moveaxis(a, source, destination):\n return ivy.moveaxis(a, source, destination)\n\n\n@to_ivy_arrays_and_back\ndef flipud(m):\n return ivy.flipud(m, out=None)\n\n\n@to_ivy_arrays_and_back\ndef transpose(a, axes=None):\n if ivy.isscalar(a):\n return ivy.array(a)\n elif a.ndim == 1:\n return a\n if not axes:\n axes = list(range(len(a.shape)))[::-1]\n if type(axes) is int:\n axes = [axes]\n if (len(a.shape) == 0 and not axes) or (len(a.shape) == 1 and axes[0] == 0):\n return a\n return ivy.permute_dims(a, axes, out=None)\n\n\n@to_ivy_arrays_and_back\ndef flip(m, axis=None):\n return ivy.flip(m, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef fliplr(m):\n return ivy.fliplr(m)\n\n\n@to_ivy_arrays_and_back\ndef expand_dims(a, axis):\n return ivy.expand_dims(a, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef stack(arrays, axis=0, out=None, dtype=None):\n if dtype:\n return ivy.astype(\n ivy.stack(arrays, axis=axis, out=out), ivy.as_ivy_dtype(dtype)\n )\n return ivy.stack(arrays, axis=axis, out=out)\n\n\n@to_ivy_arrays_and_back\ndef take(\n a,\n indices,\n axis=None,\n out=None,\n mode=None,\n unique_indices=False,\n indices_are_sorted=False,\n fill_value=None,\n):\n return ivy.gather(a, indices, axis=axis, out=out)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_arrays(*args):\n return 
ivy.broadcast_arrays(*args)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_shapes(*shapes):\n return ivy.broadcast_shapes(*shapes)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_to(array, shape):\n return ivy.broadcast_to(array, shape)\n\n\n@to_ivy_arrays_and_back\ndef append(arr, values, axis=None):\n if axis is None:\n return ivy.concat((ivy.flatten(arr), ivy.flatten(values)), axis=0)\n else:\n return ivy.concat((arr, values), axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef swapaxes(a, axis1, axis2):\n return ivy.swapaxes(a, axis1, axis2)\n\n\n@to_ivy_arrays_and_back\ndef atleast_3d(*arys):\n return ivy.atleast_3d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef atleast_1d(*arys):\n return ivy.atleast_1d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef atleast_2d(*arys):\n return ivy.atleast_2d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef tril(m, k=0):\n return ivy.tril(m, k=k)\n\n\n@to_ivy_arrays_and_back\ndef block(arr):\n # TODO: reimplement block\n raise ivy.utils.exceptions.IvyNotImplementedError()\n\n\n@to_ivy_arrays_and_back\ndef squeeze(a, axis=None):\n return ivy.squeeze(a, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef rot90(m, k=1, axes=(0, 1)):\n return ivy.rot90(m, k=k, axes=axes)\n\n\n@to_ivy_arrays_and_back\ndef split(ary, indices_or_sections, axis=0):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[axis]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.split(\n ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=False\n )\n\n\n@to_ivy_arrays_and_back\ndef array_split(ary, indices_or_sections, axis=0):\n return ivy.split(\n ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=True\n )\n\n\n@to_ivy_arrays_and_back\ndef tile(A, reps):\n return ivy.tile(A, reps)\n\n\n@to_ivy_arrays_and_back\ndef dsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[2]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.dsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef dstack(tup, dtype=None):\n return ivy.dstack(tup)\n\n\n@to_ivy_arrays_and_back\ndef vsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.vsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef hsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n if ary.ndim == 1:\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])\n .astype(ivy.int8)\n .to_list()\n )\n else:\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[1]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.hsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef roll(a, shift, axis=None):\n return ivy.roll(a, shift, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef row_stack(tup):\n if len(ivy.shape(tup[0])) == 1:\n xs = []\n for t in tup:\n xs += [ivy.reshape(t, (1, ivy.shape(t)[0]))]\n return ivy.concat(xs, axis=0)\n return ivy.concat(tup, axis=0)\n\n\n@to_ivy_arrays_and_back\ndef pad(array, pad_width, mode=\"constant\", **kwargs):\n return ivy.pad(array, pad_width, mode=mode, **kwargs)\n\n\ndef hamming(M):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n ret 
= 0.54 - 0.46 * ivy.cos(2.0 * ivy.pi * n / (M - 1))\n return ret\n\n\n@to_ivy_arrays_and_back\ndef hanning(M):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n ret = 0.5 * (1 - ivy.cos(2.0 * ivy.pi * n / (M - 1)))\n return ret\n\n\n@to_ivy_arrays_and_back\ndef kaiser(M, beta):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n alpha = 0.5 * (M - 1)\n ret = ivy.i0(beta * ivy.sqrt(1 - ((n - alpha) / alpha) ** 2)) / ivy.i0(beta)\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef tri(N, M=None, k=0, dtype=\"float64\"):\n if M is None:\n M = N\n ones = ivy.ones((N, M), dtype=dtype)\n return ivy.tril(ones, k=k)\n\n\n@to_ivy_arrays_and_back\ndef blackman(M):\n if M < 1:\n return ivy.array([])\n if M == 1:\n return ivy.ones((1,))\n n = ivy.arange(0, M)\n alpha = 0.16\n a0 = (1 - alpha) / 2\n a1 = 1 / 2\n a2 = alpha / 2\n ret = (\n a0\n - a1 * ivy.cos(2 * ivy.pi * n / (M - 1))\n + a2 * ivy.cos(4 * ivy.pi * n / (M - 1))\n )\n return ret\n", "path": "ivy/functional/frontends/jax/numpy/manipulations.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n\n\n@to_ivy_arrays_and_back\ndef clip(a, a_min=None, a_max=None, out=None):\n ivy.utils.assertions.check_all_or_any_fn(\n a_min,\n a_max,\n fn=ivy.exists,\n type=\"any\",\n limit=[1, 2],\n message=\"at most one of a_min or a_max can be None\",\n )\n a = ivy.array(a)\n if a_min is None:\n a, a_max = promote_types_of_jax_inputs(a, a_max)\n return ivy.minimum(a, a_max, out=out)\n if a_max is None:\n a, a_min = promote_types_of_jax_inputs(a, a_min)\n return ivy.maximum(a, a_min, out=out)\n return ivy.clip(a, a_min, a_max, out=out)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef concatenate(arrays, axis=0, dtype=None):\n ret = ivy.concat(arrays, axis=axis)\n if dtype:\n ret = ivy.array(ret, dtype=dtype)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef repeat(a, repeats, axis=None, *, total_repeat_length=None):\n return ivy.repeat(a, repeats, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef reshape(a, newshape, order=\"C\"):\n return ivy.reshape(a, shape=newshape, order=order)\n\n\n@to_ivy_arrays_and_back\ndef ravel(a, order=\"C\"):\n return ivy.reshape(a, shape=(-1,), order=order)\n\n\n@to_ivy_arrays_and_back\ndef resize(a, new_shape):\n a = ivy.array(a)\n resized_a = ivy.reshape(a, new_shape)\n return resized_a\n\n\n@to_ivy_arrays_and_back\ndef moveaxis(a, source, destination):\n return ivy.moveaxis(a, source, destination)\n\n\n@to_ivy_arrays_and_back\ndef flipud(m):\n return ivy.flipud(m, out=None)\n\n\n@to_ivy_arrays_and_back\ndef transpose(a, axes=None):\n if ivy.isscalar(a):\n return ivy.array(a)\n elif a.ndim == 1:\n return a\n if not axes:\n axes = list(range(len(a.shape)))[::-1]\n if type(axes) is int:\n axes = [axes]\n if (len(a.shape) == 0 and not axes) or (len(a.shape) == 1 and axes[0] == 0):\n return a\n return ivy.permute_dims(a, axes, out=None)\n\n\n@to_ivy_arrays_and_back\ndef flip(m, axis=None):\n return ivy.flip(m, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef fliplr(m):\n return ivy.fliplr(m)\n\n\n@to_ivy_arrays_and_back\ndef expand_dims(a, axis):\n return ivy.expand_dims(a, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef stack(arrays, axis=0, out=None, dtype=None):\n if dtype:\n return ivy.astype(\n ivy.stack(arrays, axis=axis, out=out), ivy.as_ivy_dtype(dtype)\n )\n return ivy.stack(arrays, 
axis=axis, out=out)\n\n\n@to_ivy_arrays_and_back\ndef take(\n a,\n indices,\n axis=None,\n out=None,\n mode=None,\n unique_indices=False,\n indices_are_sorted=False,\n fill_value=None,\n):\n return ivy.gather(a, indices, axis=axis, out=out)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_arrays(*args):\n return ivy.broadcast_arrays(*args)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_shapes(*shapes):\n return ivy.broadcast_shapes(*shapes)\n\n\n@to_ivy_arrays_and_back\ndef broadcast_to(array, shape):\n return ivy.broadcast_to(array, shape)\n\n\n@to_ivy_arrays_and_back\ndef append(arr, values, axis=None):\n if axis is None:\n return ivy.concat((ivy.flatten(arr), ivy.flatten(values)), axis=0)\n else:\n return ivy.concat((arr, values), axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef swapaxes(a, axis1, axis2):\n return ivy.swapaxes(a, axis1, axis2)\n\n\n@to_ivy_arrays_and_back\ndef atleast_3d(*arys):\n return ivy.atleast_3d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef atleast_1d(*arys):\n return ivy.atleast_1d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef atleast_2d(*arys):\n return ivy.atleast_2d(*arys)\n\n\n@to_ivy_arrays_and_back\ndef tril(m, k=0):\n return ivy.tril(m, k=k)\n\n\n@to_ivy_arrays_and_back\ndef block(arr):\n # TODO: reimplement block\n raise ivy.utils.exceptions.IvyNotImplementedError()\n\n\n@to_ivy_arrays_and_back\ndef squeeze(a, axis=None):\n return ivy.squeeze(a, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef rot90(m, k=1, axes=(0, 1)):\n return ivy.rot90(m, k=k, axes=axes)\n\n\n@to_ivy_arrays_and_back\ndef split(ary, indices_or_sections, axis=0):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[axis]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.split(\n ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=False\n )\n\n\n@to_ivy_arrays_and_back\ndef array_split(ary, indices_or_sections, axis=0):\n return ivy.split(\n ary, num_or_size_splits=indices_or_sections, axis=axis, with_remainder=True\n )\n\n\n@to_ivy_arrays_and_back\ndef tile(A, reps):\n return ivy.tile(A, reps)\n\n\n@to_ivy_arrays_and_back\ndef dsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[2]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.dsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef dstack(tup, dtype=None):\n return ivy.dstack(tup)\n\n\n@to_ivy_arrays_and_back\ndef vsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.vsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef hsplit(ary, indices_or_sections):\n if isinstance(indices_or_sections, (list, tuple, ivy.Array)):\n if ary.ndim == 1:\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[0]])\n .astype(ivy.int8)\n .to_list()\n )\n else:\n indices_or_sections = (\n ivy.diff(indices_or_sections, prepend=[0], append=[ary.shape[1]])\n .astype(ivy.int8)\n .to_list()\n )\n return ivy.hsplit(ary, indices_or_sections)\n\n\n@to_ivy_arrays_and_back\ndef roll(a, shift, axis=None):\n return ivy.roll(a, shift, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef row_stack(tup):\n if len(ivy.shape(tup[0])) == 1:\n xs = []\n for t in tup:\n xs += [ivy.reshape(t, (1, 
ivy.shape(t)[0]))]\n return ivy.concat(xs, axis=0)\n return ivy.concat(tup, axis=0)\n\n\n@to_ivy_arrays_and_back\ndef column_stack(tup):\n if len(ivy.shape(tup[0])) == 1:\n ys = []\n for t in tup:\n ys += [ivy.reshape(t, (ivy.shape(t)[0], 1))]\n return ivy.concat(ys, axis=1)\n return ivy.concat(tup, axis=1)\n\n\n@to_ivy_arrays_and_back\ndef pad(array, pad_width, mode=\"constant\", **kwargs):\n return ivy.pad(array, pad_width, mode=mode, **kwargs)\n\n\ndef hamming(M):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n ret = 0.54 - 0.46 * ivy.cos(2.0 * ivy.pi * n / (M - 1))\n return ret\n\n\n@to_ivy_arrays_and_back\ndef hanning(M):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n ret = 0.5 * (1 - ivy.cos(2.0 * ivy.pi * n / (M - 1)))\n return ret\n\n\n@to_ivy_arrays_and_back\ndef kaiser(M, beta):\n if M <= 1:\n return ivy.ones([M], dtype=ivy.float64)\n n = ivy.arange(M)\n alpha = 0.5 * (M - 1)\n ret = ivy.i0(beta * ivy.sqrt(1 - ((n - alpha) / alpha) ** 2)) / ivy.i0(beta)\n return ret\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef tri(N, M=None, k=0, dtype=\"float64\"):\n if M is None:\n M = N\n ones = ivy.ones((N, M), dtype=dtype)\n return ivy.tril(ones, k=k)\n\n\n@to_ivy_arrays_and_back\ndef blackman(M):\n if M < 1:\n return ivy.array([])\n if M == 1:\n return ivy.ones((1,))\n n = ivy.arange(0, M)\n alpha = 0.16\n a0 = (1 - alpha) / 2\n a1 = 1 / 2\n a2 = alpha / 2\n ret = (\n a0\n - a1 * ivy.cos(2 * ivy.pi * n / (M - 1))\n + a2 * ivy.cos(4 * ivy.pi * n / (M - 1))\n )\n return ret\n", "path": "ivy/functional/frontends/jax/numpy/manipulations.py"}]} | 3,575 | 226 |
gh_patches_debug_32161 | rasdani/github-patches | git_diff | sublimelsp__LSP-937 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keep using trigger characters, or start using selectors?
For some reason the final `commit_completion` command re-triggers the completion panel "forever". I need to `esc` to hide it.

Settings are
```
"auto_complete": true,
"auto_complete_commit_on_tab": true,
"auto_complete_with_fields": true,
```
_Originally posted by @deathaxe in https://github.com/sublimelsp/LSP/pull/866#issuecomment-603466761_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/completion.py`
Content:
```
1 import sublime
2 import sublime_plugin
3
4 from .core.configurations import is_supported_syntax
5 from .core.edit import parse_text_edit
6 from .core.logging import debug
7 from .core.protocol import Request, Range, InsertTextFormat
8 from .core.registry import session_for_view, client_from_session, LSPViewEventListener
9 from .core.sessions import Session
10 from .core.settings import settings, client_configs
11 from .core.typing import Any, List, Dict, Optional, Union
12 from .core.views import range_to_region
13 from .core.views import text_document_position_params
14
15
16 completion_kinds = {
17 1: (sublime.KIND_ID_MARKUP, "Ξ", "Text"),
18 2: (sublime.KIND_ID_FUNCTION, "λ", "Method"),
19 3: (sublime.KIND_ID_FUNCTION, "λ", "Function"),
20 4: (sublime.KIND_ID_FUNCTION, "c", "Constructor"),
21 5: (sublime.KIND_ID_VARIABLE, "f", "Field"),
22 6: (sublime.KIND_ID_VARIABLE, "v", "Variable"),
23 7: (sublime.KIND_ID_TYPE, "c", "Class"),
24 8: (sublime.KIND_ID_TYPE, "i", "Interface"),
25 9: (sublime.KIND_ID_NAMESPACE, "◪", "Module"),
26 10: (sublime.KIND_ID_VARIABLE, "ρ", "Property"),
27 11: (sublime.KIND_ID_VARIABLE, "u", "Unit"),
28 12: (sublime.KIND_ID_VARIABLE, "ν", "Value"),
29 13: (sublime.KIND_ID_TYPE, "ε", "Enum"),
30 14: (sublime.KIND_ID_KEYWORD, "κ", "Keyword"),
31 15: (sublime.KIND_ID_SNIPPET, "s", "Snippet"),
32 16: (sublime.KIND_ID_AMBIGUOUS, "c", "Color"),
33 17: (sublime.KIND_ID_AMBIGUOUS, "#", "File"),
34 18: (sublime.KIND_ID_AMBIGUOUS, "⇢", "Reference"),
35 19: (sublime.KIND_ID_AMBIGUOUS, "ƒ", "Folder"),
36 20: (sublime.KIND_ID_TYPE, "ε", "EnumMember"),
37 21: (sublime.KIND_ID_VARIABLE, "π", "Constant"),
38 22: (sublime.KIND_ID_TYPE, "s", "Struct"),
39 23: (sublime.KIND_ID_FUNCTION, "e", "Event"),
40 24: (sublime.KIND_ID_KEYWORD, "ο", "Operator"),
41 25: (sublime.KIND_ID_TYPE, "τ", "Type Parameter")
42 }
43
44
45 def format_completion(item: dict, change_id: Any) -> sublime.CompletionItem:
46 item_kind = item.get("kind")
47 if item_kind:
48 kind = completion_kinds.get(item_kind, sublime.KIND_AMBIGUOUS)
49 else:
50 kind = sublime.KIND_AMBIGUOUS
51
52 if item.get("deprecated", False):
53 kind = (kind[0], '⚠', "⚠ {} - Deprecated".format(kind[2]))
54
55 item["change_id"] = change_id
56
57 return sublime.CompletionItem.command_completion(
58 trigger=item["label"],
59 command="lsp_select_completion_item",
60 args=item,
61 annotation=item.get('detail') or "",
62 kind=kind
63 )
64
65
66 class LspSelectCompletionItemCommand(sublime_plugin.TextCommand):
67 """
68 This command must handle four different kinds of LSP completion items:
69
70 1) plaintext + insertText (e.g. pyls)
71 2) plaintext + textEdit (e.g. intelephense)
72 3) snippet + insertText (???)
73 4) snippet + textEdit (e.g. clangd, intelephense)
74
75 For cases (3) and (4) we are forced to use the "insert_snippet" command.
76 """
77
78 def run(self, edit: sublime.Edit, **item: Any) -> None:
79 # Is it a textEdit or an insertText?
80 text_edit = item.get('textEdit')
81 if text_edit:
82 new_text = text_edit['newText']
83 # this region was valid a few view.change_count() moments back ...
84 edit_region = range_to_region(Range.from_lsp(text_edit['range']), self.view)
85 # ... but this brings it to the present.
86 edit_region = self.view.transform_region_from(edit_region, item["change_id"])
87 selection = self.view.sel()
88 primary_cursor_position = selection[0].b
89 for region in reversed(selection):
90 # For each selection region, apply the same removal as for the "primary" region.
91 # To do that, translate, or offset, the LSP edit region into the non-"primary" regions.
92 # The concept of "primary" is our own, and there is no mention of it in the LSP spec.
93 translation = region.b - primary_cursor_position
94 self.view.erase(edit, sublime.Region(edit_region.a + translation, edit_region.b + translation))
95 else:
96 new_text = item.get('insertText') or item['label']
97
98 # Is it a plaintext or a snippet?
99 if item.get("insertTextFormat", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:
100 self.view.run_command("insert_snippet", {"contents": new_text})
101 else:
102 self.view.run_command("insert", {"characters": new_text})
103
104 # import statements, etc. some servers only return these after a resolve.
105 additional_edits = item.get('additionalTextEdits')
106 if additional_edits:
107 self.apply_additional_edits(additional_edits)
108 else:
109 self.do_resolve(item)
110
111 def do_resolve(self, item: dict) -> None:
112 session = session_for_view(self.view, 'completionProvider', self.view.sel()[0].begin())
113 if not session:
114 return
115
116 client = client_from_session(session)
117 if not client:
118 return
119
120 completion_provider = session.get_capability('completionProvider')
121 has_resolve_provider = completion_provider and completion_provider.get('resolveProvider', False)
122 if has_resolve_provider:
123 client.send_request(Request.resolveCompletionItem(item), self.handle_resolve_response)
124
125 def handle_resolve_response(self, response: Optional[dict]) -> None:
126 if response:
127 additional_edits = response.get('additionalTextEdits')
128 if additional_edits:
129 self.apply_additional_edits(additional_edits)
130
131 def apply_additional_edits(self, additional_edits: List[dict]) -> None:
132 edits = list(parse_text_edit(additional_edit) for additional_edit in additional_edits)
133 debug('applying additional edits:', edits)
134 self.view.run_command("lsp_apply_document_edit", {'changes': edits})
135 sublime.status_message('Applied additional edits for completion')
136
137
138 def resolve(completion_list: sublime.CompletionList, items: List[sublime.CompletionItem], flags: int = 0) -> None:
139 # Resolve the promise on the main thread to prevent any sort of data race for _set_target (see sublime_plugin.py).
140 sublime.set_timeout(lambda: completion_list.set_completions(items, flags))
141
142
143 class CompletionHandler(LSPViewEventListener):
144 def __init__(self, view: sublime.View) -> None:
145 super().__init__(view)
146 self.initialized = False
147 self.enabled = False
148
149 @classmethod
150 def is_applicable(cls, view_settings: dict) -> bool:
151 if 'completion' in settings.disabled_capabilities:
152 return False
153
154 syntax = view_settings.get('syntax')
155 return is_supported_syntax(syntax, client_configs.all) if syntax else False
156
157 def initialize(self) -> None:
158 self.initialized = True
159 session = session_for_view(self.view, 'completionProvider')
160 if session:
161 completionProvider = session.get_capability('completionProvider') or dict() # type: dict
162 # A language server may have an empty dict as CompletionOptions. In that case,
163 # no trigger characters will be registered but we'll still respond to Sublime's
164 # usual query for completions. So the explicit check for None is necessary.
165 self.enabled = True
166
167 trigger_chars = completionProvider.get(
168 'triggerCharacters') or []
169 if trigger_chars:
170 self.register_trigger_chars(session, trigger_chars)
171 # This is to make ST match with labels that have a weird prefix like a space character.
172 self.view.settings().set("auto_complete_preserve_order", "none")
173
174 def _view_language(self, config_name: str) -> Optional[str]:
175 languages = self.view.settings().get('lsp_language')
176 return languages.get(config_name) if languages else None
177
178 def register_trigger_chars(self, session: Session, trigger_chars: List[str]) -> None:
179 completion_triggers = self.view.settings().get('auto_complete_triggers', []) or [] # type: List[Dict[str, str]]
180 view_language = self._view_language(session.config.name)
181 if view_language:
182 for language in session.config.languages:
183 if language.id == view_language:
184 for scope in language.scopes:
185 # debug("registering", trigger_chars, "for", scope)
186 scope_trigger = next(
187 (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),
188 None
189 )
190 if not scope_trigger: # do not override user's trigger settings.
191 completion_triggers.append({
192 'characters': "".join(trigger_chars),
193 'selector': scope
194 })
195
196 self.view.settings().set('auto_complete_triggers', completion_triggers)
197
198 def on_query_completions(self, prefix: str, locations: List[int]) -> Optional[sublime.CompletionList]:
199 if not self.initialized:
200 self.initialize()
201 if not self.enabled:
202 return None
203 client = client_from_session(session_for_view(self.view, 'completionProvider', locations[0]))
204 if not client:
205 return None
206 self.manager.documents.purge_changes(self.view)
207 completion_list = sublime.CompletionList()
208 client.send_request(
209 Request.complete(text_document_position_params(self.view, locations[0])),
210 lambda res: self.handle_response(res, completion_list, self.view.change_id()),
211 lambda res: self.handle_error(res, completion_list))
212 return completion_list
213
214 def handle_response(self, response: Optional[Union[dict, List]],
215 completion_list: sublime.CompletionList, change_id: Any) -> None:
216 response_items = [] # type: List[Dict]
217 incomplete = False
218 if isinstance(response, dict):
219 response_items = response["items"] or []
220 incomplete = response.get("isIncomplete", False)
221 elif isinstance(response, list):
222 response_items = response
223 response_items = sorted(response_items, key=lambda item: item.get("sortText") or item["label"])
224
225 flags = 0
226 if settings.only_show_lsp_completions:
227 flags |= sublime.INHIBIT_WORD_COMPLETIONS
228 flags |= sublime.INHIBIT_EXPLICIT_COMPLETIONS
229
230 if incomplete:
231 flags |= sublime.DYNAMIC_COMPLETIONS
232 resolve(completion_list, [format_completion(i, change_id) for i in response_items], flags)
233
234 def handle_error(self, error: dict, completion_list: sublime.CompletionList) -> None:
235 resolve(completion_list, [])
236 sublime.status_message('Completion error: ' + str(error.get('message')))
237
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/completion.py b/plugin/completion.py
--- a/plugin/completion.py
+++ b/plugin/completion.py
@@ -171,29 +171,15 @@
# This is to make ST match with labels that have a weird prefix like a space character.
self.view.settings().set("auto_complete_preserve_order", "none")
- def _view_language(self, config_name: str) -> Optional[str]:
- languages = self.view.settings().get('lsp_language')
- return languages.get(config_name) if languages else None
-
def register_trigger_chars(self, session: Session, trigger_chars: List[str]) -> None:
- completion_triggers = self.view.settings().get('auto_complete_triggers', []) or [] # type: List[Dict[str, str]]
- view_language = self._view_language(session.config.name)
- if view_language:
- for language in session.config.languages:
- if language.id == view_language:
- for scope in language.scopes:
- # debug("registering", trigger_chars, "for", scope)
- scope_trigger = next(
- (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),
- None
- )
- if not scope_trigger: # do not override user's trigger settings.
- completion_triggers.append({
- 'characters': "".join(trigger_chars),
- 'selector': scope
- })
-
- self.view.settings().set('auto_complete_triggers', completion_triggers)
+ completion_triggers = self.view.settings().get('auto_complete_triggers') or [] # type: List[Dict[str, str]]
+
+ completion_triggers.append({
+ 'characters': "".join(trigger_chars),
+ 'selector': "- comment - punctuation.definition.string.end"
+ })
+
+ self.view.settings().set('auto_complete_triggers', completion_triggers)
def on_query_completions(self, prefix: str, locations: List[int]) -> Optional[sublime.CompletionList]:
if not self.initialized:
| {"golden_diff": "diff --git a/plugin/completion.py b/plugin/completion.py\n--- a/plugin/completion.py\n+++ b/plugin/completion.py\n@@ -171,29 +171,15 @@\n # This is to make ST match with labels that have a weird prefix like a space character.\n self.view.settings().set(\"auto_complete_preserve_order\", \"none\")\n \n- def _view_language(self, config_name: str) -> Optional[str]:\n- languages = self.view.settings().get('lsp_language')\n- return languages.get(config_name) if languages else None\n-\n def register_trigger_chars(self, session: Session, trigger_chars: List[str]) -> None:\n- completion_triggers = self.view.settings().get('auto_complete_triggers', []) or [] # type: List[Dict[str, str]]\n- view_language = self._view_language(session.config.name)\n- if view_language:\n- for language in session.config.languages:\n- if language.id == view_language:\n- for scope in language.scopes:\n- # debug(\"registering\", trigger_chars, \"for\", scope)\n- scope_trigger = next(\n- (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),\n- None\n- )\n- if not scope_trigger: # do not override user's trigger settings.\n- completion_triggers.append({\n- 'characters': \"\".join(trigger_chars),\n- 'selector': scope\n- })\n-\n- self.view.settings().set('auto_complete_triggers', completion_triggers)\n+ completion_triggers = self.view.settings().get('auto_complete_triggers') or [] # type: List[Dict[str, str]]\n+\n+ completion_triggers.append({\n+ 'characters': \"\".join(trigger_chars),\n+ 'selector': \"- comment - punctuation.definition.string.end\"\n+ })\n+\n+ self.view.settings().set('auto_complete_triggers', completion_triggers)\n \n def on_query_completions(self, prefix: str, locations: List[int]) -> Optional[sublime.CompletionList]:\n if not self.initialized:\n", "issue": "Keep using trigger characters, or start using selectors?\nFor some reason the final `commit_completion` command re-triggers the completion panel \"forever\". 
I need to `esc` to hide it.\r\n\r\n\r\n\r\nSettings are\r\n\r\n```\r\n\t\"auto_complete\": true,\r\n\t\"auto_complete_commit_on_tab\": true,\r\n\t\"auto_complete_with_fields\": true,\r\n```\r\n\r\n_Originally posted by @deathaxe in https://github.com/sublimelsp/LSP/pull/866#issuecomment-603466761_\n", "before_files": [{"content": "import sublime\nimport sublime_plugin\n\nfrom .core.configurations import is_supported_syntax\nfrom .core.edit import parse_text_edit\nfrom .core.logging import debug\nfrom .core.protocol import Request, Range, InsertTextFormat\nfrom .core.registry import session_for_view, client_from_session, LSPViewEventListener\nfrom .core.sessions import Session\nfrom .core.settings import settings, client_configs\nfrom .core.typing import Any, List, Dict, Optional, Union\nfrom .core.views import range_to_region\nfrom .core.views import text_document_position_params\n\n\ncompletion_kinds = {\n 1: (sublime.KIND_ID_MARKUP, \"\u039e\", \"Text\"),\n 2: (sublime.KIND_ID_FUNCTION, \"\u03bb\", \"Method\"),\n 3: (sublime.KIND_ID_FUNCTION, \"\u03bb\", \"Function\"),\n 4: (sublime.KIND_ID_FUNCTION, \"c\", \"Constructor\"),\n 5: (sublime.KIND_ID_VARIABLE, \"f\", \"Field\"),\n 6: (sublime.KIND_ID_VARIABLE, \"v\", \"Variable\"),\n 7: (sublime.KIND_ID_TYPE, \"c\", \"Class\"),\n 8: (sublime.KIND_ID_TYPE, \"i\", \"Interface\"),\n 9: (sublime.KIND_ID_NAMESPACE, \"\u25ea\", \"Module\"),\n 10: (sublime.KIND_ID_VARIABLE, \"\u03c1\", \"Property\"),\n 11: (sublime.KIND_ID_VARIABLE, \"u\", \"Unit\"),\n 12: (sublime.KIND_ID_VARIABLE, \"\u03bd\", \"Value\"),\n 13: (sublime.KIND_ID_TYPE, \"\u03b5\", \"Enum\"),\n 14: (sublime.KIND_ID_KEYWORD, \"\u03ba\", \"Keyword\"),\n 15: (sublime.KIND_ID_SNIPPET, \"s\", \"Snippet\"),\n 16: (sublime.KIND_ID_AMBIGUOUS, \"c\", \"Color\"),\n 17: (sublime.KIND_ID_AMBIGUOUS, \"#\", \"File\"),\n 18: (sublime.KIND_ID_AMBIGUOUS, \"\u21e2\", \"Reference\"),\n 19: (sublime.KIND_ID_AMBIGUOUS, \"\u0192\", \"Folder\"),\n 20: (sublime.KIND_ID_TYPE, \"\u03b5\", \"EnumMember\"),\n 21: (sublime.KIND_ID_VARIABLE, \"\u03c0\", \"Constant\"),\n 22: (sublime.KIND_ID_TYPE, \"s\", \"Struct\"),\n 23: (sublime.KIND_ID_FUNCTION, \"e\", \"Event\"),\n 24: (sublime.KIND_ID_KEYWORD, \"\u03bf\", \"Operator\"),\n 25: (sublime.KIND_ID_TYPE, \"\u03c4\", \"Type Parameter\")\n}\n\n\ndef format_completion(item: dict, change_id: Any) -> sublime.CompletionItem:\n item_kind = item.get(\"kind\")\n if item_kind:\n kind = completion_kinds.get(item_kind, sublime.KIND_AMBIGUOUS)\n else:\n kind = sublime.KIND_AMBIGUOUS\n\n if item.get(\"deprecated\", False):\n kind = (kind[0], '\u26a0', \"\u26a0 {} - Deprecated\".format(kind[2]))\n\n item[\"change_id\"] = change_id\n\n return sublime.CompletionItem.command_completion(\n trigger=item[\"label\"],\n command=\"lsp_select_completion_item\",\n args=item,\n annotation=item.get('detail') or \"\",\n kind=kind\n )\n\n\nclass LspSelectCompletionItemCommand(sublime_plugin.TextCommand):\n \"\"\"\n This command must handle four different kinds of LSP completion items:\n\n 1) plaintext + insertText (e.g. pyls)\n 2) plaintext + textEdit (e.g. intelephense)\n 3) snippet + insertText (???)\n 4) snippet + textEdit (e.g. 
clangd, intelephense)\n\n For cases (3) and (4) we are forced to use the \"insert_snippet\" command.\n \"\"\"\n\n def run(self, edit: sublime.Edit, **item: Any) -> None:\n # Is it a textEdit or an insertText?\n text_edit = item.get('textEdit')\n if text_edit:\n new_text = text_edit['newText']\n # this region was valid a few view.change_count() moments back ...\n edit_region = range_to_region(Range.from_lsp(text_edit['range']), self.view)\n # ... but this brings it to the present.\n edit_region = self.view.transform_region_from(edit_region, item[\"change_id\"])\n selection = self.view.sel()\n primary_cursor_position = selection[0].b\n for region in reversed(selection):\n # For each selection region, apply the same removal as for the \"primary\" region.\n # To do that, translate, or offset, the LSP edit region into the non-\"primary\" regions.\n # The concept of \"primary\" is our own, and there is no mention of it in the LSP spec.\n translation = region.b - primary_cursor_position\n self.view.erase(edit, sublime.Region(edit_region.a + translation, edit_region.b + translation))\n else:\n new_text = item.get('insertText') or item['label']\n\n # Is it a plaintext or a snippet?\n if item.get(\"insertTextFormat\", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:\n self.view.run_command(\"insert_snippet\", {\"contents\": new_text})\n else:\n self.view.run_command(\"insert\", {\"characters\": new_text})\n\n # import statements, etc. some servers only return these after a resolve.\n additional_edits = item.get('additionalTextEdits')\n if additional_edits:\n self.apply_additional_edits(additional_edits)\n else:\n self.do_resolve(item)\n\n def do_resolve(self, item: dict) -> None:\n session = session_for_view(self.view, 'completionProvider', self.view.sel()[0].begin())\n if not session:\n return\n\n client = client_from_session(session)\n if not client:\n return\n\n completion_provider = session.get_capability('completionProvider')\n has_resolve_provider = completion_provider and completion_provider.get('resolveProvider', False)\n if has_resolve_provider:\n client.send_request(Request.resolveCompletionItem(item), self.handle_resolve_response)\n\n def handle_resolve_response(self, response: Optional[dict]) -> None:\n if response:\n additional_edits = response.get('additionalTextEdits')\n if additional_edits:\n self.apply_additional_edits(additional_edits)\n\n def apply_additional_edits(self, additional_edits: List[dict]) -> None:\n edits = list(parse_text_edit(additional_edit) for additional_edit in additional_edits)\n debug('applying additional edits:', edits)\n self.view.run_command(\"lsp_apply_document_edit\", {'changes': edits})\n sublime.status_message('Applied additional edits for completion')\n\n\ndef resolve(completion_list: sublime.CompletionList, items: List[sublime.CompletionItem], flags: int = 0) -> None:\n # Resolve the promise on the main thread to prevent any sort of data race for _set_target (see sublime_plugin.py).\n sublime.set_timeout(lambda: completion_list.set_completions(items, flags))\n\n\nclass CompletionHandler(LSPViewEventListener):\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.initialized = False\n self.enabled = False\n\n @classmethod\n def is_applicable(cls, view_settings: dict) -> bool:\n if 'completion' in settings.disabled_capabilities:\n return False\n\n syntax = view_settings.get('syntax')\n return is_supported_syntax(syntax, client_configs.all) if syntax else False\n\n def initialize(self) -> None:\n self.initialized = 
True\n session = session_for_view(self.view, 'completionProvider')\n if session:\n completionProvider = session.get_capability('completionProvider') or dict() # type: dict\n # A language server may have an empty dict as CompletionOptions. In that case,\n # no trigger characters will be registered but we'll still respond to Sublime's\n # usual query for completions. So the explicit check for None is necessary.\n self.enabled = True\n\n trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n if trigger_chars:\n self.register_trigger_chars(session, trigger_chars)\n # This is to make ST match with labels that have a weird prefix like a space character.\n self.view.settings().set(\"auto_complete_preserve_order\", \"none\")\n\n def _view_language(self, config_name: str) -> Optional[str]:\n languages = self.view.settings().get('lsp_language')\n return languages.get(config_name) if languages else None\n\n def register_trigger_chars(self, session: Session, trigger_chars: List[str]) -> None:\n completion_triggers = self.view.settings().get('auto_complete_triggers', []) or [] # type: List[Dict[str, str]]\n view_language = self._view_language(session.config.name)\n if view_language:\n for language in session.config.languages:\n if language.id == view_language:\n for scope in language.scopes:\n # debug(\"registering\", trigger_chars, \"for\", scope)\n scope_trigger = next(\n (trigger for trigger in completion_triggers if trigger.get('selector', None) == scope),\n None\n )\n if not scope_trigger: # do not override user's trigger settings.\n completion_triggers.append({\n 'characters': \"\".join(trigger_chars),\n 'selector': scope\n })\n\n self.view.settings().set('auto_complete_triggers', completion_triggers)\n\n def on_query_completions(self, prefix: str, locations: List[int]) -> Optional[sublime.CompletionList]:\n if not self.initialized:\n self.initialize()\n if not self.enabled:\n return None\n client = client_from_session(session_for_view(self.view, 'completionProvider', locations[0]))\n if not client:\n return None\n self.manager.documents.purge_changes(self.view)\n completion_list = sublime.CompletionList()\n client.send_request(\n Request.complete(text_document_position_params(self.view, locations[0])),\n lambda res: self.handle_response(res, completion_list, self.view.change_id()),\n lambda res: self.handle_error(res, completion_list))\n return completion_list\n\n def handle_response(self, response: Optional[Union[dict, List]],\n completion_list: sublime.CompletionList, change_id: Any) -> None:\n response_items = [] # type: List[Dict]\n incomplete = False\n if isinstance(response, dict):\n response_items = response[\"items\"] or []\n incomplete = response.get(\"isIncomplete\", False)\n elif isinstance(response, list):\n response_items = response\n response_items = sorted(response_items, key=lambda item: item.get(\"sortText\") or item[\"label\"])\n\n flags = 0\n if settings.only_show_lsp_completions:\n flags |= sublime.INHIBIT_WORD_COMPLETIONS\n flags |= sublime.INHIBIT_EXPLICIT_COMPLETIONS\n\n if incomplete:\n flags |= sublime.DYNAMIC_COMPLETIONS\n resolve(completion_list, [format_completion(i, change_id) for i in response_items], flags)\n\n def handle_error(self, error: dict, completion_list: sublime.CompletionList) -> None:\n resolve(completion_list, [])\n sublime.status_message('Completion error: ' + str(error.get('message')))\n", "path": "plugin/completion.py"}], "after_files": [{"content": "import sublime\nimport sublime_plugin\n\nfrom .core.configurations import 
is_supported_syntax\nfrom .core.edit import parse_text_edit\nfrom .core.logging import debug\nfrom .core.protocol import Request, Range, InsertTextFormat\nfrom .core.registry import session_for_view, client_from_session, LSPViewEventListener\nfrom .core.sessions import Session\nfrom .core.settings import settings, client_configs\nfrom .core.typing import Any, List, Dict, Optional, Union\nfrom .core.views import range_to_region\nfrom .core.views import text_document_position_params\n\n\ncompletion_kinds = {\n 1: (sublime.KIND_ID_MARKUP, \"\u039e\", \"Text\"),\n 2: (sublime.KIND_ID_FUNCTION, \"\u03bb\", \"Method\"),\n 3: (sublime.KIND_ID_FUNCTION, \"\u03bb\", \"Function\"),\n 4: (sublime.KIND_ID_FUNCTION, \"c\", \"Constructor\"),\n 5: (sublime.KIND_ID_VARIABLE, \"f\", \"Field\"),\n 6: (sublime.KIND_ID_VARIABLE, \"v\", \"Variable\"),\n 7: (sublime.KIND_ID_TYPE, \"c\", \"Class\"),\n 8: (sublime.KIND_ID_TYPE, \"i\", \"Interface\"),\n 9: (sublime.KIND_ID_NAMESPACE, \"\u25ea\", \"Module\"),\n 10: (sublime.KIND_ID_VARIABLE, \"\u03c1\", \"Property\"),\n 11: (sublime.KIND_ID_VARIABLE, \"u\", \"Unit\"),\n 12: (sublime.KIND_ID_VARIABLE, \"\u03bd\", \"Value\"),\n 13: (sublime.KIND_ID_TYPE, \"\u03b5\", \"Enum\"),\n 14: (sublime.KIND_ID_KEYWORD, \"\u03ba\", \"Keyword\"),\n 15: (sublime.KIND_ID_SNIPPET, \"s\", \"Snippet\"),\n 16: (sublime.KIND_ID_AMBIGUOUS, \"c\", \"Color\"),\n 17: (sublime.KIND_ID_AMBIGUOUS, \"#\", \"File\"),\n 18: (sublime.KIND_ID_AMBIGUOUS, \"\u21e2\", \"Reference\"),\n 19: (sublime.KIND_ID_AMBIGUOUS, \"\u0192\", \"Folder\"),\n 20: (sublime.KIND_ID_TYPE, \"\u03b5\", \"EnumMember\"),\n 21: (sublime.KIND_ID_VARIABLE, \"\u03c0\", \"Constant\"),\n 22: (sublime.KIND_ID_TYPE, \"s\", \"Struct\"),\n 23: (sublime.KIND_ID_FUNCTION, \"e\", \"Event\"),\n 24: (sublime.KIND_ID_KEYWORD, \"\u03bf\", \"Operator\"),\n 25: (sublime.KIND_ID_TYPE, \"\u03c4\", \"Type Parameter\")\n}\n\n\ndef format_completion(item: dict, change_id: Any) -> sublime.CompletionItem:\n item_kind = item.get(\"kind\")\n if item_kind:\n kind = completion_kinds.get(item_kind, sublime.KIND_AMBIGUOUS)\n else:\n kind = sublime.KIND_AMBIGUOUS\n\n if item.get(\"deprecated\", False):\n kind = (kind[0], '\u26a0', \"\u26a0 {} - Deprecated\".format(kind[2]))\n\n item[\"change_id\"] = change_id\n\n return sublime.CompletionItem.command_completion(\n trigger=item[\"label\"],\n command=\"lsp_select_completion_item\",\n args=item,\n annotation=item.get('detail') or \"\",\n kind=kind\n )\n\n\nclass LspSelectCompletionItemCommand(sublime_plugin.TextCommand):\n \"\"\"\n This command must handle four different kinds of LSP completion items:\n\n 1) plaintext + insertText (e.g. pyls)\n 2) plaintext + textEdit (e.g. intelephense)\n 3) snippet + insertText (???)\n 4) snippet + textEdit (e.g. clangd, intelephense)\n\n For cases (3) and (4) we are forced to use the \"insert_snippet\" command.\n \"\"\"\n\n def run(self, edit: sublime.Edit, **item: Any) -> None:\n # Is it a textEdit or an insertText?\n text_edit = item.get('textEdit')\n if text_edit:\n new_text = text_edit['newText']\n # this region was valid a few view.change_count() moments back ...\n edit_region = range_to_region(Range.from_lsp(text_edit['range']), self.view)\n # ... 
but this brings it to the present.\n edit_region = self.view.transform_region_from(edit_region, item[\"change_id\"])\n selection = self.view.sel()\n primary_cursor_position = selection[0].b\n for region in reversed(selection):\n # For each selection region, apply the same removal as for the \"primary\" region.\n # To do that, translate, or offset, the LSP edit region into the non-\"primary\" regions.\n # The concept of \"primary\" is our own, and there is no mention of it in the LSP spec.\n translation = region.b - primary_cursor_position\n self.view.erase(edit, sublime.Region(edit_region.a + translation, edit_region.b + translation))\n else:\n new_text = item.get('insertText') or item['label']\n\n # Is it a plaintext or a snippet?\n if item.get(\"insertTextFormat\", InsertTextFormat.PlainText) == InsertTextFormat.Snippet:\n self.view.run_command(\"insert_snippet\", {\"contents\": new_text})\n else:\n self.view.run_command(\"insert\", {\"characters\": new_text})\n\n # import statements, etc. some servers only return these after a resolve.\n additional_edits = item.get('additionalTextEdits')\n if additional_edits:\n self.apply_additional_edits(additional_edits)\n else:\n self.do_resolve(item)\n\n def do_resolve(self, item: dict) -> None:\n session = session_for_view(self.view, 'completionProvider', self.view.sel()[0].begin())\n if not session:\n return\n\n client = client_from_session(session)\n if not client:\n return\n\n completion_provider = session.get_capability('completionProvider')\n has_resolve_provider = completion_provider and completion_provider.get('resolveProvider', False)\n if has_resolve_provider:\n client.send_request(Request.resolveCompletionItem(item), self.handle_resolve_response)\n\n def handle_resolve_response(self, response: Optional[dict]) -> None:\n if response:\n additional_edits = response.get('additionalTextEdits')\n if additional_edits:\n self.apply_additional_edits(additional_edits)\n\n def apply_additional_edits(self, additional_edits: List[dict]) -> None:\n edits = list(parse_text_edit(additional_edit) for additional_edit in additional_edits)\n debug('applying additional edits:', edits)\n self.view.run_command(\"lsp_apply_document_edit\", {'changes': edits})\n sublime.status_message('Applied additional edits for completion')\n\n\ndef resolve(completion_list: sublime.CompletionList, items: List[sublime.CompletionItem], flags: int = 0) -> None:\n # Resolve the promise on the main thread to prevent any sort of data race for _set_target (see sublime_plugin.py).\n sublime.set_timeout(lambda: completion_list.set_completions(items, flags))\n\n\nclass CompletionHandler(LSPViewEventListener):\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.initialized = False\n self.enabled = False\n\n @classmethod\n def is_applicable(cls, view_settings: dict) -> bool:\n if 'completion' in settings.disabled_capabilities:\n return False\n\n syntax = view_settings.get('syntax')\n return is_supported_syntax(syntax, client_configs.all) if syntax else False\n\n def initialize(self) -> None:\n self.initialized = True\n session = session_for_view(self.view, 'completionProvider')\n if session:\n completionProvider = session.get_capability('completionProvider') or dict() # type: dict\n # A language server may have an empty dict as CompletionOptions. In that case,\n # no trigger characters will be registered but we'll still respond to Sublime's\n # usual query for completions. 
So the explicit check for None is necessary.\n self.enabled = True\n\n trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n if trigger_chars:\n self.register_trigger_chars(session, trigger_chars)\n # This is to make ST match with labels that have a weird prefix like a space character.\n self.view.settings().set(\"auto_complete_preserve_order\", \"none\")\n\n def register_trigger_chars(self, session: Session, trigger_chars: List[str]) -> None:\n completion_triggers = self.view.settings().get('auto_complete_triggers') or [] # type: List[Dict[str, str]]\n\n completion_triggers.append({\n 'characters': \"\".join(trigger_chars),\n 'selector': \"- comment - punctuation.definition.string.end\"\n })\n\n self.view.settings().set('auto_complete_triggers', completion_triggers)\n\n def on_query_completions(self, prefix: str, locations: List[int]) -> Optional[sublime.CompletionList]:\n if not self.initialized:\n self.initialize()\n if not self.enabled:\n return None\n client = client_from_session(session_for_view(self.view, 'completionProvider', locations[0]))\n if not client:\n return None\n self.manager.documents.purge_changes(self.view)\n completion_list = sublime.CompletionList()\n client.send_request(\n Request.complete(text_document_position_params(self.view, locations[0])),\n lambda res: self.handle_response(res, completion_list, self.view.change_id()),\n lambda res: self.handle_error(res, completion_list))\n return completion_list\n\n def handle_response(self, response: Optional[Union[dict, List]],\n completion_list: sublime.CompletionList, change_id: Any) -> None:\n response_items = [] # type: List[Dict]\n incomplete = False\n if isinstance(response, dict):\n response_items = response[\"items\"] or []\n incomplete = response.get(\"isIncomplete\", False)\n elif isinstance(response, list):\n response_items = response\n response_items = sorted(response_items, key=lambda item: item.get(\"sortText\") or item[\"label\"])\n\n flags = 0\n if settings.only_show_lsp_completions:\n flags |= sublime.INHIBIT_WORD_COMPLETIONS\n flags |= sublime.INHIBIT_EXPLICIT_COMPLETIONS\n\n if incomplete:\n flags |= sublime.DYNAMIC_COMPLETIONS\n resolve(completion_list, [format_completion(i, change_id) for i in response_items], flags)\n\n def handle_error(self, error: dict, completion_list: sublime.CompletionList) -> None:\n resolve(completion_list, [])\n sublime.status_message('Completion error: ' + str(error.get('message')))\n", "path": "plugin/completion.py"}]} | 3,529 | 452 |
gh_patches_debug_20102 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-533 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pit bug: the device and type of preds and target may not be the same as the results returned by metric_func
## 🐛 Bug
The device and type of preds and target may not be the same as those of the results returned by metric_func.
### To Reproduce
Use a self-designed metric_func which accepts complex tensors but returns real tensors.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
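
A minimal sketch of the kind of metric described above (the metric function, its name, and the tensor shapes are assumptions for illustration, not code from the repository):

```python
import torch
from torchmetrics.functional.audio.pit import pit

# Hypothetical metric: accepts complex-valued inputs but returns a real-valued
# score per batch element (float dtype), unlike the complex inputs.
def neg_complex_mse(preds: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return -((preds - target).abs() ** 2).mean(dim=-1)

preds = torch.randn(2, 3, 10, dtype=torch.complex64)   # [batch, spk, time]
target = torch.randn(2, 3, 10, dtype=torch.complex64)

# pit allocates its metric matrix with preds.dtype / target.device, so the
# real-valued scores are stored in a complex matrix and the later max/min
# reduction over that matrix fails.
best_metric, best_perm = pit(preds, target, neg_complex_mse, "max")
```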
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchmetrics/functional/audio/pit.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import warnings
15 from itertools import permutations
16 from typing import Any, Callable, Dict, Tuple, Union
17
18 import torch
19 from torch import Tensor
20
21 from torchmetrics.utilities.checks import _check_same_shape
22 from torchmetrics.utilities.imports import _SCIPY_AVAILABLE
23
24 # _ps_dict: cache of permutations
25 # it's necessary to cache it, otherwise it will consume a large amount of time
26 _ps_dict: dict = {} # _ps_dict[str(spk_num)+str(device)] = permutations
27
28
29 def _find_best_perm_by_linear_sum_assignment(
30 metric_mtx: torch.Tensor,
31 eval_func: Union[torch.min, torch.max],
32 ) -> Tuple[Tensor, Tensor]:
33 """Solves the linear sum assignment problem using scipy, and returns the best metric values and the
34 corresponding permutations.
35
36 Args:
37 metric_mtx:
38 the metric matrix, shape [batch_size, spk_num, spk_num]
39 eval_func:
40 the function to reduce the metric values of different the permutations
41
42 Returns:
43 best_metric:
44 shape [batch]
45 best_perm:
46 shape [batch, spk]
47 """
48 from scipy.optimize import linear_sum_assignment
49
50 mmtx = metric_mtx.detach().cpu()
51 best_perm = torch.tensor([linear_sum_assignment(pwm, eval_func == torch.max)[1] for pwm in mmtx])
52 best_perm = best_perm.to(metric_mtx.device)
53 best_metric = torch.gather(metric_mtx, 2, best_perm[:, :, None]).mean([-1, -2])
54 return best_metric, best_perm # shape [batch], shape [batch, spk]
55
56
57 def _find_best_perm_by_exhuastive_method(
58 metric_mtx: torch.Tensor,
59 eval_func: Union[torch.min, torch.max],
60 ) -> Tuple[Tensor, Tensor]:
61 """Solves the linear sum assignment problem using exhuastive method, i.e. exhuastively calculates the metric
62 values of all possible permutations, and returns the best metric values and the corresponding permutations.
63
64 Args:
65 metric_mtx:
66 the metric matrix, shape [batch_size, spk_num, spk_num]
67 eval_func:
68 the function to reduce the metric values of different the permutations
69
70 Returns:
71 best_metric:
72 shape [batch]
73 best_perm:
74 shape [batch, spk]
75 """
76 # create/read/cache the permutations and its indexes
77 # reading from cache would be much faster than creating in CPU then moving to GPU
78 batch_size, spk_num = metric_mtx.shape[:2]
79 key = str(spk_num) + str(metric_mtx.device)
80 if key not in _ps_dict:
81 # ps: all the permutations, shape [spk_num, perm_num]
82 # ps: In i-th permutation, the predcition corresponds to the j-th target is ps[j,i]
83 ps = torch.tensor(list(permutations(range(spk_num))), device=metric_mtx.device).T
84 _ps_dict[key] = ps
85 else:
86 ps = _ps_dict[key] # all the permutations, shape [spk_num, perm_num]
87
88 # find the metric of each permutation
89 perm_num = ps.shape[-1]
90 # shape [batch_size, spk_num, perm_num]
91 bps = ps[None, ...].expand(batch_size, spk_num, perm_num)
92 # shape [batch_size, spk_num, perm_num]
93 metric_of_ps_details = torch.gather(metric_mtx, 2, bps)
94 # shape [batch_size, perm_num]
95 metric_of_ps = metric_of_ps_details.mean(dim=1)
96
97 # find the best metric and best permutation
98 best_metric, best_indexes = eval_func(metric_of_ps, dim=1)
99 best_indexes = best_indexes.detach()
100 best_perm = ps.T[best_indexes, :]
101 return best_metric, best_perm # shape [batch], shape [batch, spk]
102
103
104 def pit(
105 preds: torch.Tensor, target: torch.Tensor, metric_func: Callable, eval_func: str = "max", **kwargs: Dict[str, Any]
106 ) -> Tuple[Tensor, Tensor]:
107 """Permutation invariant training (PIT). The PIT implements the famous Permutation Invariant Training method.
108
109 [1] in speech separation field in order to calculate audio metrics in a permutation invariant way.
110
111 Args:
112 preds:
113 shape [batch, spk, ...]
114 target:
115 shape [batch, spk, ...]
116 metric_func:
117 a metric function accept a batch of target and estimate,
118 i.e. metric_func(preds[:, i, ...], target[:, j, ...]), and returns a batch of metric tensors [batch]
119 eval_func:
120 the function to find the best permutation, can be 'min' or 'max',
121 i.e. the smaller the better or the larger the better.
122 kwargs:
123 additional args for metric_func
124
125 Returns:
126 best_metric of shape [batch],
127 best_perm of shape [batch]
128
129 Example:
130 >>> from torchmetrics.functional.audio import si_sdr
131 >>> # [batch, spk, time]
132 >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])
133 >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])
134 >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')
135 >>> best_metric
136 tensor([-5.1091])
137 >>> best_perm
138 tensor([[0, 1]])
139 >>> pit_permutate(preds, best_perm)
140 tensor([[[-0.0579, 0.3560, -0.9604],
141 [-0.1719, 0.3205, 0.2951]]])
142
143 Reference:
144 [1] `Permutation Invariant Training of Deep Models`_
145 """
146 _check_same_shape(preds, target)
147 if eval_func not in ["max", "min"]:
148 raise ValueError(f'eval_func can only be "max" or "min" but got {eval_func}')
149 if target.ndim < 2:
150 raise ValueError(f"Inputs must be of shape [batch, spk, ...], got {target.shape} and {preds.shape} instead")
151
152 # calculate the metric matrix
153 batch_size, spk_num = target.shape[0:2]
154 metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=preds.dtype, device=target.device)
155 for t in range(spk_num):
156 for e in range(spk_num):
157 metric_mtx[:, t, e] = metric_func(preds[:, e, ...], target[:, t, ...], **kwargs)
158
159 # find best
160 op = torch.max if eval_func == "max" else torch.min
161 if spk_num < 3 or not _SCIPY_AVAILABLE:
162 if spk_num >= 3 and not _SCIPY_AVAILABLE:
163 warnings.warn(
164 f"In pit metric for speaker-num {spk_num}>3, we recommend installing scipy for better performance"
165 )
166
167 best_metric, best_perm = _find_best_perm_by_exhuastive_method(metric_mtx, op)
168 else:
169 best_metric, best_perm = _find_best_perm_by_linear_sum_assignment(metric_mtx, op)
170
171 return best_metric, best_perm
172
173
174 def pit_permutate(preds: Tensor, perm: Tensor) -> Tensor:
175 """permutate estimate according to perm.
176
177 Args:
178 preds (Tensor): the estimates you want to permutate, shape [batch, spk, ...]
179 perm (Tensor): the permutation returned from pit, shape [batch, spk]
180
181 Returns:
182 Tensor: the permutated version of estimate
183
184 Example:
185 >>> from torchmetrics.functional.audio import si_sdr
186 >>> # [batch, spk, time]
187 >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])
188 >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])
189 >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')
190 >>> best_metric
191 tensor([-5.1091])
192 >>> best_perm
193 tensor([[0, 1]])
194 >>> pit_permutate(preds, best_perm)
195 tensor([[[-0.0579, 0.3560, -0.9604],
196 [-0.1719, 0.3205, 0.2951]]])
197 """
198 preds_pmted = torch.stack([torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)])
199 return preds_pmted
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchmetrics/functional/audio/pit.py b/torchmetrics/functional/audio/pit.py
--- a/torchmetrics/functional/audio/pit.py
+++ b/torchmetrics/functional/audio/pit.py
@@ -151,10 +151,17 @@
# calculate the metric matrix
batch_size, spk_num = target.shape[0:2]
- metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=preds.dtype, device=target.device)
- for t in range(spk_num):
- for e in range(spk_num):
- metric_mtx[:, t, e] = metric_func(preds[:, e, ...], target[:, t, ...], **kwargs)
+ metric_mtx = None
+ for target_idx in range(spk_num): # we have spk_num speeches in target in each sample
+ for preds_idx in range(spk_num): # we have spk_num speeches in preds in each sample
+ if metric_mtx is not None:
+ metric_mtx[:, target_idx, preds_idx] = metric_func(
+ preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs
+ )
+ else:
+ first_ele = metric_func(preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs)
+ metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=first_ele.dtype, device=first_ele.device)
+ metric_mtx[:, target_idx, preds_idx] = first_ele
# find best
op = torch.max if eval_func == "max" else torch.min
| {"golden_diff": "diff --git a/torchmetrics/functional/audio/pit.py b/torchmetrics/functional/audio/pit.py\n--- a/torchmetrics/functional/audio/pit.py\n+++ b/torchmetrics/functional/audio/pit.py\n@@ -151,10 +151,17 @@\n \n # calculate the metric matrix\n batch_size, spk_num = target.shape[0:2]\n- metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=preds.dtype, device=target.device)\n- for t in range(spk_num):\n- for e in range(spk_num):\n- metric_mtx[:, t, e] = metric_func(preds[:, e, ...], target[:, t, ...], **kwargs)\n+ metric_mtx = None\n+ for target_idx in range(spk_num): # we have spk_num speeches in target in each sample\n+ for preds_idx in range(spk_num): # we have spk_num speeches in preds in each sample\n+ if metric_mtx is not None:\n+ metric_mtx[:, target_idx, preds_idx] = metric_func(\n+ preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs\n+ )\n+ else:\n+ first_ele = metric_func(preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs)\n+ metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=first_ele.dtype, device=first_ele.device)\n+ metric_mtx[:, target_idx, preds_idx] = first_ele\n \n # find best\n op = torch.max if eval_func == \"max\" else torch.min\n", "issue": "pit bug: the device and type of preds and target may not the same with the results returned by metric_func\n## \ud83d\udc1b Bug\r\n\r\nthe device and type of preds and target may not the same with the results returned by metric_func\r\n\r\n### To Reproduce\r\n\r\nuse a self-designed metric-func which accepts complex tensor but return real tensor\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n#### Code sample\r\n\r\n<!-- Ideally attach a minimal code sample to reproduce the decried issue.\r\nMinimal means having the shortest code but still preserving the bug. -->\r\n\r\n### Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n### Environment\r\n\r\n- PyTorch Version (e.g., 1.0):\r\n- OS (e.g., Linux):\r\n- How you installed PyTorch (`conda`, `pip`, source):\r\n- Build command you used (if compiling from source):\r\n- Python version:\r\n- CUDA/cuDNN version:\r\n- GPU models and configuration:\r\n- Any other relevant information:\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport warnings\nfrom itertools import permutations\nfrom typing import Any, Callable, Dict, Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\nfrom torchmetrics.utilities.imports import _SCIPY_AVAILABLE\n\n# _ps_dict: cache of permutations\n# it's necessary to cache it, otherwise it will consume a large amount of time\n_ps_dict: dict = {} # _ps_dict[str(spk_num)+str(device)] = permutations\n\n\ndef _find_best_perm_by_linear_sum_assignment(\n metric_mtx: torch.Tensor,\n eval_func: Union[torch.min, torch.max],\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Solves the linear sum assignment problem using scipy, and returns the best metric values and the\n corresponding permutations.\n\n Args:\n metric_mtx:\n the metric matrix, shape [batch_size, spk_num, spk_num]\n eval_func:\n the function to reduce the metric values of different the permutations\n\n Returns:\n best_metric:\n shape [batch]\n best_perm:\n shape [batch, spk]\n \"\"\"\n from scipy.optimize import linear_sum_assignment\n\n mmtx = metric_mtx.detach().cpu()\n best_perm = torch.tensor([linear_sum_assignment(pwm, eval_func == torch.max)[1] for pwm in mmtx])\n best_perm = best_perm.to(metric_mtx.device)\n best_metric = torch.gather(metric_mtx, 2, best_perm[:, :, None]).mean([-1, -2])\n return best_metric, best_perm # shape [batch], shape [batch, spk]\n\n\ndef _find_best_perm_by_exhuastive_method(\n metric_mtx: torch.Tensor,\n eval_func: Union[torch.min, torch.max],\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Solves the linear sum assignment problem using exhuastive method, i.e. 
exhuastively calculates the metric\n values of all possible permutations, and returns the best metric values and the corresponding permutations.\n\n Args:\n metric_mtx:\n the metric matrix, shape [batch_size, spk_num, spk_num]\n eval_func:\n the function to reduce the metric values of different the permutations\n\n Returns:\n best_metric:\n shape [batch]\n best_perm:\n shape [batch, spk]\n \"\"\"\n # create/read/cache the permutations and its indexes\n # reading from cache would be much faster than creating in CPU then moving to GPU\n batch_size, spk_num = metric_mtx.shape[:2]\n key = str(spk_num) + str(metric_mtx.device)\n if key not in _ps_dict:\n # ps: all the permutations, shape [spk_num, perm_num]\n # ps: In i-th permutation, the predcition corresponds to the j-th target is ps[j,i]\n ps = torch.tensor(list(permutations(range(spk_num))), device=metric_mtx.device).T\n _ps_dict[key] = ps\n else:\n ps = _ps_dict[key] # all the permutations, shape [spk_num, perm_num]\n\n # find the metric of each permutation\n perm_num = ps.shape[-1]\n # shape [batch_size, spk_num, perm_num]\n bps = ps[None, ...].expand(batch_size, spk_num, perm_num)\n # shape [batch_size, spk_num, perm_num]\n metric_of_ps_details = torch.gather(metric_mtx, 2, bps)\n # shape [batch_size, perm_num]\n metric_of_ps = metric_of_ps_details.mean(dim=1)\n\n # find the best metric and best permutation\n best_metric, best_indexes = eval_func(metric_of_ps, dim=1)\n best_indexes = best_indexes.detach()\n best_perm = ps.T[best_indexes, :]\n return best_metric, best_perm # shape [batch], shape [batch, spk]\n\n\ndef pit(\n preds: torch.Tensor, target: torch.Tensor, metric_func: Callable, eval_func: str = \"max\", **kwargs: Dict[str, Any]\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Permutation invariant training (PIT). The PIT implements the famous Permutation Invariant Training method.\n\n [1] in speech separation field in order to calculate audio metrics in a permutation invariant way.\n\n Args:\n preds:\n shape [batch, spk, ...]\n target:\n shape [batch, spk, ...]\n metric_func:\n a metric function accept a batch of target and estimate,\n i.e. metric_func(preds[:, i, ...], target[:, j, ...]), and returns a batch of metric tensors [batch]\n eval_func:\n the function to find the best permutation, can be 'min' or 'max',\n i.e. 
the smaller the better or the larger the better.\n kwargs:\n additional args for metric_func\n\n Returns:\n best_metric of shape [batch],\n best_perm of shape [batch]\n\n Example:\n >>> from torchmetrics.functional.audio import si_sdr\n >>> # [batch, spk, time]\n >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])\n >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])\n >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')\n >>> best_metric\n tensor([-5.1091])\n >>> best_perm\n tensor([[0, 1]])\n >>> pit_permutate(preds, best_perm)\n tensor([[[-0.0579, 0.3560, -0.9604],\n [-0.1719, 0.3205, 0.2951]]])\n\n Reference:\n [1]\t`Permutation Invariant Training of Deep Models`_\n \"\"\"\n _check_same_shape(preds, target)\n if eval_func not in [\"max\", \"min\"]:\n raise ValueError(f'eval_func can only be \"max\" or \"min\" but got {eval_func}')\n if target.ndim < 2:\n raise ValueError(f\"Inputs must be of shape [batch, spk, ...], got {target.shape} and {preds.shape} instead\")\n\n # calculate the metric matrix\n batch_size, spk_num = target.shape[0:2]\n metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=preds.dtype, device=target.device)\n for t in range(spk_num):\n for e in range(spk_num):\n metric_mtx[:, t, e] = metric_func(preds[:, e, ...], target[:, t, ...], **kwargs)\n\n # find best\n op = torch.max if eval_func == \"max\" else torch.min\n if spk_num < 3 or not _SCIPY_AVAILABLE:\n if spk_num >= 3 and not _SCIPY_AVAILABLE:\n warnings.warn(\n f\"In pit metric for speaker-num {spk_num}>3, we recommend installing scipy for better performance\"\n )\n\n best_metric, best_perm = _find_best_perm_by_exhuastive_method(metric_mtx, op)\n else:\n best_metric, best_perm = _find_best_perm_by_linear_sum_assignment(metric_mtx, op)\n\n return best_metric, best_perm\n\n\ndef pit_permutate(preds: Tensor, perm: Tensor) -> Tensor:\n \"\"\"permutate estimate according to perm.\n\n Args:\n preds (Tensor): the estimates you want to permutate, shape [batch, spk, ...]\n perm (Tensor): the permutation returned from pit, shape [batch, spk]\n\n Returns:\n Tensor: the permutated version of estimate\n\n Example:\n >>> from torchmetrics.functional.audio import si_sdr\n >>> # [batch, spk, time]\n >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])\n >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])\n >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')\n >>> best_metric\n tensor([-5.1091])\n >>> best_perm\n tensor([[0, 1]])\n >>> pit_permutate(preds, best_perm)\n tensor([[[-0.0579, 0.3560, -0.9604],\n [-0.1719, 0.3205, 0.2951]]])\n \"\"\"\n preds_pmted = torch.stack([torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)])\n return preds_pmted\n", "path": "torchmetrics/functional/audio/pit.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport warnings\nfrom itertools import 
permutations\nfrom typing import Any, Callable, Dict, Tuple, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\nfrom torchmetrics.utilities.imports import _SCIPY_AVAILABLE\n\n# _ps_dict: cache of permutations\n# it's necessary to cache it, otherwise it will consume a large amount of time\n_ps_dict: dict = {} # _ps_dict[str(spk_num)+str(device)] = permutations\n\n\ndef _find_best_perm_by_linear_sum_assignment(\n metric_mtx: torch.Tensor,\n eval_func: Union[torch.min, torch.max],\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Solves the linear sum assignment problem using scipy, and returns the best metric values and the\n corresponding permutations.\n\n Args:\n metric_mtx:\n the metric matrix, shape [batch_size, spk_num, spk_num]\n eval_func:\n the function to reduce the metric values of different the permutations\n\n Returns:\n best_metric:\n shape [batch]\n best_perm:\n shape [batch, spk]\n \"\"\"\n from scipy.optimize import linear_sum_assignment\n\n mmtx = metric_mtx.detach().cpu()\n best_perm = torch.tensor([linear_sum_assignment(pwm, eval_func == torch.max)[1] for pwm in mmtx])\n best_perm = best_perm.to(metric_mtx.device)\n best_metric = torch.gather(metric_mtx, 2, best_perm[:, :, None]).mean([-1, -2])\n return best_metric, best_perm # shape [batch], shape [batch, spk]\n\n\ndef _find_best_perm_by_exhuastive_method(\n metric_mtx: torch.Tensor,\n eval_func: Union[torch.min, torch.max],\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Solves the linear sum assignment problem using exhuastive method, i.e. exhuastively calculates the metric\n values of all possible permutations, and returns the best metric values and the corresponding permutations.\n\n Args:\n metric_mtx:\n the metric matrix, shape [batch_size, spk_num, spk_num]\n eval_func:\n the function to reduce the metric values of different the permutations\n\n Returns:\n best_metric:\n shape [batch]\n best_perm:\n shape [batch, spk]\n \"\"\"\n # create/read/cache the permutations and its indexes\n # reading from cache would be much faster than creating in CPU then moving to GPU\n batch_size, spk_num = metric_mtx.shape[:2]\n key = str(spk_num) + str(metric_mtx.device)\n if key not in _ps_dict:\n # ps: all the permutations, shape [spk_num, perm_num]\n # ps: In i-th permutation, the predcition corresponds to the j-th target is ps[j,i]\n ps = torch.tensor(list(permutations(range(spk_num))), device=metric_mtx.device).T\n _ps_dict[key] = ps\n else:\n ps = _ps_dict[key] # all the permutations, shape [spk_num, perm_num]\n\n # find the metric of each permutation\n perm_num = ps.shape[-1]\n # shape [batch_size, spk_num, perm_num]\n bps = ps[None, ...].expand(batch_size, spk_num, perm_num)\n # shape [batch_size, spk_num, perm_num]\n metric_of_ps_details = torch.gather(metric_mtx, 2, bps)\n # shape [batch_size, perm_num]\n metric_of_ps = metric_of_ps_details.mean(dim=1)\n\n # find the best metric and best permutation\n best_metric, best_indexes = eval_func(metric_of_ps, dim=1)\n best_indexes = best_indexes.detach()\n best_perm = ps.T[best_indexes, :]\n return best_metric, best_perm # shape [batch], shape [batch, spk]\n\n\ndef pit(\n preds: torch.Tensor, target: torch.Tensor, metric_func: Callable, eval_func: str = \"max\", **kwargs: Dict[str, Any]\n) -> Tuple[Tensor, Tensor]:\n \"\"\"Permutation invariant training (PIT). 
The PIT implements the famous Permutation Invariant Training method.\n\n [1] in speech separation field in order to calculate audio metrics in a permutation invariant way.\n\n Args:\n preds:\n shape [batch, spk, ...]\n target:\n shape [batch, spk, ...]\n metric_func:\n a metric function accept a batch of target and estimate,\n i.e. metric_func(preds[:, i, ...], target[:, j, ...]), and returns a batch of metric tensors [batch]\n eval_func:\n the function to find the best permutation, can be 'min' or 'max',\n i.e. the smaller the better or the larger the better.\n kwargs:\n additional args for metric_func\n\n Returns:\n best_metric of shape [batch],\n best_perm of shape [batch]\n\n Example:\n >>> from torchmetrics.functional.audio import si_sdr\n >>> # [batch, spk, time]\n >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])\n >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])\n >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')\n >>> best_metric\n tensor([-5.1091])\n >>> best_perm\n tensor([[0, 1]])\n >>> pit_permutate(preds, best_perm)\n tensor([[[-0.0579, 0.3560, -0.9604],\n [-0.1719, 0.3205, 0.2951]]])\n\n Reference:\n [1]\t`Permutation Invariant Training of Deep Models`_\n \"\"\"\n _check_same_shape(preds, target)\n if eval_func not in [\"max\", \"min\"]:\n raise ValueError(f'eval_func can only be \"max\" or \"min\" but got {eval_func}')\n if target.ndim < 2:\n raise ValueError(f\"Inputs must be of shape [batch, spk, ...], got {target.shape} and {preds.shape} instead\")\n\n # calculate the metric matrix\n batch_size, spk_num = target.shape[0:2]\n metric_mtx = None\n for target_idx in range(spk_num): # we have spk_num speeches in target in each sample\n for preds_idx in range(spk_num): # we have spk_num speeches in preds in each sample\n if metric_mtx is not None:\n metric_mtx[:, target_idx, preds_idx] = metric_func(\n preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs\n )\n else:\n first_ele = metric_func(preds[:, preds_idx, ...], target[:, target_idx, ...], **kwargs)\n metric_mtx = torch.empty((batch_size, spk_num, spk_num), dtype=first_ele.dtype, device=first_ele.device)\n metric_mtx[:, target_idx, preds_idx] = first_ele\n\n # find best\n op = torch.max if eval_func == \"max\" else torch.min\n if spk_num < 3 or not _SCIPY_AVAILABLE:\n if spk_num >= 3 and not _SCIPY_AVAILABLE:\n warnings.warn(\n f\"In pit metric for speaker-num {spk_num}>3, we recommend installing scipy for better performance\"\n )\n\n best_metric, best_perm = _find_best_perm_by_exhuastive_method(metric_mtx, op)\n else:\n best_metric, best_perm = _find_best_perm_by_linear_sum_assignment(metric_mtx, op)\n\n return best_metric, best_perm\n\n\ndef pit_permutate(preds: Tensor, perm: Tensor) -> Tensor:\n \"\"\"permutate estimate according to perm.\n\n Args:\n preds (Tensor): the estimates you want to permutate, shape [batch, spk, ...]\n perm (Tensor): the permutation returned from pit, shape [batch, spk]\n\n Returns:\n Tensor: the permutated version of estimate\n\n Example:\n >>> from torchmetrics.functional.audio import si_sdr\n >>> # [batch, spk, time]\n >>> preds = torch.tensor([[[-0.0579, 0.3560, -0.9604], [-0.1719, 0.3205, 0.2951]]])\n >>> target = torch.tensor([[[ 1.0958, -0.1648, 0.5228], [-0.4100, 1.1942, -0.5103]]])\n >>> best_metric, best_perm = pit(preds, target, si_sdr, 'max')\n >>> best_metric\n tensor([-5.1091])\n >>> best_perm\n tensor([[0, 1]])\n >>> pit_permutate(preds, best_perm)\n tensor([[[-0.0579, 0.3560, 
-0.9604],\n [-0.1719, 0.3205, 0.2951]]])\n \"\"\"\n preds_pmted = torch.stack([torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)])\n return preds_pmted\n", "path": "torchmetrics/functional/audio/pit.py"}]} | 3,176 | 362 |
gh_patches_debug_26341 | rasdani/github-patches | git_diff | freqtrade__freqtrade-1896 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--help produces traceback
It seems I broke it somehow.
`python3 freqtrade hyperopt --help`
produces this traceback:
```
Fatal exception!
Traceback (most recent call last):
File "/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/main.py", line 42, in main
args: Namespace = arguments.get_parsed_arg()
File "/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/arguments.py", line 46, in get_parsed_arg
self.parsed_arg = self.parse_args()
File "/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/arguments.py", line 54, in parse_args
parsed_arg = self.parser.parse_args(self.args)
File "/usr/lib/python3.6/argparse.py", line 1743, in parse_args
args, argv = self.parse_known_args(args, namespace)
File "/usr/lib/python3.6/argparse.py", line 1775, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/usr/lib/python3.6/argparse.py", line 1963, in _parse_known_args
positionals_end_index = consume_positionals(start_index)
File "/usr/lib/python3.6/argparse.py", line 1940, in consume_positionals
take_action(action, args)
File "/usr/lib/python3.6/argparse.py", line 1849, in take_action
action(self, namespace, argument_values, option_string)
File "/usr/lib/python3.6/argparse.py", line 1146, in __call__
subnamespace, arg_strings = parser.parse_known_args(arg_strings, None)
File "/usr/lib/python3.6/argparse.py", line 1775, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/usr/lib/python3.6/argparse.py", line 1981, in _parse_known_args
start_index = consume_optional(start_index)
File "/usr/lib/python3.6/argparse.py", line 1921, in consume_optional
take_action(action, args, option_string)
File "/usr/lib/python3.6/argparse.py", line 1849, in take_action
action(self, namespace, argument_values, option_string)
File "/usr/lib/python3.6/argparse.py", line 1034, in __call__
parser.exit()
File "/usr/lib/python3.6/argparse.py", line 2389, in exit
_sys.exit(status)
SystemExit: 0
```
--- END ISSUE ---
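
For context on the traceback above: argparse's `--help` action exits by raising `SystemExit(0)`, which derives from `BaseException` but not from `Exception`, so the blanket `except BaseException:` in the `freqtrade/main.py` listing further down also swallows the clean help exit and logs it as fatal. A minimal, stdlib-only sketch (assumed names, not freqtrade code) of the failure mode and the usual remedy:

```python
import argparse
import sys


def buggy_main(argv):
    """Mimics a bare `except BaseException`: --help is reported as a fatal error."""
    parser = argparse.ArgumentParser(prog="demo")
    parser.add_argument("--value", type=int, default=0)
    try:
        args = parser.parse_args(argv)
        print("parsed:", args)
    except BaseException:  # also catches SystemExit(0) raised by --help
        print("Fatal exception!", file=sys.stderr)


def fixed_main(argv):
    """Catches only Exception and lets SystemExit propagate as a clean exit."""
    parser = argparse.ArgumentParser(prog="demo")
    parser.add_argument("--value", type=int, default=0)
    try:
        args = parser.parse_args(argv)
        print("parsed:", args)
    except SystemExit as exc:
        raise SystemExit(exc.code)  # a clean exit stays a clean exit
    except Exception:
        print("Fatal exception!", file=sys.stderr)


if __name__ == "__main__":
    buggy_main(["--help"])  # prints help, then the bogus "Fatal exception!"
    fixed_main(["--help"])  # prints help again, then exits with status 0
```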
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/main.py`
Content:
```
1 #!/usr/bin/env python3
2 """
3 Main Freqtrade bot script.
4 Read the documentation to know what cli arguments you need.
5 """
6
7 import sys
8 # check min. python version
9 if sys.version_info < (3, 6):
10 sys.exit("Freqtrade requires Python version >= 3.6")
11
12 # flake8: noqa E402
13 import logging
14 from argparse import Namespace
15 from typing import List
16
17 from freqtrade import OperationalException
18 from freqtrade.arguments import Arguments
19 from freqtrade.configuration import set_loggers
20 from freqtrade.worker import Worker
21
22
23 logger = logging.getLogger('freqtrade')
24
25
26 def main(sysargv: List[str] = None) -> None:
27 """
28 This function will initiate the bot and start the trading loop.
29 :return: None
30 """
31
32 try:
33 set_loggers()
34
35 worker = None
36 return_code = 1
37
38 arguments = Arguments(
39 sysargv,
40 'Free, open source crypto trading bot'
41 )
42 args: Namespace = arguments.get_parsed_arg()
43
44 # A subcommand has been issued.
45 # Means if Backtesting or Hyperopt have been called we exit the bot
46 if hasattr(args, 'func'):
47 args.func(args)
48 # TODO: fetch return_code as returned by the command function here
49 return_code = 0
50 else:
51 # Load and run worker
52 worker = Worker(args)
53 worker.run()
54
55 except KeyboardInterrupt:
56 logger.info('SIGINT received, aborting ...')
57 return_code = 0
58 except OperationalException as e:
59 logger.error(str(e))
60 return_code = 2
61 except BaseException:
62 logger.exception('Fatal exception!')
63 finally:
64 if worker:
65 worker.exit()
66 sys.exit(return_code)
67
68
69 if __name__ == '__main__':
70 main()
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/main.py b/freqtrade/main.py
--- a/freqtrade/main.py
+++ b/freqtrade/main.py
@@ -12,7 +12,7 @@
# flake8: noqa E402
import logging
from argparse import Namespace
-from typing import List
+from typing import Any, List
from freqtrade import OperationalException
from freqtrade.arguments import Arguments
@@ -29,12 +29,11 @@
:return: None
"""
+ return_code: Any = 1
+ worker = None
try:
set_loggers()
- worker = None
- return_code = 1
-
arguments = Arguments(
sysargv,
'Free, open source crypto trading bot'
@@ -52,13 +51,15 @@
worker = Worker(args)
worker.run()
+ except SystemExit as e:
+ return_code = e
except KeyboardInterrupt:
logger.info('SIGINT received, aborting ...')
return_code = 0
except OperationalException as e:
logger.error(str(e))
return_code = 2
- except BaseException:
+ except Exception:
logger.exception('Fatal exception!')
finally:
if worker:
| {"golden_diff": "diff --git a/freqtrade/main.py b/freqtrade/main.py\n--- a/freqtrade/main.py\n+++ b/freqtrade/main.py\n@@ -12,7 +12,7 @@\n # flake8: noqa E402\n import logging\n from argparse import Namespace\n-from typing import List\n+from typing import Any, List\n \n from freqtrade import OperationalException\n from freqtrade.arguments import Arguments\n@@ -29,12 +29,11 @@\n :return: None\n \"\"\"\n \n+ return_code: Any = 1\n+ worker = None\n try:\n set_loggers()\n \n- worker = None\n- return_code = 1\n-\n arguments = Arguments(\n sysargv,\n 'Free, open source crypto trading bot'\n@@ -52,13 +51,15 @@\n worker = Worker(args)\n worker.run()\n \n+ except SystemExit as e:\n+ return_code = e\n except KeyboardInterrupt:\n logger.info('SIGINT received, aborting ...')\n return_code = 0\n except OperationalException as e:\n logger.error(str(e))\n return_code = 2\n- except BaseException:\n+ except Exception:\n logger.exception('Fatal exception!')\n finally:\n if worker:\n", "issue": "--help produces traceback\nSeems I broke it somehow.\r\n\r\n`python3 freqtrade hyperopt --help`\r\nproduces traceback \r\n```\r\nFatal exception!\r\nTraceback (most recent call last):\r\n File \"/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/main.py\", line 42, in main\r\n args: Namespace = arguments.get_parsed_arg()\r\n File \"/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/arguments.py\", line 46, in get_parsed_arg\r\n self.parsed_arg = self.parse_args()\r\n File \"/home/user/freqtrade-wrk/github-hroff-1902/freqtrade/freqtrade/arguments.py\", line 54, in parse_args\r\n parsed_arg = self.parser.parse_args(self.args)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1743, in parse_args\r\n args, argv = self.parse_known_args(args, namespace)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1775, in parse_known_args\r\n namespace, args = self._parse_known_args(args, namespace)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1963, in _parse_known_args\r\n positionals_end_index = consume_positionals(start_index)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1940, in consume_positionals\r\n take_action(action, args)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1849, in take_action\r\n action(self, namespace, argument_values, option_string)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1146, in __call__\r\n subnamespace, arg_strings = parser.parse_known_args(arg_strings, None)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1775, in parse_known_args\r\n namespace, args = self._parse_known_args(args, namespace)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1981, in _parse_known_args\r\n start_index = consume_optional(start_index)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1921, in consume_optional\r\n take_action(action, args, option_string)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1849, in take_action\r\n action(self, namespace, argument_values, option_string)\r\n File \"/usr/lib/python3.6/argparse.py\", line 1034, in __call__\r\n parser.exit()\r\n File \"/usr/lib/python3.6/argparse.py\", line 2389, in exit\r\n _sys.exit(status)\r\nSystemExit: 0\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"\nMain Freqtrade bot script.\nRead the documentation to know what cli arguments you need.\n\"\"\"\n\nimport sys\n# check min. 
python version\nif sys.version_info < (3, 6):\n sys.exit(\"Freqtrade requires Python version >= 3.6\")\n\n# flake8: noqa E402\nimport logging\nfrom argparse import Namespace\nfrom typing import List\n\nfrom freqtrade import OperationalException\nfrom freqtrade.arguments import Arguments\nfrom freqtrade.configuration import set_loggers\nfrom freqtrade.worker import Worker\n\n\nlogger = logging.getLogger('freqtrade')\n\n\ndef main(sysargv: List[str] = None) -> None:\n \"\"\"\n This function will initiate the bot and start the trading loop.\n :return: None\n \"\"\"\n\n try:\n set_loggers()\n\n worker = None\n return_code = 1\n\n arguments = Arguments(\n sysargv,\n 'Free, open source crypto trading bot'\n )\n args: Namespace = arguments.get_parsed_arg()\n\n # A subcommand has been issued.\n # Means if Backtesting or Hyperopt have been called we exit the bot\n if hasattr(args, 'func'):\n args.func(args)\n # TODO: fetch return_code as returned by the command function here\n return_code = 0\n else:\n # Load and run worker\n worker = Worker(args)\n worker.run()\n\n except KeyboardInterrupt:\n logger.info('SIGINT received, aborting ...')\n return_code = 0\n except OperationalException as e:\n logger.error(str(e))\n return_code = 2\n except BaseException:\n logger.exception('Fatal exception!')\n finally:\n if worker:\n worker.exit()\n sys.exit(return_code)\n\n\nif __name__ == '__main__':\n main()\n", "path": "freqtrade/main.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\"\"\"\nMain Freqtrade bot script.\nRead the documentation to know what cli arguments you need.\n\"\"\"\n\nimport sys\n# check min. python version\nif sys.version_info < (3, 6):\n sys.exit(\"Freqtrade requires Python version >= 3.6\")\n\n# flake8: noqa E402\nimport logging\nfrom argparse import Namespace\nfrom typing import Any, List\n\nfrom freqtrade import OperationalException\nfrom freqtrade.arguments import Arguments\nfrom freqtrade.configuration import set_loggers\nfrom freqtrade.worker import Worker\n\n\nlogger = logging.getLogger('freqtrade')\n\n\ndef main(sysargv: List[str] = None) -> None:\n \"\"\"\n This function will initiate the bot and start the trading loop.\n :return: None\n \"\"\"\n\n return_code: Any = 1\n worker = None\n try:\n set_loggers()\n\n arguments = Arguments(\n sysargv,\n 'Free, open source crypto trading bot'\n )\n args: Namespace = arguments.get_parsed_arg()\n\n # A subcommand has been issued.\n # Means if Backtesting or Hyperopt have been called we exit the bot\n if hasattr(args, 'func'):\n args.func(args)\n # TODO: fetch return_code as returned by the command function here\n return_code = 0\n else:\n # Load and run worker\n worker = Worker(args)\n worker.run()\n\n except SystemExit as e:\n return_code = e\n except KeyboardInterrupt:\n logger.info('SIGINT received, aborting ...')\n return_code = 0\n except OperationalException as e:\n logger.error(str(e))\n return_code = 2\n except Exception:\n logger.exception('Fatal exception!')\n finally:\n if worker:\n worker.exit()\n sys.exit(return_code)\n\n\nif __name__ == '__main__':\n main()\n", "path": "freqtrade/main.py"}]} | 1,410 | 280 |
gh_patches_debug_31268 | rasdani/github-patches | git_diff | kornia__kornia-2131 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Weird behavior of LongestMaxSize
### Describe the bug
Hello, me again.

I might be doing something wrong with the way I use kornia augmentations; please let me know if that is the case.

I was expecting `LongestMaxSize` in kornia to behave similarly to the albumentations implementation: I can pass images of different shapes to the transformation function and get back images of different shapes but similar aspect ratios, with the longest side equal to the value given to `LongestMaxSize`.

See below a small code sample that shows the behavior that confuses me.
### Reproduction steps
```bash
import kornia.augmentation as K
a = torch.ones((512, 256))
b = torch.ones((512, 756))
print("first try")
transfo = K.LongestMaxSize(max_size=256, p=1.)
print(transfo(a).shape)
print(transfo(b).shape)
print("second try")
a = torch.ones((512, 256))
b = torch.ones((512, 756))
transfo = K.LongestMaxSize(max_size=256, p=1.)
print(transfo(b).shape)
print(transfo(a).shape)
Outputs:
first try
torch.Size([1, 1, 256, 128])
torch.Size([1, 1, 256, 128])
second try
torch.Size([1, 1, 173, 256])
torch.Size([1, 1, 173, 256])
```
### Expected behavior
I would expect the transformation to produce the same output shapes no matter the order in which the elements are processed,
i.e. `transfo(a).shape == torch.Size([1, 1, 256, 128])` and `transfo(b).shape == torch.Size([1, 1, 173, 256])`.

Am I missing something here?
### Environment
```shell
kornia='0.6.9'
torch='1.12.1+cu113'
```
--- END ISSUE ---
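
A note on what the numbers above suggest: in the `ResizeGenerator` listing further down, `forward()` overwrites `self.output_size` with the tuple computed from the first input it sees, so later inputs of a different shape reuse that cached size. The sketch below (a hypothetical `Resizer` pair, not kornia's actual API) reproduces that pattern and the stateless alternative under this assumption:

```python
from typing import Tuple, Union


class StatefulResizer:
    """Buggy pattern: the per-input size is cached on self, so the first input wins."""

    def __init__(self, max_size: int) -> None:
        self.output_size: Union[int, Tuple[int, int]] = max_size

    def __call__(self, h: int, w: int) -> Tuple[int, int]:
        if isinstance(self.output_size, int):
            scale = self.output_size / max(h, w)
            # BUG: overwriting the configuration makes later calls reuse this shape.
            self.output_size = (round(h * scale), round(w * scale))
        return self.output_size


class StatelessResizer:
    """Fixed pattern: compute the size into a local variable on every call."""

    def __init__(self, max_size: int) -> None:
        self.max_size = max_size

    def __call__(self, h: int, w: int) -> Tuple[int, int]:
        scale = self.max_size / max(h, w)
        return (round(h * scale), round(w * scale))


if __name__ == "__main__":
    buggy = StatefulResizer(256)
    print(buggy(512, 256), buggy(512, 756))  # (256, 128) (256, 128) <- second is wrong
    fixed = StatelessResizer(256)
    print(fixed(512, 256), fixed(512, 756))  # (256, 128) (173, 256)
```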
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/augmentation/random_generator/_2d/resize.py`
Content:
```
1 from typing import Dict, Tuple, Union
2
3 import torch
4
5 from kornia.augmentation.random_generator.base import RandomGeneratorBase
6 from kornia.augmentation.utils import _common_param_check
7 from kornia.core import Device, Tensor, tensor
8 from kornia.geometry.bbox import bbox_generator
9 from kornia.geometry.transform.affwarp import _side_to_image_size
10
11
12 class ResizeGenerator(RandomGeneratorBase):
13 r"""Get parameters for ```resize``` transformation for resize transform.
14
15 Args:
16 resize_to: Desired output size of the crop, like (h, w).
17 side: Which side to resize if `resize_to` is only of type int.
18
19 Returns:
20 parameters to be passed for transformation.
21 - src (Tensor): cropping bounding boxes with a shape of (B, 4, 2).
22 - dst (Tensor): output bounding boxes with a shape (B, 4, 2).
23 - input_size (Tensor): (h, w) from batch input.
24 - resize_to (tuple): new (h, w) for batch input.
25
26 Note:
27 The generated random numbers are not reproducible across different devices and dtypes. By default,
28 the parameters will be generated on CPU in float32. This can be changed by calling
29 ``self.set_rng_device_and_dtype(device="cuda", dtype=torch.float64)``.
30 """
31
32 def __init__(self, resize_to: Union[int, Tuple[int, int]], side: str = "short") -> None:
33 super().__init__()
34 self.output_size = resize_to
35 self.side = side
36
37 def __repr__(self) -> str:
38 repr = f"output_size={self.output_size}"
39 return repr
40
41 def make_samplers(self, device: Device, dtype: torch.dtype) -> None:
42 self.device = device
43 self.dtype = dtype
44 pass
45
46 def forward(self, batch_shape: torch.Size, same_on_batch: bool = False) -> Dict[str, Tensor]:
47 batch_size = batch_shape[0]
48 _common_param_check(batch_size, same_on_batch)
49 _device = self.device
50 _dtype = self.dtype
51
52 if batch_size == 0:
53 return dict(
54 src=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),
55 dst=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),
56 )
57
58 input_size = h, w = (batch_shape[-2], batch_shape[-1])
59
60 src = bbox_generator(
61 tensor(0, device=_device, dtype=_dtype),
62 tensor(0, device=_device, dtype=_dtype),
63 tensor(input_size[1], device=_device, dtype=_dtype),
64 tensor(input_size[0], device=_device, dtype=_dtype),
65 ).repeat(batch_size, 1, 1)
66
67 if isinstance(self.output_size, int):
68 aspect_ratio = w / h
69 self.output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)
70
71 if not (
72 len(self.output_size) == 2
73 and isinstance(self.output_size[0], (int,))
74 and isinstance(self.output_size[1], (int,))
75 and self.output_size[0] > 0
76 and self.output_size[1] > 0
77 ):
78 raise AssertionError(f"`resize_to` must be a tuple of 2 positive integers. Got {self.output_size}.")
79
80 dst = bbox_generator(
81 tensor(0, device=_device, dtype=_dtype),
82 tensor(0, device=_device, dtype=_dtype),
83 tensor(self.output_size[1], device=_device, dtype=_dtype),
84 tensor(self.output_size[0], device=_device, dtype=_dtype),
85 ).repeat(batch_size, 1, 1)
86
87 _input_size = tensor(input_size, device=_device, dtype=torch.long).expand(batch_size, -1)
88 _output_size = tensor(self.output_size, device=_device, dtype=torch.long).expand(batch_size, -1)
89
90 return dict(src=src, dst=dst, input_size=_input_size, output_size=_output_size)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/augmentation/random_generator/_2d/resize.py b/kornia/augmentation/random_generator/_2d/resize.py
--- a/kornia/augmentation/random_generator/_2d/resize.py
+++ b/kornia/augmentation/random_generator/_2d/resize.py
@@ -66,25 +66,27 @@
if isinstance(self.output_size, int):
aspect_ratio = w / h
- self.output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)
+ output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)
+ else:
+ output_size = self.output_size
if not (
- len(self.output_size) == 2
- and isinstance(self.output_size[0], (int,))
- and isinstance(self.output_size[1], (int,))
- and self.output_size[0] > 0
- and self.output_size[1] > 0
+ len(output_size) == 2
+ and isinstance(output_size[0], (int,))
+ and isinstance(output_size[1], (int,))
+ and output_size[0] > 0
+ and output_size[1] > 0
):
- raise AssertionError(f"`resize_to` must be a tuple of 2 positive integers. Got {self.output_size}.")
+ raise AssertionError(f"`resize_to` must be a tuple of 2 positive integers. Got {output_size}.")
dst = bbox_generator(
tensor(0, device=_device, dtype=_dtype),
tensor(0, device=_device, dtype=_dtype),
- tensor(self.output_size[1], device=_device, dtype=_dtype),
- tensor(self.output_size[0], device=_device, dtype=_dtype),
+ tensor(output_size[1], device=_device, dtype=_dtype),
+ tensor(output_size[0], device=_device, dtype=_dtype),
).repeat(batch_size, 1, 1)
_input_size = tensor(input_size, device=_device, dtype=torch.long).expand(batch_size, -1)
- _output_size = tensor(self.output_size, device=_device, dtype=torch.long).expand(batch_size, -1)
+ _output_size = tensor(output_size, device=_device, dtype=torch.long).expand(batch_size, -1)
return dict(src=src, dst=dst, input_size=_input_size, output_size=_output_size)
| {"golden_diff": "diff --git a/kornia/augmentation/random_generator/_2d/resize.py b/kornia/augmentation/random_generator/_2d/resize.py\n--- a/kornia/augmentation/random_generator/_2d/resize.py\n+++ b/kornia/augmentation/random_generator/_2d/resize.py\n@@ -66,25 +66,27 @@\n \n if isinstance(self.output_size, int):\n aspect_ratio = w / h\n- self.output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)\n+ output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)\n+ else:\n+ output_size = self.output_size\n \n if not (\n- len(self.output_size) == 2\n- and isinstance(self.output_size[0], (int,))\n- and isinstance(self.output_size[1], (int,))\n- and self.output_size[0] > 0\n- and self.output_size[1] > 0\n+ len(output_size) == 2\n+ and isinstance(output_size[0], (int,))\n+ and isinstance(output_size[1], (int,))\n+ and output_size[0] > 0\n+ and output_size[1] > 0\n ):\n- raise AssertionError(f\"`resize_to` must be a tuple of 2 positive integers. Got {self.output_size}.\")\n+ raise AssertionError(f\"`resize_to` must be a tuple of 2 positive integers. Got {output_size}.\")\n \n dst = bbox_generator(\n tensor(0, device=_device, dtype=_dtype),\n tensor(0, device=_device, dtype=_dtype),\n- tensor(self.output_size[1], device=_device, dtype=_dtype),\n- tensor(self.output_size[0], device=_device, dtype=_dtype),\n+ tensor(output_size[1], device=_device, dtype=_dtype),\n+ tensor(output_size[0], device=_device, dtype=_dtype),\n ).repeat(batch_size, 1, 1)\n \n _input_size = tensor(input_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n- _output_size = tensor(self.output_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n+ _output_size = tensor(output_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n \n return dict(src=src, dst=dst, input_size=_input_size, output_size=_output_size)\n", "issue": "Weird behavior of LongestMaxSize\n### Describe the bug\r\n\r\nHello me again,\r\n\r\nI might be doing something wrong with the way I use kornia augmentations, please let me know if it is the case.\r\n\r\nI was expecting `LongestMaxSize` in kornia to perform similarily as the albumentation implementation. Meaning that I can throw any images with different shapes to the the transformation function and get an image with different shapes but similar ratios. 
The largest size being equal to the value given to `LongestMaxSize`.\r\n\r\nSee bellow a small code sample that disturbs me.\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\nimport kornia.augmentation as K\r\na = torch.ones((512, 256))\r\nb = torch.ones((512, 756))\r\n\r\nprint(\"first try\")\r\ntransfo = K.LongestMaxSize(max_size=256, p=1.)\r\n\r\nprint(transfo(a).shape)\r\nprint(transfo(b).shape)\r\n\r\nprint(\"second try\")\r\n\r\na = torch.ones((512, 256))\r\nb = torch.ones((512, 756))\r\n\r\ntransfo = K.LongestMaxSize(max_size=256, p=1.)\r\nprint(transfo(b).shape)\r\nprint(transfo(a).shape)\r\n\r\nOutputs:\r\nfirst try\r\ntorch.Size([1, 1, 256, 128])\r\ntorch.Size([1, 1, 256, 128])\r\nsecond try\r\ntorch.Size([1, 1, 173, 256])\r\ntorch.Size([1, 1, 173, 256])\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nI would expect to have the same values for the transformations no matter the order of the elements.\r\n\r\nie `transfo(a).shape == torch.Size([1, 1, 256, 128])` and `transfo(b).shape ==torch.Size([1, 1, 173, 256])`\r\n\r\nAm I missing something here ?\r\n\r\n### Environment\r\n\r\n```shell\r\nkornia='0.6.9'\r\ntorch='1.12.1+cu113'\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "from typing import Dict, Tuple, Union\n\nimport torch\n\nfrom kornia.augmentation.random_generator.base import RandomGeneratorBase\nfrom kornia.augmentation.utils import _common_param_check\nfrom kornia.core import Device, Tensor, tensor\nfrom kornia.geometry.bbox import bbox_generator\nfrom kornia.geometry.transform.affwarp import _side_to_image_size\n\n\nclass ResizeGenerator(RandomGeneratorBase):\n r\"\"\"Get parameters for ```resize``` transformation for resize transform.\n\n Args:\n resize_to: Desired output size of the crop, like (h, w).\n side: Which side to resize if `resize_to` is only of type int.\n\n Returns:\n parameters to be passed for transformation.\n - src (Tensor): cropping bounding boxes with a shape of (B, 4, 2).\n - dst (Tensor): output bounding boxes with a shape (B, 4, 2).\n - input_size (Tensor): (h, w) from batch input.\n - resize_to (tuple): new (h, w) for batch input.\n\n Note:\n The generated random numbers are not reproducible across different devices and dtypes. By default,\n the parameters will be generated on CPU in float32. 
This can be changed by calling\n ``self.set_rng_device_and_dtype(device=\"cuda\", dtype=torch.float64)``.\n \"\"\"\n\n def __init__(self, resize_to: Union[int, Tuple[int, int]], side: str = \"short\") -> None:\n super().__init__()\n self.output_size = resize_to\n self.side = side\n\n def __repr__(self) -> str:\n repr = f\"output_size={self.output_size}\"\n return repr\n\n def make_samplers(self, device: Device, dtype: torch.dtype) -> None:\n self.device = device\n self.dtype = dtype\n pass\n\n def forward(self, batch_shape: torch.Size, same_on_batch: bool = False) -> Dict[str, Tensor]:\n batch_size = batch_shape[0]\n _common_param_check(batch_size, same_on_batch)\n _device = self.device\n _dtype = self.dtype\n\n if batch_size == 0:\n return dict(\n src=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),\n dst=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),\n )\n\n input_size = h, w = (batch_shape[-2], batch_shape[-1])\n\n src = bbox_generator(\n tensor(0, device=_device, dtype=_dtype),\n tensor(0, device=_device, dtype=_dtype),\n tensor(input_size[1], device=_device, dtype=_dtype),\n tensor(input_size[0], device=_device, dtype=_dtype),\n ).repeat(batch_size, 1, 1)\n\n if isinstance(self.output_size, int):\n aspect_ratio = w / h\n self.output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)\n\n if not (\n len(self.output_size) == 2\n and isinstance(self.output_size[0], (int,))\n and isinstance(self.output_size[1], (int,))\n and self.output_size[0] > 0\n and self.output_size[1] > 0\n ):\n raise AssertionError(f\"`resize_to` must be a tuple of 2 positive integers. Got {self.output_size}.\")\n\n dst = bbox_generator(\n tensor(0, device=_device, dtype=_dtype),\n tensor(0, device=_device, dtype=_dtype),\n tensor(self.output_size[1], device=_device, dtype=_dtype),\n tensor(self.output_size[0], device=_device, dtype=_dtype),\n ).repeat(batch_size, 1, 1)\n\n _input_size = tensor(input_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n _output_size = tensor(self.output_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n\n return dict(src=src, dst=dst, input_size=_input_size, output_size=_output_size)\n", "path": "kornia/augmentation/random_generator/_2d/resize.py"}], "after_files": [{"content": "from typing import Dict, Tuple, Union\n\nimport torch\n\nfrom kornia.augmentation.random_generator.base import RandomGeneratorBase\nfrom kornia.augmentation.utils import _common_param_check\nfrom kornia.core import Device, Tensor, tensor\nfrom kornia.geometry.bbox import bbox_generator\nfrom kornia.geometry.transform.affwarp import _side_to_image_size\n\n\nclass ResizeGenerator(RandomGeneratorBase):\n r\"\"\"Get parameters for ```resize``` transformation for resize transform.\n\n Args:\n resize_to: Desired output size of the crop, like (h, w).\n side: Which side to resize if `resize_to` is only of type int.\n\n Returns:\n parameters to be passed for transformation.\n - src (Tensor): cropping bounding boxes with a shape of (B, 4, 2).\n - dst (Tensor): output bounding boxes with a shape (B, 4, 2).\n - input_size (Tensor): (h, w) from batch input.\n - resize_to (tuple): new (h, w) for batch input.\n\n Note:\n The generated random numbers are not reproducible across different devices and dtypes. By default,\n the parameters will be generated on CPU in float32. 
This can be changed by calling\n ``self.set_rng_device_and_dtype(device=\"cuda\", dtype=torch.float64)``.\n \"\"\"\n\n def __init__(self, resize_to: Union[int, Tuple[int, int]], side: str = \"short\") -> None:\n super().__init__()\n self.output_size = resize_to\n self.side = side\n\n def __repr__(self) -> str:\n repr = f\"output_size={self.output_size}\"\n return repr\n\n def make_samplers(self, device: Device, dtype: torch.dtype) -> None:\n self.device = device\n self.dtype = dtype\n pass\n\n def forward(self, batch_shape: torch.Size, same_on_batch: bool = False) -> Dict[str, Tensor]:\n batch_size = batch_shape[0]\n _common_param_check(batch_size, same_on_batch)\n _device = self.device\n _dtype = self.dtype\n\n if batch_size == 0:\n return dict(\n src=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),\n dst=torch.zeros([0, 4, 2], device=_device, dtype=_dtype),\n )\n\n input_size = h, w = (batch_shape[-2], batch_shape[-1])\n\n src = bbox_generator(\n tensor(0, device=_device, dtype=_dtype),\n tensor(0, device=_device, dtype=_dtype),\n tensor(input_size[1], device=_device, dtype=_dtype),\n tensor(input_size[0], device=_device, dtype=_dtype),\n ).repeat(batch_size, 1, 1)\n\n if isinstance(self.output_size, int):\n aspect_ratio = w / h\n output_size = _side_to_image_size(self.output_size, aspect_ratio, self.side)\n else:\n output_size = self.output_size\n\n if not (\n len(output_size) == 2\n and isinstance(output_size[0], (int,))\n and isinstance(output_size[1], (int,))\n and output_size[0] > 0\n and output_size[1] > 0\n ):\n raise AssertionError(f\"`resize_to` must be a tuple of 2 positive integers. Got {output_size}.\")\n\n dst = bbox_generator(\n tensor(0, device=_device, dtype=_dtype),\n tensor(0, device=_device, dtype=_dtype),\n tensor(output_size[1], device=_device, dtype=_dtype),\n tensor(output_size[0], device=_device, dtype=_dtype),\n ).repeat(batch_size, 1, 1)\n\n _input_size = tensor(input_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n _output_size = tensor(output_size, device=_device, dtype=torch.long).expand(batch_size, -1)\n\n return dict(src=src, dst=dst, input_size=_input_size, output_size=_output_size)\n", "path": "kornia/augmentation/random_generator/_2d/resize.py"}]} | 1,824 | 543 |
gh_patches_debug_39944 | rasdani/github-patches | git_diff | cobbler__cobbler-2919 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Security: Stabilize the MongoDB serializer
### Describe the bug
This is the upstream bug report for SUSE/spacewalk#16737 which is a mirror issue of an internal Bugzilla issue.
Copied from the Bugzilla comment by @thesp0nge:
In the mongodb serializer class, when the config file is read, there is no sanity check.
If the file gets corrupted in some way, this can lead to unexpected behaviour.
```python
def __connect(configfile: str = "/etc/cobbler/mongodb.conf"):
"""
Reads the config file for mongodb and then connects to the mongodb.
"""
cp = ConfigParser()
cp.read(configfile)
host = cp.get("connection", "host")
port = int(cp.get("connection", "port"))
# pylint: disable=global-statement
global mongodb
mongodb = MongoClient(host, port)['cobbler']
```
### Steps to reproduce
1. Corrupt the `mongodb.conf`
2. Start Cobbler
3. See error
### Expected behavior
We get a better exception and Cobbler is prevented from starting up.
--- END ISSUE ---
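
For background on the hardening asked for above: in the serializer listing further down, `__connect()` calls `cp.read(configfile)` and `cp.get("connection", ...)` with no existence check and no fallbacks, so a missing or corrupted file surfaces as a raw `configparser` error. A stdlib-only sketch of a defensive loader (file name and defaults are assumptions, not the project's actual fix):

```python
import configparser
import pathlib
from typing import Tuple


def load_mongodb_settings(path: str = "/etc/cobbler/mongodb.conf") -> Tuple[str, int]:
    """Return (host, port), failing loudly on a missing or unparseable config file."""
    if not pathlib.Path(path).is_file():
        raise FileNotFoundError(f"MongoDB config file not found: {path}")

    cp = configparser.ConfigParser()
    try:
        cp.read(path)
    except configparser.Error as err:  # e.g. MissingSectionHeaderError, ParsingError
        raise RuntimeError(f"Could not parse MongoDB config file {path}") from err

    # Fallbacks keep a half-empty file from raising NoSectionError/NoOptionError.
    host = cp.get("connection", "host", fallback="localhost")
    try:
        port = cp.getint("connection", "port", fallback=27017)
    except ValueError as err:  # port present but not an integer
        raise RuntimeError(f"Invalid 'port' value in {path}") from err
    return host, port


if __name__ == "__main__":
    print(load_mongodb_settings("./mongodb.conf"))  # demo path; adjust as needed
```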
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/modules/serializers/mongodb.py`
Content:
```
1 """
2 Cobbler's Mongo database based object serializer.
3 Experimental version.
4
5 Copyright 2006-2009, Red Hat, Inc and Others
6 Michael DeHaan <michael.dehaan AT gmail>
7 James Cammarata <[email protected]>
8
9 This program is free software; you can redistribute it and/or modify
10 it under the terms of the GNU General Public License as published by
11 the Free Software Foundation; either version 2 of the License, or
12 (at your option) any later version.
13
14 This program is distributed in the hope that it will be useful,
15 but WITHOUT ANY WARRANTY; without even the implied warranty of
16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 GNU General Public License for more details.
18
19 You should have received a copy of the GNU General Public License
20 along with this program; if not, write to the Free Software
21 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
22 02110-1301 USA
23 """
24
25 from configparser import ConfigParser
26
27 from cobbler import settings
28 from cobbler.cexceptions import CX
29
30 try:
31 from pymongo import MongoClient
32 from pymongo.errors import ConnectionFailure, ConfigurationError
33 pymongo_loaded = True
34 except ModuleNotFoundError:
35 # FIXME: log message
36 pymongo_loaded = False
37
38 mongodb = None
39
40
41 def __connect(configfile: str = "/etc/cobbler/mongodb.conf"):
42 """
43 Reads the config file for mongodb and then connects to the mongodb.
44 """
45 cp = ConfigParser()
46 cp.read(configfile)
47
48 host = cp.get("connection", "host")
49 port = int(cp.get("connection", "port"))
50 # pylint: disable=global-statement
51 global mongodb
52 mongodb = MongoClient(host, port)['cobbler']
53 try:
54 # The ismaster command is cheap and doesn't require auth.
55 mongodb.admin.command('ismaster')
56 except ConnectionFailure as e:
57 # FIXME: log error
58 raise CX("Unable to connect to Mongo database or get database \"cobbler\"") from e
59 except ConfigurationError as e:
60 raise CX("The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.") from e
61
62
63 def register() -> str:
64 """
65 The mandatory Cobbler module registration hook.
66 """
67 # FIXME: only run this if enabled.
68 if not pymongo_loaded:
69 return ""
70 return "serializer"
71
72
73 def what() -> str:
74 """
75 Module identification function
76 """
77 return "serializer/mongodb"
78
79
80 def serialize_item(collection, item):
81 """
82 Save a collection item to database.
83
84 :param collection: collection
85 :param item: collection item
86 """
87
88 __connect()
89 collection = mongodb[collection.collection_type()]
90 data = collection.find_one({'name': item.name})
91 if data:
92 collection.update({'name': item.name}, item.serialize())
93 else:
94 collection.insert(item.serialize())
95
96
97 def serialize_delete(collection, item):
98 """
99 Delete a collection item from database.
100
101 :param collection: collection
102 :param item: collection item
103 """
104
105 __connect()
106 collection = mongodb[collection.collection_type()]
107 collection.remove({'name': item.name})
108
109
110 def serialize(collection):
111 """
112 Save a collection to database
113
114 :param collection: collection
115 """
116
117 # TODO: error detection
118 ctype = collection.collection_type()
119 if ctype != "settings":
120 for x in collection:
121 serialize_item(collection, x)
122
123
124 def deserialize_raw(collection_type: str):
125 """
126 Get a collection from mongodb and parse it into an object.
127
128 :param collection_type: The collection type to fetch.
129 :return: The first element of the collection requested.
130 """
131 if collection_type == "settings":
132 return settings.read_settings_file()
133 else:
134 __connect()
135 collection = mongodb[collection_type]
136 return collection.find()
137
138
139 def deserialize(collection, topological: bool = True):
140 """
141 Load a collection from the database.
142
143 :param collection: The collection to deserialize.
144 :param topological: If the collection list should be sorted by the collection dict depth value or not.
145 """
146
147 datastruct = deserialize_raw(collection.collection_type())
148 if topological and type(datastruct) == list:
149 datastruct.sort(key=lambda x: x["depth"])
150 if type(datastruct) == dict:
151 collection.from_dict(datastruct)
152 elif type(datastruct) == list:
153 collection.from_list(datastruct)
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cobbler/modules/serializers/mongodb.py b/cobbler/modules/serializers/mongodb.py
--- a/cobbler/modules/serializers/mongodb.py
+++ b/cobbler/modules/serializers/mongodb.py
@@ -21,7 +21,8 @@
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301 USA
"""
-
+import configparser
+import pathlib
from configparser import ConfigParser
from cobbler import settings
@@ -30,6 +31,7 @@
try:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure, ConfigurationError
+
pymongo_loaded = True
except ModuleNotFoundError:
# FIXME: log message
@@ -42,22 +44,34 @@
"""
Reads the config file for mongodb and then connects to the mongodb.
"""
- cp = ConfigParser()
- cp.read(configfile)
+ if not pathlib.Path(configfile).is_file():
+ raise FileNotFoundError(
+ "Specified Cobbler MongoDB config file could not be found!"
+ )
- host = cp.get("connection", "host")
- port = int(cp.get("connection", "port"))
+ cp = ConfigParser()
+ try:
+ cp.read(configfile)
+ except configparser.Error as cp_error:
+ raise configparser.Error(
+ "Could not read Cobbler MongoDB config file!"
+ ) from cp_error
+
+ host = cp.get("connection", "host", fallback="localhost")
+ port = cp.getint("connection", "port", fallback=27017)
# pylint: disable=global-statement
global mongodb
- mongodb = MongoClient(host, port)['cobbler']
+ mongodb = MongoClient(host, port)["cobbler"]
try:
# The ismaster command is cheap and doesn't require auth.
- mongodb.admin.command('ismaster')
+ mongodb.admin.command("ismaster")
except ConnectionFailure as e:
# FIXME: log error
- raise CX("Unable to connect to Mongo database or get database \"cobbler\"") from e
+ raise CX('Unable to connect to Mongo database or get database "cobbler"') from e
except ConfigurationError as e:
- raise CX("The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.") from e
+ raise CX(
+ "The configuration of the MongoDB connection isn't correct, please check the Cobbler settings."
+ ) from e
def register() -> str:
@@ -87,9 +101,9 @@
__connect()
collection = mongodb[collection.collection_type()]
- data = collection.find_one({'name': item.name})
+ data = collection.find_one({"name": item.name})
if data:
- collection.update({'name': item.name}, item.serialize())
+ collection.update({"name": item.name}, item.serialize())
else:
collection.insert(item.serialize())
@@ -104,7 +118,7 @@
__connect()
collection = mongodb[collection.collection_type()]
- collection.remove({'name': item.name})
+ collection.remove({"name": item.name})
def serialize(collection):
| {"golden_diff": "diff --git a/cobbler/modules/serializers/mongodb.py b/cobbler/modules/serializers/mongodb.py\n--- a/cobbler/modules/serializers/mongodb.py\n+++ b/cobbler/modules/serializers/mongodb.py\n@@ -21,7 +21,8 @@\n Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n 02110-1301 USA\n \"\"\"\n-\n+import configparser\n+import pathlib\n from configparser import ConfigParser\n \n from cobbler import settings\n@@ -30,6 +31,7 @@\n try:\n from pymongo import MongoClient\n from pymongo.errors import ConnectionFailure, ConfigurationError\n+\n pymongo_loaded = True\n except ModuleNotFoundError:\n # FIXME: log message\n@@ -42,22 +44,34 @@\n \"\"\"\n Reads the config file for mongodb and then connects to the mongodb.\n \"\"\"\n- cp = ConfigParser()\n- cp.read(configfile)\n+ if not pathlib.Path(configfile).is_file():\n+ raise FileNotFoundError(\n+ \"Specified Cobbler MongoDB config file could not be found!\"\n+ )\n \n- host = cp.get(\"connection\", \"host\")\n- port = int(cp.get(\"connection\", \"port\"))\n+ cp = ConfigParser()\n+ try:\n+ cp.read(configfile)\n+ except configparser.Error as cp_error:\n+ raise configparser.Error(\n+ \"Could not read Cobbler MongoDB config file!\"\n+ ) from cp_error\n+\n+ host = cp.get(\"connection\", \"host\", fallback=\"localhost\")\n+ port = cp.getint(\"connection\", \"port\", fallback=27017)\n # pylint: disable=global-statement\n global mongodb\n- mongodb = MongoClient(host, port)['cobbler']\n+ mongodb = MongoClient(host, port)[\"cobbler\"]\n try:\n # The ismaster command is cheap and doesn't require auth.\n- mongodb.admin.command('ismaster')\n+ mongodb.admin.command(\"ismaster\")\n except ConnectionFailure as e:\n # FIXME: log error\n- raise CX(\"Unable to connect to Mongo database or get database \\\"cobbler\\\"\") from e\n+ raise CX('Unable to connect to Mongo database or get database \"cobbler\"') from e\n except ConfigurationError as e:\n- raise CX(\"The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.\") from e\n+ raise CX(\n+ \"The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.\"\n+ ) from e\n \n \n def register() -> str:\n@@ -87,9 +101,9 @@\n \n __connect()\n collection = mongodb[collection.collection_type()]\n- data = collection.find_one({'name': item.name})\n+ data = collection.find_one({\"name\": item.name})\n if data:\n- collection.update({'name': item.name}, item.serialize())\n+ collection.update({\"name\": item.name}, item.serialize())\n else:\n collection.insert(item.serialize())\n \n@@ -104,7 +118,7 @@\n \n __connect()\n collection = mongodb[collection.collection_type()]\n- collection.remove({'name': item.name})\n+ collection.remove({\"name\": item.name})\n \n \n def serialize(collection):\n", "issue": "Security: Stabalize the MongoDB serializer\n### Describe the bug\r\n\r\nThis is the upstream bug report for SUSE/spacewalk#16737 which is a mirror issue of an internal Bugzilla issue.\r\n\r\nCopied from the Bugzilla comment by @thesp0nge:\r\n\r\nIn mongodb serializer class, when the config file is read, there is no sanity check.\r\nIf the file get somewhat corrupted, it can lead to unexpected behaviour.\r\n\r\n```python\r\ndef __connect(configfile: str = \"/etc/cobbler/mongodb.conf\"):\r\n \"\"\"\r\n Reads the config file for mongodb and then connects to the mongodb.\r\n \"\"\"\r\n cp = ConfigParser()\r\n cp.read(configfile)\r\n\r\n host = cp.get(\"connection\", \"host\")\r\n port = int(cp.get(\"connection\", \"port\"))\r\n # pylint: 
disable=global-statement\r\n global mongodb\r\n mongodb = MongoClient(host, port)['cobbler']\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Corrupt the `mongodb.conf`\r\n2. Start Cobbler\r\n3. See error\r\n\r\n### Expected behavior\r\n\r\nWe get a better exception and Cobbler is prevented from starting up.\r\n\r\n### Cobbler version\r\n\r\n<!--- Paste output from `cobbler version` -->\r\n````paste below\r\n````\r\n\r\n### Operating system\r\n\r\n<!--- On which operating system do you use Cobbler? -->\r\n\r\n### Cobbler log\r\n\r\n<!--- Paste (partial) output from `/var/log/cobbler/cobbler.log` -->\r\n````paste below\r\n````\r\n\r\n### Screenshots\r\n\r\n<!--- If applicable, add screenshots to help explain your problem. -->\r\n\r\n### Additional information\r\n\r\n<!--- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "\"\"\"\nCobbler's Mongo database based object serializer.\nExperimental version.\n\nCopyright 2006-2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\nJames Cammarata <[email protected]>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nfrom configparser import ConfigParser\n\nfrom cobbler import settings\nfrom cobbler.cexceptions import CX\n\ntry:\n from pymongo import MongoClient\n from pymongo.errors import ConnectionFailure, ConfigurationError\n pymongo_loaded = True\nexcept ModuleNotFoundError:\n # FIXME: log message\n pymongo_loaded = False\n\nmongodb = None\n\n\ndef __connect(configfile: str = \"/etc/cobbler/mongodb.conf\"):\n \"\"\"\n Reads the config file for mongodb and then connects to the mongodb.\n \"\"\"\n cp = ConfigParser()\n cp.read(configfile)\n\n host = cp.get(\"connection\", \"host\")\n port = int(cp.get(\"connection\", \"port\"))\n # pylint: disable=global-statement\n global mongodb\n mongodb = MongoClient(host, port)['cobbler']\n try:\n # The ismaster command is cheap and doesn't require auth.\n mongodb.admin.command('ismaster')\n except ConnectionFailure as e:\n # FIXME: log error\n raise CX(\"Unable to connect to Mongo database or get database \\\"cobbler\\\"\") from e\n except ConfigurationError as e:\n raise CX(\"The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.\") from e\n\n\ndef register() -> str:\n \"\"\"\n The mandatory Cobbler module registration hook.\n \"\"\"\n # FIXME: only run this if enabled.\n if not pymongo_loaded:\n return \"\"\n return \"serializer\"\n\n\ndef what() -> str:\n \"\"\"\n Module identification function\n \"\"\"\n return \"serializer/mongodb\"\n\n\ndef serialize_item(collection, item):\n \"\"\"\n Save a collection item to database.\n\n :param collection: collection\n :param item: collection item\n \"\"\"\n\n __connect()\n collection = mongodb[collection.collection_type()]\n data = collection.find_one({'name': item.name})\n if data:\n collection.update({'name': item.name}, 
item.serialize())\n else:\n collection.insert(item.serialize())\n\n\ndef serialize_delete(collection, item):\n \"\"\"\n Delete a collection item from database.\n\n :param collection: collection\n :param item: collection item\n \"\"\"\n\n __connect()\n collection = mongodb[collection.collection_type()]\n collection.remove({'name': item.name})\n\n\ndef serialize(collection):\n \"\"\"\n Save a collection to database\n\n :param collection: collection\n \"\"\"\n\n # TODO: error detection\n ctype = collection.collection_type()\n if ctype != \"settings\":\n for x in collection:\n serialize_item(collection, x)\n\n\ndef deserialize_raw(collection_type: str):\n \"\"\"\n Get a collection from mongodb and parse it into an object.\n\n :param collection_type: The collection type to fetch.\n :return: The first element of the collection requested.\n \"\"\"\n if collection_type == \"settings\":\n return settings.read_settings_file()\n else:\n __connect()\n collection = mongodb[collection_type]\n return collection.find()\n\n\ndef deserialize(collection, topological: bool = True):\n \"\"\"\n Load a collection from the database.\n\n :param collection: The collection to deserialize.\n :param topological: If the collection list should be sorted by the collection dict depth value or not.\n \"\"\"\n\n datastruct = deserialize_raw(collection.collection_type())\n if topological and type(datastruct) == list:\n datastruct.sort(key=lambda x: x[\"depth\"])\n if type(datastruct) == dict:\n collection.from_dict(datastruct)\n elif type(datastruct) == list:\n collection.from_list(datastruct)\n", "path": "cobbler/modules/serializers/mongodb.py"}], "after_files": [{"content": "\"\"\"\nCobbler's Mongo database based object serializer.\nExperimental version.\n\nCopyright 2006-2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\nJames Cammarata <[email protected]>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\nimport configparser\nimport pathlib\nfrom configparser import ConfigParser\n\nfrom cobbler import settings\nfrom cobbler.cexceptions import CX\n\ntry:\n from pymongo import MongoClient\n from pymongo.errors import ConnectionFailure, ConfigurationError\n\n pymongo_loaded = True\nexcept ModuleNotFoundError:\n # FIXME: log message\n pymongo_loaded = False\n\nmongodb = None\n\n\ndef __connect(configfile: str = \"/etc/cobbler/mongodb.conf\"):\n \"\"\"\n Reads the config file for mongodb and then connects to the mongodb.\n \"\"\"\n if not pathlib.Path(configfile).is_file():\n raise FileNotFoundError(\n \"Specified Cobbler MongoDB config file could not be found!\"\n )\n\n cp = ConfigParser()\n try:\n cp.read(configfile)\n except configparser.Error as cp_error:\n raise configparser.Error(\n \"Could not read Cobbler MongoDB config file!\"\n ) from cp_error\n\n host = cp.get(\"connection\", \"host\", fallback=\"localhost\")\n port = cp.getint(\"connection\", \"port\", fallback=27017)\n # pylint: disable=global-statement\n global mongodb\n mongodb = MongoClient(host, port)[\"cobbler\"]\n try:\n # The ismaster command is cheap and doesn't require auth.\n mongodb.admin.command(\"ismaster\")\n except ConnectionFailure as e:\n # FIXME: log error\n raise CX('Unable to connect to Mongo database or get database \"cobbler\"') from e\n except ConfigurationError as e:\n raise CX(\n \"The configuration of the MongoDB connection isn't correct, please check the Cobbler settings.\"\n ) from e\n\n\ndef register() -> str:\n \"\"\"\n The mandatory Cobbler module registration hook.\n \"\"\"\n # FIXME: only run this if enabled.\n if not pymongo_loaded:\n return \"\"\n return \"serializer\"\n\n\ndef what() -> str:\n \"\"\"\n Module identification function\n \"\"\"\n return \"serializer/mongodb\"\n\n\ndef serialize_item(collection, item):\n \"\"\"\n Save a collection item to database.\n\n :param collection: collection\n :param item: collection item\n \"\"\"\n\n __connect()\n collection = mongodb[collection.collection_type()]\n data = collection.find_one({\"name\": item.name})\n if data:\n collection.update({\"name\": item.name}, item.serialize())\n else:\n collection.insert(item.serialize())\n\n\ndef serialize_delete(collection, item):\n \"\"\"\n Delete a collection item from database.\n\n :param collection: collection\n :param item: collection item\n \"\"\"\n\n __connect()\n collection = mongodb[collection.collection_type()]\n collection.remove({\"name\": item.name})\n\n\ndef serialize(collection):\n \"\"\"\n Save a collection to database\n\n :param collection: collection\n \"\"\"\n\n # TODO: error detection\n ctype = collection.collection_type()\n if ctype != \"settings\":\n for x in collection:\n serialize_item(collection, x)\n\n\ndef deserialize_raw(collection_type: str):\n \"\"\"\n Get a collection from mongodb and parse it into an object.\n\n :param collection_type: The collection type to fetch.\n :return: The first element of the collection requested.\n \"\"\"\n if collection_type == \"settings\":\n return settings.read_settings_file()\n else:\n __connect()\n collection = mongodb[collection_type]\n return collection.find()\n\n\ndef deserialize(collection, topological: bool = True):\n \"\"\"\n Load a collection from the database.\n\n :param 
collection: The collection to deserialize.\n :param topological: If the collection list should be sorted by the collection dict depth value or not.\n \"\"\"\n\n datastruct = deserialize_raw(collection.collection_type())\n if topological and type(datastruct) == list:\n datastruct.sort(key=lambda x: x[\"depth\"])\n if type(datastruct) == dict:\n collection.from_dict(datastruct)\n elif type(datastruct) == list:\n collection.from_list(datastruct)\n", "path": "cobbler/modules/serializers/mongodb.py"}]} | 1,931 | 715 |
gh_patches_debug_42727 | rasdani/github-patches | git_diff | getmoto__moto-2330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add trusted hosts / networks to Server mode
Currently the (Java) AWS API can only work with Moto from the local network (if the host starts with "192.168.X.X") or if the request is coming from the same host as the server. That's very limiting because I want to run Moto in a container on an external network.
Any chance to add trusted networks or hosts via config file or CLI args?
I'd appreciate any info on whether this has already been addressed or whether I'm using it wrong.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/server.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import argparse
4 import json
5 import re
6 import sys
7 from threading import Lock
8
9 import six
10 from flask import Flask
11 from flask.testing import FlaskClient
12
13 from six.moves.urllib.parse import urlencode
14 from werkzeug.routing import BaseConverter
15 from werkzeug.serving import run_simple
16
17 from moto.backends import BACKENDS
18 from moto.core.utils import convert_flask_to_httpretty_response
19
20
21 HTTP_METHODS = ["GET", "POST", "PUT", "DELETE", "HEAD", "PATCH"]
22
23
24 DEFAULT_SERVICE_REGION = ('s3', 'us-east-1')
25
26 # Map of unsigned calls to service-region as per AWS API docs
27 # https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis
28 UNSIGNED_REQUESTS = {
29 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),
30 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),
31 }
32
33
34 class DomainDispatcherApplication(object):
35 """
36 Dispatch requests to different applications based on the "Host:" header
37 value. We'll match the host header value with the url_bases of each backend.
38 """
39
40 def __init__(self, create_app, service=None):
41 self.create_app = create_app
42 self.lock = Lock()
43 self.app_instances = {}
44 self.service = service
45
46 def get_backend_for_host(self, host):
47 if host == 'moto_api':
48 return host
49
50 if self.service:
51 return self.service
52
53 if host in BACKENDS:
54 return host
55
56 for backend_name, backend in BACKENDS.items():
57 for url_base in list(backend.values())[0].url_bases:
58 if re.match(url_base, 'http://%s' % host):
59 return backend_name
60
61 raise RuntimeError('Invalid host: "%s"' % host)
62
63 def infer_service_region(self, environ):
64 auth = environ.get('HTTP_AUTHORIZATION')
65 if auth:
66 # Signed request
67 # Parse auth header to find service assuming a SigV4 request
68 # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
69 # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']
70 try:
71 credential_scope = auth.split(",")[0].split()[1]
72 _, _, region, service, _ = credential_scope.split("/")
73 return service, region
74 except ValueError:
75 # Signature format does not match, this is exceptional and we can't
76 # infer a service-region. A reduced set of services still use
77 # the deprecated SigV2, ergo prefer S3 as most likely default.
78 # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
79 return DEFAULT_SERVICE_REGION
80 else:
81 # Unsigned request
82 target = environ.get('HTTP_X_AMZ_TARGET')
83 if target:
84 service, _ = target.split('.', 1)
85 return UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)
86 # S3 is the last resort when the target is also unknown
87 return DEFAULT_SERVICE_REGION
88
89 def get_application(self, environ):
90 path_info = environ.get('PATH_INFO', '')
91
92 # The URL path might contain non-ASCII text, for instance unicode S3 bucket names
93 if six.PY2 and isinstance(path_info, str):
94 path_info = six.u(path_info)
95 if six.PY3 and isinstance(path_info, six.binary_type):
96 path_info = path_info.decode('utf-8')
97
98 if path_info.startswith("/moto-api") or path_info == "/favicon.ico":
99 host = "moto_api"
100 elif path_info.startswith("/latest/meta-data/"):
101 host = "instance_metadata"
102 else:
103 host = environ['HTTP_HOST'].split(':')[0]
104 if host in {'localhost', 'motoserver'} or host.startswith("192.168."):
105 service, region = self.infer_service_region(environ)
106 if service == 'dynamodb':
107 if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):
108 host = 'dynamodbstreams'
109 else:
110 dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split("_")[1].split(".")[0]
111 # If Newer API version, use dynamodb2
112 if dynamo_api_version > "20111205":
113 host = "dynamodb2"
114 else:
115 host = "{service}.{region}.amazonaws.com".format(
116 service=service, region=region)
117
118 with self.lock:
119 backend = self.get_backend_for_host(host)
120 app = self.app_instances.get(backend, None)
121 if app is None:
122 app = self.create_app(backend)
123 self.app_instances[backend] = app
124 return app
125
126 def __call__(self, environ, start_response):
127 backend_app = self.get_application(environ)
128 return backend_app(environ, start_response)
129
130
131 class RegexConverter(BaseConverter):
132 # http://werkzeug.pocoo.org/docs/routing/#custom-converters
133
134 def __init__(self, url_map, *items):
135 super(RegexConverter, self).__init__(url_map)
136 self.regex = items[0]
137
138
139 class AWSTestHelper(FlaskClient):
140
141 def action_data(self, action_name, **kwargs):
142 """
143 Method calls resource with action_name and returns data of response.
144 """
145 opts = {"Action": action_name}
146 opts.update(kwargs)
147 res = self.get("/?{0}".format(urlencode(opts)),
148 headers={"Host": "{0}.us-east-1.amazonaws.com".format(self.application.service)})
149 return res.data.decode("utf-8")
150
151 def action_json(self, action_name, **kwargs):
152 """
153 Method calls resource with action_name and returns object obtained via
154 deserialization of output.
155 """
156 return json.loads(self.action_data(action_name, **kwargs))
157
158
159 def create_backend_app(service):
160 from werkzeug.routing import Map
161
162 # Create the backend_app
163 backend_app = Flask(__name__)
164 backend_app.debug = True
165 backend_app.service = service
166
167 # Reset view functions to reset the app
168 backend_app.view_functions = {}
169 backend_app.url_map = Map()
170 backend_app.url_map.converters['regex'] = RegexConverter
171 backend = list(BACKENDS[service].values())[0]
172 for url_path, handler in backend.flask_paths.items():
173 if handler.__name__ == 'dispatch':
174 endpoint = '{0}.dispatch'.format(handler.__self__.__name__)
175 else:
176 endpoint = None
177
178 original_endpoint = endpoint
179 index = 2
180 while endpoint in backend_app.view_functions:
181 # HACK: Sometimes we map the same view to multiple url_paths. Flask
182 # requries us to have different names.
183 endpoint = original_endpoint + str(index)
184 index += 1
185
186 backend_app.add_url_rule(
187 url_path,
188 endpoint=endpoint,
189 methods=HTTP_METHODS,
190 view_func=convert_flask_to_httpretty_response(handler),
191 strict_slashes=False,
192 )
193
194 backend_app.test_client_class = AWSTestHelper
195 return backend_app
196
197
198 def main(argv=sys.argv[1:]):
199 parser = argparse.ArgumentParser()
200
201 # Keep this for backwards compat
202 parser.add_argument(
203 "service",
204 type=str,
205 nargs='?', # http://stackoverflow.com/a/4480202/731592
206 default=None)
207 parser.add_argument(
208 '-H', '--host', type=str,
209 help='Which host to bind',
210 default='127.0.0.1')
211 parser.add_argument(
212 '-p', '--port', type=int,
213 help='Port number to use for connection',
214 default=5000)
215 parser.add_argument(
216 '-r', '--reload',
217 action='store_true',
218 help='Reload server on a file change',
219 default=False
220 )
221 parser.add_argument(
222 '-s', '--ssl',
223 action='store_true',
224 help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',
225 default=False
226 )
227 parser.add_argument(
228 '-c', '--ssl-cert', type=str,
229 help='Path to SSL certificate',
230 default=None)
231 parser.add_argument(
232 '-k', '--ssl-key', type=str,
233 help='Path to SSL private key',
234 default=None)
235
236 args = parser.parse_args(argv)
237
238 # Wrap the main application
239 main_app = DomainDispatcherApplication(
240 create_backend_app, service=args.service)
241 main_app.debug = True
242
243 ssl_context = None
244 if args.ssl_key and args.ssl_cert:
245 ssl_context = (args.ssl_cert, args.ssl_key)
246 elif args.ssl:
247 ssl_context = 'adhoc'
248
249 run_simple(args.host, args.port, main_app,
250 threaded=True, use_reloader=args.reload,
251 ssl_context=ssl_context)
252
253
254 if __name__ == '__main__':
255 main()
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/server.py b/moto/server.py
--- a/moto/server.py
+++ b/moto/server.py
@@ -58,9 +58,7 @@
if re.match(url_base, 'http://%s' % host):
return backend_name
- raise RuntimeError('Invalid host: "%s"' % host)
-
- def infer_service_region(self, environ):
+ def infer_service_region_host(self, environ):
auth = environ.get('HTTP_AUTHORIZATION')
if auth:
# Signed request
@@ -70,21 +68,35 @@
try:
credential_scope = auth.split(",")[0].split()[1]
_, _, region, service, _ = credential_scope.split("/")
- return service, region
except ValueError:
# Signature format does not match, this is exceptional and we can't
# infer a service-region. A reduced set of services still use
# the deprecated SigV2, ergo prefer S3 as most likely default.
# https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
- return DEFAULT_SERVICE_REGION
+ service, region = DEFAULT_SERVICE_REGION
else:
# Unsigned request
target = environ.get('HTTP_X_AMZ_TARGET')
if target:
service, _ = target.split('.', 1)
- return UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)
- # S3 is the last resort when the target is also unknown
- return DEFAULT_SERVICE_REGION
+ service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)
+ else:
+ # S3 is the last resort when the target is also unknown
+ service, region = DEFAULT_SERVICE_REGION
+
+ if service == 'dynamodb':
+ if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):
+ host = 'dynamodbstreams'
+ else:
+ dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split("_")[1].split(".")[0]
+ # If Newer API version, use dynamodb2
+ if dynamo_api_version > "20111205":
+ host = "dynamodb2"
+ else:
+ host = "{service}.{region}.amazonaws.com".format(
+ service=service, region=region)
+
+ return host
def get_application(self, environ):
path_info = environ.get('PATH_INFO', '')
@@ -101,22 +113,14 @@
host = "instance_metadata"
else:
host = environ['HTTP_HOST'].split(':')[0]
- if host in {'localhost', 'motoserver'} or host.startswith("192.168."):
- service, region = self.infer_service_region(environ)
- if service == 'dynamodb':
- if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):
- host = 'dynamodbstreams'
- else:
- dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split("_")[1].split(".")[0]
- # If Newer API version, use dynamodb2
- if dynamo_api_version > "20111205":
- host = "dynamodb2"
- else:
- host = "{service}.{region}.amazonaws.com".format(
- service=service, region=region)
with self.lock:
backend = self.get_backend_for_host(host)
+ if not backend:
+ # No regular backend found; try parsing other headers
+ host = self.infer_service_region_host(environ)
+ backend = self.get_backend_for_host(host)
+
app = self.app_instances.get(backend, None)
if app is None:
app = self.create_app(backend)
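
In plain terms, the diff above stops treating `localhost`, `motoserver` and `192.168.*` as the only trusted hosts: `get_backend_for_host` now returns `None` instead of raising for unknown hosts, and dispatch falls back to header-based inference, so clients on any network get routed. A condensed, hypothetical paraphrase of the new lookup order (identifiers taken from the diff above; locking and app caching omitted):

```python
def pick_backend(dispatcher, environ):
    # Rough paraphrase of DomainDispatcherApplication.get_application after the patch.
    host = environ["HTTP_HOST"].split(":")[0]
    backend = dispatcher.get_backend_for_host(host)   # None for unknown hosts instead of RuntimeError
    if not backend:
        # Unknown Host header: build "<service>.<region>.amazonaws.com" from the
        # Authorization or X-Amz-Target headers, then look the backend up again.
        host = dispatcher.infer_service_region_host(environ)
        backend = dispatcher.get_backend_for_host(host)
    return backend
```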
| {"golden_diff": "diff --git a/moto/server.py b/moto/server.py\n--- a/moto/server.py\n+++ b/moto/server.py\n@@ -58,9 +58,7 @@\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n \n- raise RuntimeError('Invalid host: \"%s\"' % host)\n-\n- def infer_service_region(self, environ):\n+ def infer_service_region_host(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n@@ -70,21 +68,35 @@\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n- return service, region\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n- return DEFAULT_SERVICE_REGION\n+ service, region = DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n- return UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n- # S3 is the last resort when the target is also unknown\n- return DEFAULT_SERVICE_REGION\n+ service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n+ else:\n+ # S3 is the last resort when the target is also unknown\n+ service, region = DEFAULT_SERVICE_REGION\n+\n+ if service == 'dynamodb':\n+ if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n+ host = 'dynamodbstreams'\n+ else:\n+ dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n+ # If Newer API version, use dynamodb2\n+ if dynamo_api_version > \"20111205\":\n+ host = \"dynamodb2\"\n+ else:\n+ host = \"{service}.{region}.amazonaws.com\".format(\n+ service=service, region=region)\n+\n+ return host\n \n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n@@ -101,22 +113,14 @@\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n- if host in {'localhost', 'motoserver'} or host.startswith(\"192.168.\"):\n- service, region = self.infer_service_region(environ)\n- if service == 'dynamodb':\n- if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n- host = 'dynamodbstreams'\n- else:\n- dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n- # If Newer API version, use dynamodb2\n- if dynamo_api_version > \"20111205\":\n- host = \"dynamodb2\"\n- else:\n- host = \"{service}.{region}.amazonaws.com\".format(\n- service=service, region=region)\n \n with self.lock:\n backend = self.get_backend_for_host(host)\n+ if not backend:\n+ # No regular backend found; try parsing other headers\n+ host = self.infer_service_region_host(environ)\n+ backend = self.get_backend_for_host(host)\n+\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n", "issue": "Add trusted hosts / networks to Server mode\nCurrently (Java) AWS API can only work with Moto from the local network (if it starts with \"192.168.X.X\") or if the request is coming from the same host as the server, that's very limiting because i want to run Moto on a container in an external network.\r\n\r\nAny chance to add trusted networks or hosts via config file or CLI args?\r\n\r\nI'd appreciate any info if this had been addressed or i'm using it wrong\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport json\nimport re\nimport 
sys\nfrom threading import Lock\n\nimport six\nfrom flask import Flask\nfrom flask.testing import FlaskClient\n\nfrom six.moves.urllib.parse import urlencode\nfrom werkzeug.routing import BaseConverter\nfrom werkzeug.serving import run_simple\n\nfrom moto.backends import BACKENDS\nfrom moto.core.utils import convert_flask_to_httpretty_response\n\n\nHTTP_METHODS = [\"GET\", \"POST\", \"PUT\", \"DELETE\", \"HEAD\", \"PATCH\"]\n\n\nDEFAULT_SERVICE_REGION = ('s3', 'us-east-1')\n\n# Map of unsigned calls to service-region as per AWS API docs\n# https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis\nUNSIGNED_REQUESTS = {\n 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),\n 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),\n}\n\n\nclass DomainDispatcherApplication(object):\n \"\"\"\n Dispatch requests to different applications based on the \"Host:\" header\n value. We'll match the host header value with the url_bases of each backend.\n \"\"\"\n\n def __init__(self, create_app, service=None):\n self.create_app = create_app\n self.lock = Lock()\n self.app_instances = {}\n self.service = service\n\n def get_backend_for_host(self, host):\n if host == 'moto_api':\n return host\n\n if self.service:\n return self.service\n\n if host in BACKENDS:\n return host\n\n for backend_name, backend in BACKENDS.items():\n for url_base in list(backend.values())[0].url_bases:\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n\n raise RuntimeError('Invalid host: \"%s\"' % host)\n\n def infer_service_region(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n # Parse auth header to find service assuming a SigV4 request\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n return service, region\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. 
A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n return DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n return UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n # S3 is the last resort when the target is also unknown\n return DEFAULT_SERVICE_REGION\n\n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n\n # The URL path might contain non-ASCII text, for instance unicode S3 bucket names\n if six.PY2 and isinstance(path_info, str):\n path_info = six.u(path_info)\n if six.PY3 and isinstance(path_info, six.binary_type):\n path_info = path_info.decode('utf-8')\n\n if path_info.startswith(\"/moto-api\") or path_info == \"/favicon.ico\":\n host = \"moto_api\"\n elif path_info.startswith(\"/latest/meta-data/\"):\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n if host in {'localhost', 'motoserver'} or host.startswith(\"192.168.\"):\n service, region = self.infer_service_region(environ)\n if service == 'dynamodb':\n if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n host = 'dynamodbstreams'\n else:\n dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n # If Newer API version, use dynamodb2\n if dynamo_api_version > \"20111205\":\n host = \"dynamodb2\"\n else:\n host = \"{service}.{region}.amazonaws.com\".format(\n service=service, region=region)\n\n with self.lock:\n backend = self.get_backend_for_host(host)\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n self.app_instances[backend] = app\n return app\n\n def __call__(self, environ, start_response):\n backend_app = self.get_application(environ)\n return backend_app(environ, start_response)\n\n\nclass RegexConverter(BaseConverter):\n # http://werkzeug.pocoo.org/docs/routing/#custom-converters\n\n def __init__(self, url_map, *items):\n super(RegexConverter, self).__init__(url_map)\n self.regex = items[0]\n\n\nclass AWSTestHelper(FlaskClient):\n\n def action_data(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns data of response.\n \"\"\"\n opts = {\"Action\": action_name}\n opts.update(kwargs)\n res = self.get(\"/?{0}\".format(urlencode(opts)),\n headers={\"Host\": \"{0}.us-east-1.amazonaws.com\".format(self.application.service)})\n return res.data.decode(\"utf-8\")\n\n def action_json(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns object obtained via\n deserialization of output.\n \"\"\"\n return json.loads(self.action_data(action_name, **kwargs))\n\n\ndef create_backend_app(service):\n from werkzeug.routing import Map\n\n # Create the backend_app\n backend_app = Flask(__name__)\n backend_app.debug = True\n backend_app.service = service\n\n # Reset view functions to reset the app\n backend_app.view_functions = {}\n backend_app.url_map = Map()\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n endpoint = None\n\n original_endpoint = endpoint\n index = 2\n while endpoint in backend_app.view_functions:\n # HACK: Sometimes we map the same view to multiple 
url_paths. Flask\n # requries us to have different names.\n endpoint = original_endpoint + str(index)\n index += 1\n\n backend_app.add_url_rule(\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n view_func=convert_flask_to_httpretty_response(handler),\n strict_slashes=False,\n )\n\n backend_app.test_client_class = AWSTestHelper\n return backend_app\n\n\ndef main(argv=sys.argv[1:]):\n parser = argparse.ArgumentParser()\n\n # Keep this for backwards compat\n parser.add_argument(\n \"service\",\n type=str,\n nargs='?', # http://stackoverflow.com/a/4480202/731592\n default=None)\n parser.add_argument(\n '-H', '--host', type=str,\n help='Which host to bind',\n default='127.0.0.1')\n parser.add_argument(\n '-p', '--port', type=int,\n help='Port number to use for connection',\n default=5000)\n parser.add_argument(\n '-r', '--reload',\n action='store_true',\n help='Reload server on a file change',\n default=False\n )\n parser.add_argument(\n '-s', '--ssl',\n action='store_true',\n help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',\n default=False\n )\n parser.add_argument(\n '-c', '--ssl-cert', type=str,\n help='Path to SSL certificate',\n default=None)\n parser.add_argument(\n '-k', '--ssl-key', type=str,\n help='Path to SSL private key',\n default=None)\n\n args = parser.parse_args(argv)\n\n # Wrap the main application\n main_app = DomainDispatcherApplication(\n create_backend_app, service=args.service)\n main_app.debug = True\n\n ssl_context = None\n if args.ssl_key and args.ssl_cert:\n ssl_context = (args.ssl_cert, args.ssl_key)\n elif args.ssl:\n ssl_context = 'adhoc'\n\n run_simple(args.host, args.port, main_app,\n threaded=True, use_reloader=args.reload,\n ssl_context=ssl_context)\n\n\nif __name__ == '__main__':\n main()\n", "path": "moto/server.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport json\nimport re\nimport sys\nfrom threading import Lock\n\nimport six\nfrom flask import Flask\nfrom flask.testing import FlaskClient\n\nfrom six.moves.urllib.parse import urlencode\nfrom werkzeug.routing import BaseConverter\nfrom werkzeug.serving import run_simple\n\nfrom moto.backends import BACKENDS\nfrom moto.core.utils import convert_flask_to_httpretty_response\n\n\nHTTP_METHODS = [\"GET\", \"POST\", \"PUT\", \"DELETE\", \"HEAD\", \"PATCH\"]\n\n\nDEFAULT_SERVICE_REGION = ('s3', 'us-east-1')\n\n# Map of unsigned calls to service-region as per AWS API docs\n# https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis\nUNSIGNED_REQUESTS = {\n 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),\n 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),\n}\n\n\nclass DomainDispatcherApplication(object):\n \"\"\"\n Dispatch requests to different applications based on the \"Host:\" header\n value. 
We'll match the host header value with the url_bases of each backend.\n \"\"\"\n\n def __init__(self, create_app, service=None):\n self.create_app = create_app\n self.lock = Lock()\n self.app_instances = {}\n self.service = service\n\n def get_backend_for_host(self, host):\n if host == 'moto_api':\n return host\n\n if self.service:\n return self.service\n\n if host in BACKENDS:\n return host\n\n for backend_name, backend in BACKENDS.items():\n for url_base in list(backend.values())[0].url_bases:\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n\n def infer_service_region_host(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n # Parse auth header to find service assuming a SigV4 request\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n service, region = DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n else:\n # S3 is the last resort when the target is also unknown\n service, region = DEFAULT_SERVICE_REGION\n\n if service == 'dynamodb':\n if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n host = 'dynamodbstreams'\n else:\n dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n # If Newer API version, use dynamodb2\n if dynamo_api_version > \"20111205\":\n host = \"dynamodb2\"\n else:\n host = \"{service}.{region}.amazonaws.com\".format(\n service=service, region=region)\n\n return host\n\n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n\n # The URL path might contain non-ASCII text, for instance unicode S3 bucket names\n if six.PY2 and isinstance(path_info, str):\n path_info = six.u(path_info)\n if six.PY3 and isinstance(path_info, six.binary_type):\n path_info = path_info.decode('utf-8')\n\n if path_info.startswith(\"/moto-api\") or path_info == \"/favicon.ico\":\n host = \"moto_api\"\n elif path_info.startswith(\"/latest/meta-data/\"):\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n\n with self.lock:\n backend = self.get_backend_for_host(host)\n if not backend:\n # No regular backend found; try parsing other headers\n host = self.infer_service_region_host(environ)\n backend = self.get_backend_for_host(host)\n\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n self.app_instances[backend] = app\n return app\n\n def __call__(self, environ, start_response):\n backend_app = self.get_application(environ)\n return backend_app(environ, start_response)\n\n\nclass RegexConverter(BaseConverter):\n # http://werkzeug.pocoo.org/docs/routing/#custom-converters\n\n def __init__(self, url_map, *items):\n super(RegexConverter, self).__init__(url_map)\n self.regex = items[0]\n\n\nclass AWSTestHelper(FlaskClient):\n\n def action_data(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with 
action_name and returns data of response.\n \"\"\"\n opts = {\"Action\": action_name}\n opts.update(kwargs)\n res = self.get(\"/?{0}\".format(urlencode(opts)),\n headers={\"Host\": \"{0}.us-east-1.amazonaws.com\".format(self.application.service)})\n return res.data.decode(\"utf-8\")\n\n def action_json(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns object obtained via\n deserialization of output.\n \"\"\"\n return json.loads(self.action_data(action_name, **kwargs))\n\n\ndef create_backend_app(service):\n from werkzeug.routing import Map\n\n # Create the backend_app\n backend_app = Flask(__name__)\n backend_app.debug = True\n backend_app.service = service\n\n # Reset view functions to reset the app\n backend_app.view_functions = {}\n backend_app.url_map = Map()\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n endpoint = None\n\n original_endpoint = endpoint\n index = 2\n while endpoint in backend_app.view_functions:\n # HACK: Sometimes we map the same view to multiple url_paths. Flask\n # requries us to have different names.\n endpoint = original_endpoint + str(index)\n index += 1\n\n backend_app.add_url_rule(\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n view_func=convert_flask_to_httpretty_response(handler),\n strict_slashes=False,\n )\n\n backend_app.test_client_class = AWSTestHelper\n return backend_app\n\n\ndef main(argv=sys.argv[1:]):\n parser = argparse.ArgumentParser()\n\n # Keep this for backwards compat\n parser.add_argument(\n \"service\",\n type=str,\n nargs='?', # http://stackoverflow.com/a/4480202/731592\n default=None)\n parser.add_argument(\n '-H', '--host', type=str,\n help='Which host to bind',\n default='127.0.0.1')\n parser.add_argument(\n '-p', '--port', type=int,\n help='Port number to use for connection',\n default=5000)\n parser.add_argument(\n '-r', '--reload',\n action='store_true',\n help='Reload server on a file change',\n default=False\n )\n parser.add_argument(\n '-s', '--ssl',\n action='store_true',\n help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',\n default=False\n )\n parser.add_argument(\n '-c', '--ssl-cert', type=str,\n help='Path to SSL certificate',\n default=None)\n parser.add_argument(\n '-k', '--ssl-key', type=str,\n help='Path to SSL private key',\n default=None)\n\n args = parser.parse_args(argv)\n\n # Wrap the main application\n main_app = DomainDispatcherApplication(\n create_backend_app, service=args.service)\n main_app.debug = True\n\n ssl_context = None\n if args.ssl_key and args.ssl_cert:\n ssl_context = (args.ssl_cert, args.ssl_key)\n elif args.ssl:\n ssl_context = 'adhoc'\n\n run_simple(args.host, args.port, main_app,\n threaded=True, use_reloader=args.reload,\n ssl_context=ssl_context)\n\n\nif __name__ == '__main__':\n main()\n", "path": "moto/server.py"}]} | 3,005 | 832 |
gh_patches_debug_37710 | rasdani/github-patches | git_diff | localstack__localstack-4575 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: State Machine references don't get resolved properly
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Lambda refs get lost
### Expected Behavior
Lambda refs work in state machines
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS:
- LocalStack:
```
### Anything else?
This is based on a conversation I had with @dominikschubert
--- END ISSUE ---
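
For context, a hypothetical sketch (mine, not taken from the report) of the kind of template where this bites: a state machine whose definition references a Lambda through `DefinitionSubstitutions`. If the substitutions are ignored, the literal `${HandlerArn}` placeholder ends up in the deployed definition and the Lambda ref is effectively lost.

```python
# Hypothetical CloudFormation resource, written as a Python dict purely for illustration.
state_machine_resource = {
    "Type": "AWS::StepFunctions::StateMachine",
    "Properties": {
        "RoleArn": {"Fn::GetAtt": ["SfnRole", "Arn"]},
        "DefinitionString": (
            '{"StartAt": "Call", "States": '
            '{"Call": {"Type": "Task", "Resource": "${HandlerArn}", "End": true}}}'
        ),
        # The ref below is what used to get dropped before the fix further down.
        "DefinitionSubstitutions": {"HandlerArn": {"Fn::GetAtt": ["HandlerFunction", "Arn"]}},
    },
}
```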
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/utils/generic/wait_utils.py`
Content:
```
1 import time
2 from typing import Callable
3
4 from typing_extensions import Literal
5
6
7 def wait_until(
8 fn: Callable[[], bool],
9 wait: float = 1.0,
10 max_retries: int = 10,
11 strategy: Literal["exponential", "static", "linear"] = "exponential",
12 _retries: int = 0,
13 _max_wait: float = 240,
14 ) -> None:
15 """waits until a given condition is true, rechecking it periodically"""
16 if max_retries < _retries:
17 return
18 completed = fn()
19 if not completed:
20 if wait > _max_wait:
21 return
22 time.sleep(wait)
23 next_wait = wait # default: static
24 if strategy == "linear":
25 next_wait = (wait / _retries) * (_retries + 1)
26 elif strategy == "exponential":
27 next_wait = wait ** 2
28 wait_until(fn, next_wait, max_retries, strategy, _retries + 1, _max_wait)
29
```
Path: `localstack/services/cloudformation/models/stepfunctions.py`
Content:
```
1 from localstack.services.cloudformation.deployment_utils import PLACEHOLDER_RESOURCE_NAME
2 from localstack.services.cloudformation.service_models import GenericBaseModel
3 from localstack.utils.aws import aws_stack
4
5
6 class SFNActivity(GenericBaseModel):
7 @staticmethod
8 def cloudformation_type():
9 return "AWS::StepFunctions::Activity"
10
11 def fetch_state(self, stack_name, resources):
12 activity_arn = self.physical_resource_id
13 if not activity_arn:
14 return None
15 client = aws_stack.connect_to_service("stepfunctions")
16 result = client.describe_activity(activityArn=activity_arn)
17 return result
18
19 @staticmethod
20 def get_deploy_templates():
21 return {
22 "create": {
23 "function": "create_activity",
24 "parameters": {"name": ["Name", PLACEHOLDER_RESOURCE_NAME], "tags": "Tags"},
25 },
26 "delete": {
27 "function": "delete_activity",
28 "parameters": {"activityArn": "PhysicalResourceId"},
29 },
30 }
31
32
33 class SFNStateMachine(GenericBaseModel):
34 @staticmethod
35 def cloudformation_type():
36 return "AWS::StepFunctions::StateMachine"
37
38 def get_resource_name(self):
39 return self.props.get("StateMachineName")
40
41 def get_physical_resource_id(self, attribute=None, **kwargs):
42 return self.props.get("stateMachineArn")
43
44 def fetch_state(self, stack_name, resources):
45 sm_name = self.props.get("StateMachineName") or self.resource_id
46 sm_name = self.resolve_refs_recursively(stack_name, sm_name, resources)
47 sfn_client = aws_stack.connect_to_service("stepfunctions")
48 state_machines = sfn_client.list_state_machines()["stateMachines"]
49 sm_arn = [m["stateMachineArn"] for m in state_machines if m["name"] == sm_name]
50 if not sm_arn:
51 return None
52 result = sfn_client.describe_state_machine(stateMachineArn=sm_arn[0])
53 return result
54
55 def update_resource(self, new_resource, stack_name, resources):
56 props = new_resource["Properties"]
57 client = aws_stack.connect_to_service("stepfunctions")
58 sm_arn = self.props.get("stateMachineArn")
59 if not sm_arn:
60 self.state = self.fetch_state(stack_name=stack_name, resources=resources)
61 sm_arn = self.state["stateMachineArn"]
62 kwargs = {
63 "stateMachineArn": sm_arn,
64 "definition": props["DefinitionString"],
65 }
66 return client.update_state_machine(**kwargs)
67
68 @staticmethod
69 def get_deploy_templates():
70 return {
71 "create": {
72 "function": "create_state_machine",
73 "parameters": {
74 "name": ["StateMachineName", PLACEHOLDER_RESOURCE_NAME],
75 "definition": "DefinitionString",
76 "roleArn": "RoleArn",
77 },
78 },
79 "delete": {
80 "function": "delete_state_machine",
81 "parameters": {"stateMachineArn": "PhysicalResourceId"},
82 },
83 }
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/localstack/services/cloudformation/models/stepfunctions.py b/localstack/services/cloudformation/models/stepfunctions.py
--- a/localstack/services/cloudformation/models/stepfunctions.py
+++ b/localstack/services/cloudformation/models/stepfunctions.py
@@ -1,3 +1,6 @@
+import re
+from typing import Dict
+
from localstack.services.cloudformation.deployment_utils import PLACEHOLDER_RESOURCE_NAME
from localstack.services.cloudformation.service_models import GenericBaseModel
from localstack.utils.aws import aws_stack
@@ -65,19 +68,43 @@
}
return client.update_state_machine(**kwargs)
- @staticmethod
- def get_deploy_templates():
+ @classmethod
+ def get_deploy_templates(cls):
+ def _create_params(params, **kwargs):
+ def _get_definition(params):
+ definition_str = params.get("DefinitionString")
+ substitutions = params.get("DefinitionSubstitutions")
+ if substitutions is not None:
+ definition_str = _apply_substitutions(definition_str, substitutions)
+ return definition_str
+
+ return {
+ "name": params.get("StateMachineName", PLACEHOLDER_RESOURCE_NAME),
+ "definition": _get_definition(params),
+ "roleArn": params.get("RoleArn"),
+ "type": params.get("StateMachineTyp", None),
+ }
+
return {
"create": {
"function": "create_state_machine",
- "parameters": {
- "name": ["StateMachineName", PLACEHOLDER_RESOURCE_NAME],
- "definition": "DefinitionString",
- "roleArn": "RoleArn",
- },
+ "parameters": _create_params,
},
"delete": {
"function": "delete_state_machine",
"parameters": {"stateMachineArn": "PhysicalResourceId"},
},
}
+
+
+def _apply_substitutions(definition: str, substitutions: Dict[str, str]) -> str:
+ substitution_regex = re.compile("\\${[a-zA-Z0-9_]+}") # might be a bit too strict in some cases
+ tokens = substitution_regex.findall(definition)
+ result = definition
+ for token in tokens:
+ raw_token = token[2:-1] # strip ${ and }
+ if raw_token not in substitutions.keys():
+ raise
+ result = result.replace(token, substitutions[raw_token])
+
+ return result
diff --git a/localstack/utils/generic/wait_utils.py b/localstack/utils/generic/wait_utils.py
--- a/localstack/utils/generic/wait_utils.py
+++ b/localstack/utils/generic/wait_utils.py
@@ -24,5 +24,5 @@
if strategy == "linear":
next_wait = (wait / _retries) * (_retries + 1)
elif strategy == "exponential":
- next_wait = wait ** 2
+ next_wait = wait * 2
wait_until(fn, next_wait, max_retries, strategy, _retries + 1, _max_wait)
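
Two details of this diff are worth spelling out. The new `_apply_substitutions` helper resolves `DefinitionSubstitutions` placeholders such as `${MyFunctionArn}` inside `DefinitionString` before `create_state_machine` is called, which is what keeps the Lambda refs from getting lost; and the `wait_utils` hunk changes the exponential backoff to double the wait (`wait * 2`) instead of squaring it, since `1.0 ** 2` never grows at all. A hypothetical usage sketch of the helper (the definition and ARN are invented for illustration):

```python
from localstack.services.cloudformation.models.stepfunctions import _apply_substitutions

definition = (
    '{"StartAt": "CallFn", "States": {"CallFn": '
    '{"Type": "Task", "Resource": "${MyFunctionArn}", "End": true}}}'
)
substitutions = {"MyFunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:my-fn"}

# Each ${...} token is matched by the regex in the patch and replaced verbatim.
print(_apply_substitutions(definition, substitutions))
```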
| {"golden_diff": "diff --git a/localstack/services/cloudformation/models/stepfunctions.py b/localstack/services/cloudformation/models/stepfunctions.py\n--- a/localstack/services/cloudformation/models/stepfunctions.py\n+++ b/localstack/services/cloudformation/models/stepfunctions.py\n@@ -1,3 +1,6 @@\n+import re\n+from typing import Dict\n+\n from localstack.services.cloudformation.deployment_utils import PLACEHOLDER_RESOURCE_NAME\n from localstack.services.cloudformation.service_models import GenericBaseModel\n from localstack.utils.aws import aws_stack\n@@ -65,19 +68,43 @@\n }\n return client.update_state_machine(**kwargs)\n \n- @staticmethod\n- def get_deploy_templates():\n+ @classmethod\n+ def get_deploy_templates(cls):\n+ def _create_params(params, **kwargs):\n+ def _get_definition(params):\n+ definition_str = params.get(\"DefinitionString\")\n+ substitutions = params.get(\"DefinitionSubstitutions\")\n+ if substitutions is not None:\n+ definition_str = _apply_substitutions(definition_str, substitutions)\n+ return definition_str\n+\n+ return {\n+ \"name\": params.get(\"StateMachineName\", PLACEHOLDER_RESOURCE_NAME),\n+ \"definition\": _get_definition(params),\n+ \"roleArn\": params.get(\"RoleArn\"),\n+ \"type\": params.get(\"StateMachineTyp\", None),\n+ }\n+\n return {\n \"create\": {\n \"function\": \"create_state_machine\",\n- \"parameters\": {\n- \"name\": [\"StateMachineName\", PLACEHOLDER_RESOURCE_NAME],\n- \"definition\": \"DefinitionString\",\n- \"roleArn\": \"RoleArn\",\n- },\n+ \"parameters\": _create_params,\n },\n \"delete\": {\n \"function\": \"delete_state_machine\",\n \"parameters\": {\"stateMachineArn\": \"PhysicalResourceId\"},\n },\n }\n+\n+\n+def _apply_substitutions(definition: str, substitutions: Dict[str, str]) -> str:\n+ substitution_regex = re.compile(\"\\\\${[a-zA-Z0-9_]+}\") # might be a bit too strict in some cases\n+ tokens = substitution_regex.findall(definition)\n+ result = definition\n+ for token in tokens:\n+ raw_token = token[2:-1] # strip ${ and }\n+ if raw_token not in substitutions.keys():\n+ raise\n+ result = result.replace(token, substitutions[raw_token])\n+\n+ return result\ndiff --git a/localstack/utils/generic/wait_utils.py b/localstack/utils/generic/wait_utils.py\n--- a/localstack/utils/generic/wait_utils.py\n+++ b/localstack/utils/generic/wait_utils.py\n@@ -24,5 +24,5 @@\n if strategy == \"linear\":\n next_wait = (wait / _retries) * (_retries + 1)\n elif strategy == \"exponential\":\n- next_wait = wait ** 2\n+ next_wait = wait * 2\n wait_until(fn, next_wait, max_retries, strategy, _retries + 1, _max_wait)\n", "issue": "bug: State Machine references don't get resolved properly\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nLambda refs get lost\n\n### Expected Behavior\n\nLambda refs work in state machines\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n docker run localstack/localstack\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n awslocal s3 mb s3://mybucket\r\n\n\n### Environment\n\n```markdown\n- OS: \r\n- LocalStack:\n```\n\n\n### Anything else?\n\nThis is based on a conversation I had with @dominikschubert \n", "before_files": [{"content": "import time\nfrom typing import Callable\n\nfrom typing_extensions import Literal\n\n\ndef wait_until(\n fn: 
Callable[[], bool],\n wait: float = 1.0,\n max_retries: int = 10,\n strategy: Literal[\"exponential\", \"static\", \"linear\"] = \"exponential\",\n _retries: int = 0,\n _max_wait: float = 240,\n) -> None:\n \"\"\"waits until a given condition is true, rechecking it periodically\"\"\"\n if max_retries < _retries:\n return\n completed = fn()\n if not completed:\n if wait > _max_wait:\n return\n time.sleep(wait)\n next_wait = wait # default: static\n if strategy == \"linear\":\n next_wait = (wait / _retries) * (_retries + 1)\n elif strategy == \"exponential\":\n next_wait = wait ** 2\n wait_until(fn, next_wait, max_retries, strategy, _retries + 1, _max_wait)\n", "path": "localstack/utils/generic/wait_utils.py"}, {"content": "from localstack.services.cloudformation.deployment_utils import PLACEHOLDER_RESOURCE_NAME\nfrom localstack.services.cloudformation.service_models import GenericBaseModel\nfrom localstack.utils.aws import aws_stack\n\n\nclass SFNActivity(GenericBaseModel):\n @staticmethod\n def cloudformation_type():\n return \"AWS::StepFunctions::Activity\"\n\n def fetch_state(self, stack_name, resources):\n activity_arn = self.physical_resource_id\n if not activity_arn:\n return None\n client = aws_stack.connect_to_service(\"stepfunctions\")\n result = client.describe_activity(activityArn=activity_arn)\n return result\n\n @staticmethod\n def get_deploy_templates():\n return {\n \"create\": {\n \"function\": \"create_activity\",\n \"parameters\": {\"name\": [\"Name\", PLACEHOLDER_RESOURCE_NAME], \"tags\": \"Tags\"},\n },\n \"delete\": {\n \"function\": \"delete_activity\",\n \"parameters\": {\"activityArn\": \"PhysicalResourceId\"},\n },\n }\n\n\nclass SFNStateMachine(GenericBaseModel):\n @staticmethod\n def cloudformation_type():\n return \"AWS::StepFunctions::StateMachine\"\n\n def get_resource_name(self):\n return self.props.get(\"StateMachineName\")\n\n def get_physical_resource_id(self, attribute=None, **kwargs):\n return self.props.get(\"stateMachineArn\")\n\n def fetch_state(self, stack_name, resources):\n sm_name = self.props.get(\"StateMachineName\") or self.resource_id\n sm_name = self.resolve_refs_recursively(stack_name, sm_name, resources)\n sfn_client = aws_stack.connect_to_service(\"stepfunctions\")\n state_machines = sfn_client.list_state_machines()[\"stateMachines\"]\n sm_arn = [m[\"stateMachineArn\"] for m in state_machines if m[\"name\"] == sm_name]\n if not sm_arn:\n return None\n result = sfn_client.describe_state_machine(stateMachineArn=sm_arn[0])\n return result\n\n def update_resource(self, new_resource, stack_name, resources):\n props = new_resource[\"Properties\"]\n client = aws_stack.connect_to_service(\"stepfunctions\")\n sm_arn = self.props.get(\"stateMachineArn\")\n if not sm_arn:\n self.state = self.fetch_state(stack_name=stack_name, resources=resources)\n sm_arn = self.state[\"stateMachineArn\"]\n kwargs = {\n \"stateMachineArn\": sm_arn,\n \"definition\": props[\"DefinitionString\"],\n }\n return client.update_state_machine(**kwargs)\n\n @staticmethod\n def get_deploy_templates():\n return {\n \"create\": {\n \"function\": \"create_state_machine\",\n \"parameters\": {\n \"name\": [\"StateMachineName\", PLACEHOLDER_RESOURCE_NAME],\n \"definition\": \"DefinitionString\",\n \"roleArn\": \"RoleArn\",\n },\n },\n \"delete\": {\n \"function\": \"delete_state_machine\",\n \"parameters\": {\"stateMachineArn\": \"PhysicalResourceId\"},\n },\n }\n", "path": "localstack/services/cloudformation/models/stepfunctions.py"}], "after_files": [{"content": "import time\nfrom 
typing import Callable\n\nfrom typing_extensions import Literal\n\n\ndef wait_until(\n fn: Callable[[], bool],\n wait: float = 1.0,\n max_retries: int = 10,\n strategy: Literal[\"exponential\", \"static\", \"linear\"] = \"exponential\",\n _retries: int = 0,\n _max_wait: float = 240,\n) -> None:\n \"\"\"waits until a given condition is true, rechecking it periodically\"\"\"\n if max_retries < _retries:\n return\n completed = fn()\n if not completed:\n if wait > _max_wait:\n return\n time.sleep(wait)\n next_wait = wait # default: static\n if strategy == \"linear\":\n next_wait = (wait / _retries) * (_retries + 1)\n elif strategy == \"exponential\":\n next_wait = wait * 2\n wait_until(fn, next_wait, max_retries, strategy, _retries + 1, _max_wait)\n", "path": "localstack/utils/generic/wait_utils.py"}, {"content": "import re\nfrom typing import Dict\n\nfrom localstack.services.cloudformation.deployment_utils import PLACEHOLDER_RESOURCE_NAME\nfrom localstack.services.cloudformation.service_models import GenericBaseModel\nfrom localstack.utils.aws import aws_stack\n\n\nclass SFNActivity(GenericBaseModel):\n @staticmethod\n def cloudformation_type():\n return \"AWS::StepFunctions::Activity\"\n\n def fetch_state(self, stack_name, resources):\n activity_arn = self.physical_resource_id\n if not activity_arn:\n return None\n client = aws_stack.connect_to_service(\"stepfunctions\")\n result = client.describe_activity(activityArn=activity_arn)\n return result\n\n @staticmethod\n def get_deploy_templates():\n return {\n \"create\": {\n \"function\": \"create_activity\",\n \"parameters\": {\"name\": [\"Name\", PLACEHOLDER_RESOURCE_NAME], \"tags\": \"Tags\"},\n },\n \"delete\": {\n \"function\": \"delete_activity\",\n \"parameters\": {\"activityArn\": \"PhysicalResourceId\"},\n },\n }\n\n\nclass SFNStateMachine(GenericBaseModel):\n @staticmethod\n def cloudformation_type():\n return \"AWS::StepFunctions::StateMachine\"\n\n def get_resource_name(self):\n return self.props.get(\"StateMachineName\")\n\n def get_physical_resource_id(self, attribute=None, **kwargs):\n return self.props.get(\"stateMachineArn\")\n\n def fetch_state(self, stack_name, resources):\n sm_name = self.props.get(\"StateMachineName\") or self.resource_id\n sm_name = self.resolve_refs_recursively(stack_name, sm_name, resources)\n sfn_client = aws_stack.connect_to_service(\"stepfunctions\")\n state_machines = sfn_client.list_state_machines()[\"stateMachines\"]\n sm_arn = [m[\"stateMachineArn\"] for m in state_machines if m[\"name\"] == sm_name]\n if not sm_arn:\n return None\n result = sfn_client.describe_state_machine(stateMachineArn=sm_arn[0])\n return result\n\n def update_resource(self, new_resource, stack_name, resources):\n props = new_resource[\"Properties\"]\n client = aws_stack.connect_to_service(\"stepfunctions\")\n sm_arn = self.props.get(\"stateMachineArn\")\n if not sm_arn:\n self.state = self.fetch_state(stack_name=stack_name, resources=resources)\n sm_arn = self.state[\"stateMachineArn\"]\n kwargs = {\n \"stateMachineArn\": sm_arn,\n \"definition\": props[\"DefinitionString\"],\n }\n return client.update_state_machine(**kwargs)\n\n @classmethod\n def get_deploy_templates(cls):\n def _create_params(params, **kwargs):\n def _get_definition(params):\n definition_str = params.get(\"DefinitionString\")\n substitutions = params.get(\"DefinitionSubstitutions\")\n if substitutions is not None:\n definition_str = _apply_substitutions(definition_str, substitutions)\n return definition_str\n\n return {\n \"name\": 
params.get(\"StateMachineName\", PLACEHOLDER_RESOURCE_NAME),\n \"definition\": _get_definition(params),\n \"roleArn\": params.get(\"RoleArn\"),\n \"type\": params.get(\"StateMachineTyp\", None),\n }\n\n return {\n \"create\": {\n \"function\": \"create_state_machine\",\n \"parameters\": _create_params,\n },\n \"delete\": {\n \"function\": \"delete_state_machine\",\n \"parameters\": {\"stateMachineArn\": \"PhysicalResourceId\"},\n },\n }\n\n\ndef _apply_substitutions(definition: str, substitutions: Dict[str, str]) -> str:\n substitution_regex = re.compile(\"\\\\${[a-zA-Z0-9_]+}\") # might be a bit too strict in some cases\n tokens = substitution_regex.findall(definition)\n result = definition\n for token in tokens:\n raw_token = token[2:-1] # strip ${ and }\n if raw_token not in substitutions.keys():\n raise\n result = result.replace(token, substitutions[raw_token])\n\n return result\n", "path": "localstack/services/cloudformation/models/stepfunctions.py"}]} | 1,533 | 654 |
gh_patches_debug_32187 | rasdani/github-patches | git_diff | beeware__toga-1307 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Toga slider widget in android has 1 fewer tick counts than in Windows
**Describe the bug**
The toga.Slider widget when defined with range=(2,20), tick_count=19 instantiates a slider in Windows which has 19 tick values.
However, when the same code is built and run for Android (both in the emulator and an actual Android phone), the slider only
has 18 tick values. In addition, the property Slider.value increases by only 18/19 of the expected step per tick.
**To Reproduce**
I wrote a simple example which demonstrates the bug clearly. I build and compiled like this:
briefcase update android
briefcase build android
briefcase run android
Here is the source code from app.py.
import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
TIMER_MIN=1
TIMER_MAX=19
class slidetest(toga.App):
def startup(self):
main_box = toga.Box()
self.timer_label=toga.Label(text="Counter: ")
self.timer_label.style.width=200
self.timer_label.style.padding=5
self.timer_slide=toga.Slider(range=(TIMER_MIN,TIMER_MAX),tick_count=(TIMER_MAX-TIMER_MIN+1),on_change=(self.timer_change),
on_release=(self.timer_set))
self.timer_slide.style.padding=5
self.timer_slide.style.padding_top=5
self.timer_slide.style.width=200
self.timer_slide.value=10.0
main_box.add(self.timer_slide)
main_box.add(self.timer_label)
self.main_window = toga.MainWindow(title=self.formal_name)
self.main_window.content = main_box
self.main_window.show()
def timer_change(self,widget):
self.timer_label.text="Counter: "+str(self.timer_slide.tick_value)
def timer_set(self,widget):
pass
def main():
return slidetest()
**Expected behavior**
This should produce a slider with 19 tick stops, and return values from 2.0 to 20.0.
The Android build of this code has a slider with 18 tick stops and returns values from 2.0 to 19.05263.
The screenshots below show captures of the Android emulator with the slider at its full-left and full-right positions. You can see the resulting tick_value is 1 and 18 respectively.
**Screenshots**


**Environment:**
- Operating System: Windows
- Python version:3.8
- Software versions:
- Briefcase: briefcase 0.3.5
- Toga:
- toga-core 0.3.0.dev27
- toga-winforms 0.3.0.dev27
arrow 1.1.0
binaryornot 0.4.4
certifi 2021.5.30
chardet 4.0.0
click 8.0.1
colorama 0.4.4
cookiecutter 1.7.3
gitdb 4.0.7
GitPython 3.1.17
idna 2.10
Jinja2 2.11.3
jinja2-time 0.2.0
MarkupSafe 2.0.1
oauthlib 3.1.1
pip 21.1.1
poyo 0.5.0
pycparser 2.20
python-dateutil 2.8.1
python-slugify 5.0.2
pythonnet 2.5.2
setuptools 56.0.0
six 1.16.0
smmap 4.0.0
text-unidecode 1.3
toml 0.10.2
travertino 0.1.3
urllib3 1.26.5
wheel 0.36.2
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
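The arithmetic behind the reported 19.05263 is worth spelling out. The sketch below is illustrative only (plain Python, not Toga backend code); it assumes the buggy mapping divides the range by the tick count, while the corrected mapping divides by tick count minus one, which is what the patch further down does.

```python
# An Android SeekBar's progress runs from 0 to max *inclusive*, so a slider
# with 19 tick stops needs max = tick_count - 1 = 18.
minimum, maximum, tick_count = 2, 20, 19

# Buggy mapping: dividing by tick_count means the last stop never reaches `maximum`.
buggy = [p * (maximum - minimum) / tick_count + minimum for p in range(tick_count)]
print(buggy[0], buggy[-1])   # 2.0 19.05263157894737

# Corrected mapping: dividing by (tick_count - 1) covers the full 2.0 .. 20.0 range.
fixed = [p * (maximum - minimum) / (tick_count - 1) + minimum for p in range(tick_count)]
print(fixed[0], fixed[-1])   # 2.0 20.0
```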
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/android/toga_android/widgets/slider.py`
Content:
```
1 from travertino.size import at_least
2
3 from ..libs.android.view import View__MeasureSpec
4 from ..libs.android.widget import SeekBar, SeekBar__OnSeekBarChangeListener
5 from .base import Widget
6
7
8 class TogaOnSeekBarChangeListener(SeekBar__OnSeekBarChangeListener):
9 def __init__(self, impl):
10 super().__init__()
11 self.impl = impl
12
13 def onProgressChanged(self, _view, _progress, _from_user):
14 if self.impl.interface.on_change:
15 self.impl.interface.on_change(widget=self.impl.interface)
16
17 # Add two unused methods so that the Java interface is completely implemented.
18 def onStartTrackingTouch(self, native_seekbar):
19 pass
20
21 def onStopTrackingTouch(self, native_seekbar):
22 pass
23
24
25 # Since Android's SeekBar is always discrete,
26 # use a high number of steps for a "continuous" slider.
27 DEFAULT_NUMBER_OF_TICKS = 10000
28
29
30 class Slider(Widget):
31 def create(self):
32 self.native = SeekBar(self._native_activity)
33 self.native.setMax(DEFAULT_NUMBER_OF_TICKS)
34 self.native.setOnSeekBarChangeListener(TogaOnSeekBarChangeListener(self))
35
36 def get_value(self):
37 minimum, maximum = self.interface.range
38 n_steps = self.interface.tick_count
39 if n_steps is None:
40 n_steps = DEFAULT_NUMBER_OF_TICKS
41 return (self.native.getProgress() * (maximum - minimum) / n_steps) + minimum
42
43 def set_value(self, value):
44 minimum, maximum = self.interface.range
45 n_steps = self.interface.tick_count
46 if n_steps is None:
47 n_steps = DEFAULT_NUMBER_OF_TICKS
48 self.native.setProgress(int((maximum - value - minimum) * n_steps))
49
50 def set_range(self, range):
51 pass
52
53 def set_tick_count(self, tick_count):
54 if tick_count is None:
55 self.native.setMax(DEFAULT_NUMBER_OF_TICKS)
56 else:
57 self.native.setMax(int(tick_count) - 1)
58
59 def rehint(self):
60 self.native.measure(
61 View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED
62 )
63 self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())
64 self.interface.intrinsic.height = self.native.getMeasuredHeight()
65
66 def set_on_change(self, handler):
67 # No special handling required
68 pass
69
70 def set_on_press(self, handler):
71 self.interface.factory.not_implemented("Slider.set_on_press()")
72
73 def set_on_release(self, handler):
74 self.interface.factory.not_implemented("Slider.set_on_release()")
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/android/toga_android/widgets/slider.py b/src/android/toga_android/widgets/slider.py
--- a/src/android/toga_android/widgets/slider.py
+++ b/src/android/toga_android/widgets/slider.py
@@ -35,26 +35,37 @@
def get_value(self):
minimum, maximum = self.interface.range
- n_steps = self.interface.tick_count
- if n_steps is None:
- n_steps = DEFAULT_NUMBER_OF_TICKS
- return (self.native.getProgress() * (maximum - minimum) / n_steps) + minimum
+ if self.interface.tick_count is not None and self.interface.tick_count <= 1:
+ return minimum
+ toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS
+ android_slider_max = toga_tick_count - 1
+ tick_factor = (maximum - minimum) / android_slider_max
+ progress_scaled = self.native.getProgress() * tick_factor
+ result = progress_scaled + minimum
+ return result
def set_value(self, value):
minimum, maximum = self.interface.range
- n_steps = self.interface.tick_count
- if n_steps is None:
- n_steps = DEFAULT_NUMBER_OF_TICKS
- self.native.setProgress(int((maximum - value - minimum) * n_steps))
+ if self.interface.tick_count is not None and self.interface.tick_count <= 1:
+ android_progress = 0
+ else:
+ toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS
+ android_slider_max = toga_tick_count - 1
+ tick_factor = (maximum - minimum) / android_slider_max
+ android_progress = int((value - minimum) * tick_factor)
+ self.native.setProgress(android_progress)
def set_range(self, range):
pass
def set_tick_count(self, tick_count):
- if tick_count is None:
- self.native.setMax(DEFAULT_NUMBER_OF_TICKS)
+ # Since the Android slider slides from 0 to max inclusive, always subtract 1 from tick_count.
+ if self.interface.tick_count is None:
+ android_slider_max = DEFAULT_NUMBER_OF_TICKS - 1
else:
- self.native.setMax(int(tick_count) - 1)
+ android_slider_max = int(self.interface.tick_count - 1)
+ # Set the Android SeekBar max, clamping so it's non-negative.
+ self.native.setMax(max(0, android_slider_max))
def rehint(self):
self.native.measure(
| {"golden_diff": "diff --git a/src/android/toga_android/widgets/slider.py b/src/android/toga_android/widgets/slider.py\n--- a/src/android/toga_android/widgets/slider.py\n+++ b/src/android/toga_android/widgets/slider.py\n@@ -35,26 +35,37 @@\n \n def get_value(self):\n minimum, maximum = self.interface.range\n- n_steps = self.interface.tick_count\n- if n_steps is None:\n- n_steps = DEFAULT_NUMBER_OF_TICKS\n- return (self.native.getProgress() * (maximum - minimum) / n_steps) + minimum\n+ if self.interface.tick_count is not None and self.interface.tick_count <= 1:\n+ return minimum\n+ toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS\n+ android_slider_max = toga_tick_count - 1\n+ tick_factor = (maximum - minimum) / android_slider_max\n+ progress_scaled = self.native.getProgress() * tick_factor\n+ result = progress_scaled + minimum\n+ return result\n \n def set_value(self, value):\n minimum, maximum = self.interface.range\n- n_steps = self.interface.tick_count\n- if n_steps is None:\n- n_steps = DEFAULT_NUMBER_OF_TICKS\n- self.native.setProgress(int((maximum - value - minimum) * n_steps))\n+ if self.interface.tick_count is not None and self.interface.tick_count <= 1:\n+ android_progress = 0\n+ else:\n+ toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS\n+ android_slider_max = toga_tick_count - 1\n+ tick_factor = (maximum - minimum) / android_slider_max\n+ android_progress = int((value - minimum) * tick_factor)\n+ self.native.setProgress(android_progress)\n \n def set_range(self, range):\n pass\n \n def set_tick_count(self, tick_count):\n- if tick_count is None:\n- self.native.setMax(DEFAULT_NUMBER_OF_TICKS)\n+ # Since the Android slider slides from 0 to max inclusive, always subtract 1 from tick_count.\n+ if self.interface.tick_count is None:\n+ android_slider_max = DEFAULT_NUMBER_OF_TICKS - 1\n else:\n- self.native.setMax(int(tick_count) - 1)\n+ android_slider_max = int(self.interface.tick_count - 1)\n+ # Set the Android SeekBar max, clamping so it's non-negative.\n+ self.native.setMax(max(0, android_slider_max))\n \n def rehint(self):\n self.native.measure(\n", "issue": "Toga slider widget in android has 1 fewer tick counts than in Windows\n**Describe the bug**\r\nThe toga.Slider widget when defined with range=(2,20), tick_count=19 instantiates a slider in Windows which has 19 tick values.\r\nHowever when the same code is built and run for Android (both in the emulator and an actual Android phone), the slider only\r\nhas 18 tick values. In addition ,the property Slider.value returns a value which is 18/19 per tick.\r\n\r\n**To Reproduce**\r\nI wrote a simple example which demonstrates the bug clearly. 
I build and compiled like this:\r\n briefcase update android\r\n briefcase build android\r\n briefcase run android\r\n\r\nHere is the source code from app.py.\r\n\r\nimport toga\r\nfrom toga.style import Pack\r\nfrom toga.style.pack import COLUMN, ROW\r\n\r\nTIMER_MIN=1\r\nTIMER_MAX=19\r\n\r\nclass slidetest(toga.App):\r\n\r\n def startup(self):\r\n main_box = toga.Box()\r\n\r\n self.timer_label=toga.Label(text=\"Counter: \")\r\n self.timer_label.style.width=200\r\n self.timer_label.style.padding=5\r\n\r\n self.timer_slide=toga.Slider(range=(TIMER_MIN,TIMER_MAX),tick_count=(TIMER_MAX-TIMER_MIN+1),on_change=(self.timer_change),\r\n on_release=(self.timer_set))\r\n self.timer_slide.style.padding=5\r\n self.timer_slide.style.padding_top=5\r\n self.timer_slide.style.width=200\r\n self.timer_slide.value=10.0\r\n\r\n main_box.add(self.timer_slide)\r\n main_box.add(self.timer_label)\r\n\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n self.main_window.content = main_box\r\n self.main_window.show()\r\n\r\n def timer_change(self,widget):\r\n self.timer_label.text=\"Counter: \"+str(self.timer_slide.tick_value)\r\n\r\n def timer_set(self,widget):\r\n pass\r\n\r\ndef main():\r\n return slidetest()\r\n\r\n\r\n**Expected behavior**\r\nThis should produce a slider with 19 tick stops, and return values from 2.0 to 20.0. \r\n\r\nThe android build of this code has a slide with 18 tick stops and return values from 2.0 to 19.05263.\r\n\r\nScreenshots below just show screen capture of the android emulator with slider full left and full right position. You can see the resulting tick_value is 1 and 18 respectively.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n\r\n**Environment:**\r\n - Operating System: Windows\r\n - Python version:3.8\r\n - Software versions:\r\n - Briefcase: briefcase 0.3.5\r\n - Toga:\r\n - toga-core 0.3.0.dev27\r\n - toga-winforms 0.3.0.dev27\r\n \r\narrow 1.1.0\r\nbinaryornot 0.4.4\r\ncertifi 2021.5.30\r\nchardet 4.0.0\r\nclick 8.0.1\r\ncolorama 0.4.4\r\ncookiecutter 1.7.3\r\ngitdb 4.0.7\r\nGitPython 3.1.17\r\nidna 2.10\r\nJinja2 2.11.3\r\njinja2-time 0.2.0\r\nMarkupSafe 2.0.1\r\noauthlib 3.1.1\r\npip 21.1.1\r\npoyo 0.5.0\r\npycparser 2.20\r\npython-dateutil 2.8.1\r\npython-slugify 5.0.2\r\npythonnet 2.5.2\r\nsetuptools 56.0.0\r\nsix 1.16.0\r\nsmmap 4.0.0\r\ntext-unidecode 1.3\r\ntoml 0.10.2\r\ntravertino 0.1.3\r\nurllib3 1.26.5\r\nwheel 0.36.2\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\nToga slider widget in android has 1 fewer tick counts than in Windows\n**Describe the bug**\r\nThe toga.Slider widget when defined with range=(2,20), tick_count=19 instantiates a slider in Windows which has 19 tick values.\r\nHowever when the same code is built and run for Android (both in the emulator and an actual Android phone), the slider only\r\nhas 18 tick values. In addition ,the property Slider.value returns a value which is 18/19 per tick.\r\n\r\n**To Reproduce**\r\nI wrote a simple example which demonstrates the bug clearly. 
I build and compiled like this:\r\n briefcase update android\r\n briefcase build android\r\n briefcase run android\r\n\r\nHere is the source code from app.py.\r\n\r\nimport toga\r\nfrom toga.style import Pack\r\nfrom toga.style.pack import COLUMN, ROW\r\n\r\nTIMER_MIN=1\r\nTIMER_MAX=19\r\n\r\nclass slidetest(toga.App):\r\n\r\n def startup(self):\r\n main_box = toga.Box()\r\n\r\n self.timer_label=toga.Label(text=\"Counter: \")\r\n self.timer_label.style.width=200\r\n self.timer_label.style.padding=5\r\n\r\n self.timer_slide=toga.Slider(range=(TIMER_MIN,TIMER_MAX),tick_count=(TIMER_MAX-TIMER_MIN+1),on_change=(self.timer_change),\r\n on_release=(self.timer_set))\r\n self.timer_slide.style.padding=5\r\n self.timer_slide.style.padding_top=5\r\n self.timer_slide.style.width=200\r\n self.timer_slide.value=10.0\r\n\r\n main_box.add(self.timer_slide)\r\n main_box.add(self.timer_label)\r\n\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n self.main_window.content = main_box\r\n self.main_window.show()\r\n\r\n def timer_change(self,widget):\r\n self.timer_label.text=\"Counter: \"+str(self.timer_slide.tick_value)\r\n\r\n def timer_set(self,widget):\r\n pass\r\n\r\ndef main():\r\n return slidetest()\r\n\r\n\r\n**Expected behavior**\r\nThis should produce a slider with 19 tick stops, and return values from 2.0 to 20.0. \r\n\r\nThe android build of this code has a slide with 18 tick stops and return values from 2.0 to 19.05263.\r\n\r\nScreenshots below just show screen capture of the android emulator with slider full left and full right position. You can see the resulting tick_value is 1 and 18 respectively.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n\r\n**Environment:**\r\n - Operating System: Windows\r\n - Python version:3.8\r\n - Software versions:\r\n - Briefcase: briefcase 0.3.5\r\n - Toga:\r\n - toga-core 0.3.0.dev27\r\n - toga-winforms 0.3.0.dev27\r\n \r\narrow 1.1.0\r\nbinaryornot 0.4.4\r\ncertifi 2021.5.30\r\nchardet 4.0.0\r\nclick 8.0.1\r\ncolorama 0.4.4\r\ncookiecutter 1.7.3\r\ngitdb 4.0.7\r\nGitPython 3.1.17\r\nidna 2.10\r\nJinja2 2.11.3\r\njinja2-time 0.2.0\r\nMarkupSafe 2.0.1\r\noauthlib 3.1.1\r\npip 21.1.1\r\npoyo 0.5.0\r\npycparser 2.20\r\npython-dateutil 2.8.1\r\npython-slugify 5.0.2\r\npythonnet 2.5.2\r\nsetuptools 56.0.0\r\nsix 1.16.0\r\nsmmap 4.0.0\r\ntext-unidecode 1.3\r\ntoml 0.10.2\r\ntravertino 0.1.3\r\nurllib3 1.26.5\r\nwheel 0.36.2\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "from travertino.size import at_least\n\nfrom ..libs.android.view import View__MeasureSpec\nfrom ..libs.android.widget import SeekBar, SeekBar__OnSeekBarChangeListener\nfrom .base import Widget\n\n\nclass TogaOnSeekBarChangeListener(SeekBar__OnSeekBarChangeListener):\n def __init__(self, impl):\n super().__init__()\n self.impl = impl\n\n def onProgressChanged(self, _view, _progress, _from_user):\n if self.impl.interface.on_change:\n self.impl.interface.on_change(widget=self.impl.interface)\n\n # Add two unused methods so that the Java interface is completely implemented.\n def onStartTrackingTouch(self, native_seekbar):\n pass\n\n def onStopTrackingTouch(self, native_seekbar):\n pass\n\n\n# Since Android's SeekBar is always discrete,\n# use a high number of steps for a \"continuous\" slider.\nDEFAULT_NUMBER_OF_TICKS = 10000\n\n\nclass Slider(Widget):\n def create(self):\n self.native = SeekBar(self._native_activity)\n self.native.setMax(DEFAULT_NUMBER_OF_TICKS)\n 
self.native.setOnSeekBarChangeListener(TogaOnSeekBarChangeListener(self))\n\n def get_value(self):\n minimum, maximum = self.interface.range\n n_steps = self.interface.tick_count\n if n_steps is None:\n n_steps = DEFAULT_NUMBER_OF_TICKS\n return (self.native.getProgress() * (maximum - minimum) / n_steps) + minimum\n\n def set_value(self, value):\n minimum, maximum = self.interface.range\n n_steps = self.interface.tick_count\n if n_steps is None:\n n_steps = DEFAULT_NUMBER_OF_TICKS\n self.native.setProgress(int((maximum - value - minimum) * n_steps))\n\n def set_range(self, range):\n pass\n\n def set_tick_count(self, tick_count):\n if tick_count is None:\n self.native.setMax(DEFAULT_NUMBER_OF_TICKS)\n else:\n self.native.setMax(int(tick_count) - 1)\n\n def rehint(self):\n self.native.measure(\n View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED\n )\n self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())\n self.interface.intrinsic.height = self.native.getMeasuredHeight()\n\n def set_on_change(self, handler):\n # No special handling required\n pass\n\n def set_on_press(self, handler):\n self.interface.factory.not_implemented(\"Slider.set_on_press()\")\n\n def set_on_release(self, handler):\n self.interface.factory.not_implemented(\"Slider.set_on_release()\")\n", "path": "src/android/toga_android/widgets/slider.py"}], "after_files": [{"content": "from travertino.size import at_least\n\nfrom ..libs.android.view import View__MeasureSpec\nfrom ..libs.android.widget import SeekBar, SeekBar__OnSeekBarChangeListener\nfrom .base import Widget\n\n\nclass TogaOnSeekBarChangeListener(SeekBar__OnSeekBarChangeListener):\n def __init__(self, impl):\n super().__init__()\n self.impl = impl\n\n def onProgressChanged(self, _view, _progress, _from_user):\n if self.impl.interface.on_change:\n self.impl.interface.on_change(widget=self.impl.interface)\n\n # Add two unused methods so that the Java interface is completely implemented.\n def onStartTrackingTouch(self, native_seekbar):\n pass\n\n def onStopTrackingTouch(self, native_seekbar):\n pass\n\n\n# Since Android's SeekBar is always discrete,\n# use a high number of steps for a \"continuous\" slider.\nDEFAULT_NUMBER_OF_TICKS = 10000\n\n\nclass Slider(Widget):\n def create(self):\n self.native = SeekBar(self._native_activity)\n self.native.setMax(DEFAULT_NUMBER_OF_TICKS)\n self.native.setOnSeekBarChangeListener(TogaOnSeekBarChangeListener(self))\n\n def get_value(self):\n minimum, maximum = self.interface.range\n if self.interface.tick_count is not None and self.interface.tick_count <= 1:\n return minimum\n toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS\n android_slider_max = toga_tick_count - 1\n tick_factor = (maximum - minimum) / android_slider_max\n progress_scaled = self.native.getProgress() * tick_factor\n result = progress_scaled + minimum\n return result\n\n def set_value(self, value):\n minimum, maximum = self.interface.range\n if self.interface.tick_count is not None and self.interface.tick_count <= 1:\n android_progress = 0\n else:\n toga_tick_count = self.interface.tick_count or DEFAULT_NUMBER_OF_TICKS\n android_slider_max = toga_tick_count - 1\n tick_factor = (maximum - minimum) / android_slider_max\n android_progress = int((value - minimum) * tick_factor)\n self.native.setProgress(android_progress)\n\n def set_range(self, range):\n pass\n\n def set_tick_count(self, tick_count):\n # Since the Android slider slides from 0 to max inclusive, always subtract 1 from tick_count.\n if 
self.interface.tick_count is None:\n android_slider_max = DEFAULT_NUMBER_OF_TICKS - 1\n else:\n android_slider_max = int(self.interface.tick_count - 1)\n # Set the Android SeekBar max, clamping so it's non-negative.\n self.native.setMax(max(0, android_slider_max))\n\n def rehint(self):\n self.native.measure(\n View__MeasureSpec.UNSPECIFIED, View__MeasureSpec.UNSPECIFIED\n )\n self.interface.intrinsic.width = at_least(self.native.getMeasuredWidth())\n self.interface.intrinsic.height = self.native.getMeasuredHeight()\n\n def set_on_change(self, handler):\n # No special handling required\n pass\n\n def set_on_press(self, handler):\n self.interface.factory.not_implemented(\"Slider.set_on_press()\")\n\n def set_on_release(self, handler):\n self.interface.factory.not_implemented(\"Slider.set_on_release()\")\n", "path": "src/android/toga_android/widgets/slider.py"}]} | 3,087 | 552 |
gh_patches_debug_19491 | rasdani/github-patches | git_diff | sunpy__sunpy-5493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix CROTA keyword in EUI maps
Currently EUI maps have a `CROTA` keyword, which by the FITS standard should really be a `CROTA2` keyword. This results in the warning:
```python
/home/docs/checkouts/readthedocs.org/user_builds/solar-orbiter-python/envs/latest/lib/python3.8/site-packages/astropy/wcs/wcs.py:482: FITSFixedWarning: CROTA = 2.486914995997215 / [deg] rotation angle
keyword looks very much like CROTAn but isn't.
wcsprm = _wcs.Wcsprm(header=header_bytes, key=key,
```
It would be good to
- Check if CROTA is in the header and CROTA2 isn't
- If so, rename the CROTA keyword to CROTA2
--- END ISSUE ---
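The rename amounts to a small, defensive header fix-up. A minimal sketch of the idea (using a plain dict as a stand-in for the map's header metadata, not actual sunpy code) looks like this:

```python
# Header metadata carrying the non-standard keyword, as in the warning above.
meta = {"CROTA": 2.486914995997215}

# Rename CROTA to CROTA2 only when CROTA2 is absent, so headers that already
# provide a valid CROTA2 are left untouched.
if "CROTA" in meta and "CROTA2" not in meta:
    meta["CROTA2"] = meta.pop("CROTA")

print(meta)  # {'CROTA2': 2.486914995997215}
```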
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/map/sources/solo.py`
Content:
```
1 """
2 Solar Orbiter Map subclass definitions.
3 """
4 import astropy.units as u
5 from astropy.coordinates import CartesianRepresentation
6 from astropy.visualization import ImageNormalize, LinearStretch
7
8 from sunpy.coordinates import HeliocentricInertial
9 from sunpy.map import GenericMap
10 from sunpy.map.sources.source_type import source_stretch
11 from sunpy.time import parse_time
12
13 __all__ = ['EUIMap']
14
15
16 class EUIMap(GenericMap):
17 """
18 EUI Image Map
19
20 The Extreme Ultraviolet Imager (EUI) is a remote sensing instrument onboard the
21 Solar Orbiter (SolO) spacecraft. EUI has three telescopes that image the Sun in
22 Lyman-alpha (1216 Å) and the EUV (174 Å and 304 Å). The three telescopes are the
23 Full Sun Imager (FSI) and two High Resolution Imagers (HRI). The FSI images the
24 whole Sun in both 174 Å and 304 Å. The EUV and Lyman-alpha HRI telescopes image a
25 1000"-by-1000" patch in 174 Å and 1216 Å, respectively.
26
27 References
28 ----------
29 * `Solar Orbiter Mission Page <https://sci.esa.int/web/solar-orbiter/>`__
30 * `EUI Instrument Page <https://wwwbis.sidc.be/EUI/EUI/EUI/EUI/EUI/>`__
31 * `Instrument Paper <https://doi.org/10.1051/0004-6361/201936663>`__
32 """
33
34 def __init__(self, data, header, **kwargs):
35 super().__init__(data, header, **kwargs)
36 self._nickname = self.detector
37 self.plot_settings['cmap'] = self._get_cmap_name()
38 self.plot_settings['norm'] = ImageNormalize(
39 stretch=source_stretch(self.meta, LinearStretch()), clip=False)
40
41 @property
42 def processing_level(self):
43 if self.meta.get('level'):
44 # The level number is prepended by the letter L
45 return int(self.meta.get('level')[1:])
46
47 @property
48 def exposure_time(self):
49 return self.meta.get('xposure', 0.0) * self.timeunit
50
51 @property
52 def date(self):
53 t = self.meta.get('date-avg')
54 timesys = self.meta.get('timesys')
55 return parse_time(t, scale=timesys.lower())
56
57 @property
58 def _supported_observer_coordinates(self):
59 return [(('hcix_obs', 'hciy_obs', 'hciz_obs'),
60 {'x': self.meta.get('hcix_obs'),
61 'y': self.meta.get('hciy_obs'),
62 'z': self.meta.get('hciz_obs'),
63 'unit': u.m,
64 'representation_type': CartesianRepresentation,
65 'frame': HeliocentricInertial})] + super()._supported_observer_coordinates
66
67 @classmethod
68 def is_datasource_for(cls, data, header, **kwargs):
69 """Determines if header corresponds to an EUI image"""
70 is_solo = 'solar orbiter' in str(header.get('obsrvtry', '')).lower()
71 is_eui = str(header.get('instrume', '')).startswith('EUI')
72 return is_solo and is_eui
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/map/sources/solo.py b/sunpy/map/sources/solo.py
--- a/sunpy/map/sources/solo.py
+++ b/sunpy/map/sources/solo.py
@@ -5,6 +5,7 @@
from astropy.coordinates import CartesianRepresentation
from astropy.visualization import ImageNormalize, LinearStretch
+from sunpy import log
from sunpy.coordinates import HeliocentricInertial
from sunpy.map import GenericMap
from sunpy.map.sources.source_type import source_stretch
@@ -38,6 +39,10 @@
self.plot_settings['norm'] = ImageNormalize(
stretch=source_stretch(self.meta, LinearStretch()), clip=False)
+ if 'CROTA' in self.meta and 'CROTA2' not in self.meta:
+ log.debug("Renaming 'CROTA' to 'CROTA2'")
+ self.meta['CROTA2'] = self.meta.pop('CROTA')
+
@property
def processing_level(self):
if self.meta.get('level'):
| {"golden_diff": "diff --git a/sunpy/map/sources/solo.py b/sunpy/map/sources/solo.py\n--- a/sunpy/map/sources/solo.py\n+++ b/sunpy/map/sources/solo.py\n@@ -5,6 +5,7 @@\n from astropy.coordinates import CartesianRepresentation\n from astropy.visualization import ImageNormalize, LinearStretch\n \n+from sunpy import log\n from sunpy.coordinates import HeliocentricInertial\n from sunpy.map import GenericMap\n from sunpy.map.sources.source_type import source_stretch\n@@ -38,6 +39,10 @@\n self.plot_settings['norm'] = ImageNormalize(\n stretch=source_stretch(self.meta, LinearStretch()), clip=False)\n \n+ if 'CROTA' in self.meta and 'CROTA2' not in self.meta:\n+ log.debug(\"Renaming 'CROTA' to 'CROTA2'\")\n+ self.meta['CROTA2'] = self.meta.pop('CROTA')\n+\n @property\n def processing_level(self):\n if self.meta.get('level'):\n", "issue": "Fix CROTA keyword in EUI maps\nCurrently EUI maps have a `CROTA` keyword, which by the FITS standard should really be a `CROTA2` keyword. This results in the warning\r\n```python\r\n/home/docs/checkouts/readthedocs.org/user_builds/solar-orbiter-python/envs/latest/lib/python3.8/site-packages/astropy/wcs/wcs.py:482: FITSFixedWarning: CROTA = 2.486914995997215 / [deg] rotation angle\r\nkeyword looks very much like CROTAn but isn't.\r\n wcsprm = _wcs.Wcsprm(header=header_bytes, key=key,\r\n```\r\nIt would be good to\r\n- Check if CROTA is in the header and CROTA2 isn't\r\n- If so, rename the CROTA keyword to CROTA2\n", "before_files": [{"content": "\"\"\"\nSolar Orbiter Map subclass definitions.\n\"\"\"\nimport astropy.units as u\nfrom astropy.coordinates import CartesianRepresentation\nfrom astropy.visualization import ImageNormalize, LinearStretch\n\nfrom sunpy.coordinates import HeliocentricInertial\nfrom sunpy.map import GenericMap\nfrom sunpy.map.sources.source_type import source_stretch\nfrom sunpy.time import parse_time\n\n__all__ = ['EUIMap']\n\n\nclass EUIMap(GenericMap):\n \"\"\"\n EUI Image Map\n\n The Extreme Ultraviolet Imager (EUI) is a remote sensing instrument onboard the\n Solar Orbiter (SolO) spacecraft. EUI has three telescopes that image the Sun in\n Lyman-alpha (1216 \u00c5) and the EUV (174 \u00c5 and 304 \u00c5). The three telescopes are the\n Full Sun Imager (FSI) and two High Resolution Imagers (HRI). The FSI images the\n whole Sun in both 174 \u00c5 and 304 \u00c5. 
The EUV and Lyman-alpha HRI telescopes image a\n 1000\"-by-1000\" patch in 174 \u00c5 and 1216 \u00c5, respectively.\n\n References\n ----------\n * `Solar Orbiter Mission Page <https://sci.esa.int/web/solar-orbiter/>`__\n * `EUI Instrument Page <https://wwwbis.sidc.be/EUI/EUI/EUI/EUI/EUI/>`__\n * `Instrument Paper <https://doi.org/10.1051/0004-6361/201936663>`__\n \"\"\"\n\n def __init__(self, data, header, **kwargs):\n super().__init__(data, header, **kwargs)\n self._nickname = self.detector\n self.plot_settings['cmap'] = self._get_cmap_name()\n self.plot_settings['norm'] = ImageNormalize(\n stretch=source_stretch(self.meta, LinearStretch()), clip=False)\n\n @property\n def processing_level(self):\n if self.meta.get('level'):\n # The level number is prepended by the letter L\n return int(self.meta.get('level')[1:])\n\n @property\n def exposure_time(self):\n return self.meta.get('xposure', 0.0) * self.timeunit\n\n @property\n def date(self):\n t = self.meta.get('date-avg')\n timesys = self.meta.get('timesys')\n return parse_time(t, scale=timesys.lower())\n\n @property\n def _supported_observer_coordinates(self):\n return [(('hcix_obs', 'hciy_obs', 'hciz_obs'),\n {'x': self.meta.get('hcix_obs'),\n 'y': self.meta.get('hciy_obs'),\n 'z': self.meta.get('hciz_obs'),\n 'unit': u.m,\n 'representation_type': CartesianRepresentation,\n 'frame': HeliocentricInertial})] + super()._supported_observer_coordinates\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an EUI image\"\"\"\n is_solo = 'solar orbiter' in str(header.get('obsrvtry', '')).lower()\n is_eui = str(header.get('instrume', '')).startswith('EUI')\n return is_solo and is_eui\n", "path": "sunpy/map/sources/solo.py"}], "after_files": [{"content": "\"\"\"\nSolar Orbiter Map subclass definitions.\n\"\"\"\nimport astropy.units as u\nfrom astropy.coordinates import CartesianRepresentation\nfrom astropy.visualization import ImageNormalize, LinearStretch\n\nfrom sunpy import log\nfrom sunpy.coordinates import HeliocentricInertial\nfrom sunpy.map import GenericMap\nfrom sunpy.map.sources.source_type import source_stretch\nfrom sunpy.time import parse_time\n\n__all__ = ['EUIMap']\n\n\nclass EUIMap(GenericMap):\n \"\"\"\n EUI Image Map\n\n The Extreme Ultraviolet Imager (EUI) is a remote sensing instrument onboard the\n Solar Orbiter (SolO) spacecraft. EUI has three telescopes that image the Sun in\n Lyman-alpha (1216 \u00c5) and the EUV (174 \u00c5 and 304 \u00c5). The three telescopes are the\n Full Sun Imager (FSI) and two High Resolution Imagers (HRI). The FSI images the\n whole Sun in both 174 \u00c5 and 304 \u00c5. 
The EUV and Lyman-alpha HRI telescopes image a\n 1000\"-by-1000\" patch in 174 \u00c5 and 1216 \u00c5, respectively.\n\n References\n ----------\n * `Solar Orbiter Mission Page <https://sci.esa.int/web/solar-orbiter/>`__\n * `EUI Instrument Page <https://wwwbis.sidc.be/EUI/EUI/EUI/EUI/EUI/>`__\n * `Instrument Paper <https://doi.org/10.1051/0004-6361/201936663>`__\n \"\"\"\n\n def __init__(self, data, header, **kwargs):\n super().__init__(data, header, **kwargs)\n self._nickname = self.detector\n self.plot_settings['cmap'] = self._get_cmap_name()\n self.plot_settings['norm'] = ImageNormalize(\n stretch=source_stretch(self.meta, LinearStretch()), clip=False)\n\n if 'CROTA' in self.meta and 'CROTA2' not in self.meta:\n log.debug(\"Renaming 'CROTA' to 'CROTA2'\")\n self.meta['CROTA2'] = self.meta.pop('CROTA')\n\n @property\n def processing_level(self):\n if self.meta.get('level'):\n # The level number is prepended by the letter L\n return int(self.meta.get('level')[1:])\n\n @property\n def exposure_time(self):\n return self.meta.get('xposure', 0.0) * self.timeunit\n\n @property\n def date(self):\n t = self.meta.get('date-avg')\n timesys = self.meta.get('timesys')\n return parse_time(t, scale=timesys.lower())\n\n @property\n def _supported_observer_coordinates(self):\n return [(('hcix_obs', 'hciy_obs', 'hciz_obs'),\n {'x': self.meta.get('hcix_obs'),\n 'y': self.meta.get('hciy_obs'),\n 'z': self.meta.get('hciz_obs'),\n 'unit': u.m,\n 'representation_type': CartesianRepresentation,\n 'frame': HeliocentricInertial})] + super()._supported_observer_coordinates\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an EUI image\"\"\"\n is_solo = 'solar orbiter' in str(header.get('obsrvtry', '')).lower()\n is_eui = str(header.get('instrume', '')).startswith('EUI')\n return is_solo and is_eui\n", "path": "sunpy/map/sources/solo.py"}]} | 1,344 | 233 |
gh_patches_debug_10058 | rasdani/github-patches | git_diff | docker__docker-py-1972 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build context (.tar) is not prepared properly
Hey,
This morning I updated to version `3.1.1`; however, using this version I'm getting a weird error from the docker-engine build:
```
ERROR: Error processing tar file(exit status 1): mkdir /foodir/bardir: no such file or directory
```
and the actual building does not start.
It took me some time to realise this is related to the update I got this morning.
Reverting back to version `3.0.1`, I could build again.
*NOTE*: `/foodir/bardir` is censored due to security policy in my company,
so for the sake of this issue, let's assume this is the context:
- Dockerfile
- foodir
- bardir
- file
Also, the path in the error did start with `/`, so I kept it there.
--- END ISSUE ---
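The patch further down guards the directory walk with an `os.path.islink()` check. The snippet below is a self-contained illustration of the behaviour that check works around (the paths are created only for the demonstration and assume a POSIX system):

```python
import os
import tempfile

# os.path.isdir() follows symlinks, so a symlink pointing at a directory passes
# the isdir() check and the walker recurses *through* the link instead of
# archiving it as a link.
with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, "foodir", "bardir")
    os.makedirs(target)
    link = os.path.join(root, "link-to-bardir")
    os.symlink(target, link)

    print(os.path.isdir(link))                                # True  (follows the link)
    print(os.path.isdir(link) and not os.path.islink(link))   # False (treated as a link)
```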
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/build.py`
Content:
```
1 import os
2 import re
3
4 from ..constants import IS_WINDOWS_PLATFORM
5 from fnmatch import fnmatch
6 from itertools import chain
7 from .utils import create_archive
8
9
10 def tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):
11 root = os.path.abspath(path)
12 exclude = exclude or []
13 return create_archive(
14 files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),
15 root=root, fileobj=fileobj, gzip=gzip
16 )
17
18
19 _SEP = re.compile('/|\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')
20
21
22 def exclude_paths(root, patterns, dockerfile=None):
23 """
24 Given a root directory path and a list of .dockerignore patterns, return
25 an iterator of all paths (both regular files and directories) in the root
26 directory that do *not* match any of the patterns.
27
28 All paths returned are relative to the root.
29 """
30
31 if dockerfile is None:
32 dockerfile = 'Dockerfile'
33
34 def split_path(p):
35 return [pt for pt in re.split(_SEP, p) if pt and pt != '.']
36
37 def normalize(p):
38 # Leading and trailing slashes are not relevant. Yes,
39 # "foo.py/" must exclude the "foo.py" regular file. "."
40 # components are not relevant either, even if the whole
41 # pattern is only ".", as the Docker reference states: "For
42 # historical reasons, the pattern . is ignored."
43 # ".." component must be cleared with the potential previous
44 # component, regardless of whether it exists: "A preprocessing
45 # step [...] eliminates . and .. elements using Go's
46 # filepath.".
47 i = 0
48 split = split_path(p)
49 while i < len(split):
50 if split[i] == '..':
51 del split[i]
52 if i > 0:
53 del split[i - 1]
54 i -= 1
55 else:
56 i += 1
57 return split
58
59 patterns = (
60 (True, normalize(p[1:]))
61 if p.startswith('!') else
62 (False, normalize(p))
63 for p in patterns)
64 patterns = list(reversed(list(chain(
65 # Exclude empty patterns such as "." or the empty string.
66 filter(lambda p: p[1], patterns),
67 # Always include the Dockerfile and .dockerignore
68 [(True, split_path(dockerfile)), (True, ['.dockerignore'])]))))
69 return set(walk(root, patterns))
70
71
72 def walk(root, patterns, default=True):
73 """
74 A collection of file lying below root that should be included according to
75 patterns.
76 """
77
78 def match(p):
79 if p[1][0] == '**':
80 rec = (p[0], p[1][1:])
81 return [p] + (match(rec) if rec[1] else [rec])
82 elif fnmatch(f, p[1][0]):
83 return [(p[0], p[1][1:])]
84 else:
85 return []
86
87 for f in os.listdir(root):
88 cur = os.path.join(root, f)
89 # The patterns if recursing in that directory.
90 sub = list(chain(*(match(p) for p in patterns)))
91 # Whether this file is explicitely included / excluded.
92 hit = next((p[0] for p in sub if not p[1]), None)
93 # Whether this file is implicitely included / excluded.
94 matched = default if hit is None else hit
95 sub = list(filter(lambda p: p[1], sub))
96 if os.path.isdir(cur):
97 # Entirely skip directories if there are no chance any subfile will
98 # be included.
99 if all(not p[0] for p in sub) and not matched:
100 continue
101 # I think this would greatly speed up dockerignore handling by not
102 # recursing into directories we are sure would be entirely
103 # included, and only yielding the directory itself, which will be
104 # recursively archived anyway. However the current unit test expect
105 # the full list of subfiles and I'm not 100% sure it would make no
106 # difference yet.
107 # if all(p[0] for p in sub) and matched:
108 # yield f
109 # continue
110 children = False
111 for r in (os.path.join(f, p) for p in walk(cur, sub, matched)):
112 yield r
113 children = True
114 # The current unit tests expect directories only under those
115 # conditions. It might be simplifiable though.
116 if (not sub or not children) and hit or hit is None and default:
117 yield f
118 elif matched:
119 yield f
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/utils/build.py b/docker/utils/build.py
--- a/docker/utils/build.py
+++ b/docker/utils/build.py
@@ -93,7 +93,7 @@
# Whether this file is implicitely included / excluded.
matched = default if hit is None else hit
sub = list(filter(lambda p: p[1], sub))
- if os.path.isdir(cur):
+ if os.path.isdir(cur) and not os.path.islink(cur):
# Entirely skip directories if there are no chance any subfile will
# be included.
if all(not p[0] for p in sub) and not matched:
| {"golden_diff": "diff --git a/docker/utils/build.py b/docker/utils/build.py\n--- a/docker/utils/build.py\n+++ b/docker/utils/build.py\n@@ -93,7 +93,7 @@\n # Whether this file is implicitely included / excluded.\n matched = default if hit is None else hit\n sub = list(filter(lambda p: p[1], sub))\n- if os.path.isdir(cur):\n+ if os.path.isdir(cur) and not os.path.islink(cur):\n # Entirely skip directories if there are no chance any subfile will\n # be included.\n if all(not p[0] for p in sub) and not matched:\n", "issue": "Build context (.tar) is not prepared properly\nHey,\r\n\r\nThis morning I've updated to version `3.1.1` however, using this version i'm getting wired error from docker-engine build:\r\n```\r\nERROR: Error processing tar file(exit status 1): mkdir /foodir/bardir: no such file or directory\r\n```\r\nand the actual building does not start.\r\ntook me some time to realise this is related to update i got this morning, \r\nReverting back to version `3.0.1`, I could build again.\r\n\r\n*NOTE*: `/foodir/bardir` is censored due to security policy in my company,\r\nso for the sake of this issue, lets assume this is the context:\r\n- Dockerfile\r\n- foodir\r\n - bardir\r\n - file\r\n\r\nAlso, path in error did start with `/` so i kept it there.\r\n\n", "before_files": [{"content": "import os\nimport re\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom fnmatch import fnmatch\nfrom itertools import chain\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\n_SEP = re.compile('/|\\\\\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n def split_path(p):\n return [pt for pt in re.split(_SEP, p) if pt and pt != '.']\n\n def normalize(p):\n # Leading and trailing slashes are not relevant. Yes,\n # \"foo.py/\" must exclude the \"foo.py\" regular file. \".\"\n # components are not relevant either, even if the whole\n # pattern is only \".\", as the Docker reference states: \"For\n # historical reasons, the pattern . is ignored.\"\n # \"..\" component must be cleared with the potential previous\n # component, regardless of whether it exists: \"A preprocessing\n # step [...] eliminates . and .. 
elements using Go's\n # filepath.\".\n i = 0\n split = split_path(p)\n while i < len(split):\n if split[i] == '..':\n del split[i]\n if i > 0:\n del split[i - 1]\n i -= 1\n else:\n i += 1\n return split\n\n patterns = (\n (True, normalize(p[1:]))\n if p.startswith('!') else\n (False, normalize(p))\n for p in patterns)\n patterns = list(reversed(list(chain(\n # Exclude empty patterns such as \".\" or the empty string.\n filter(lambda p: p[1], patterns),\n # Always include the Dockerfile and .dockerignore\n [(True, split_path(dockerfile)), (True, ['.dockerignore'])]))))\n return set(walk(root, patterns))\n\n\ndef walk(root, patterns, default=True):\n \"\"\"\n A collection of file lying below root that should be included according to\n patterns.\n \"\"\"\n\n def match(p):\n if p[1][0] == '**':\n rec = (p[0], p[1][1:])\n return [p] + (match(rec) if rec[1] else [rec])\n elif fnmatch(f, p[1][0]):\n return [(p[0], p[1][1:])]\n else:\n return []\n\n for f in os.listdir(root):\n cur = os.path.join(root, f)\n # The patterns if recursing in that directory.\n sub = list(chain(*(match(p) for p in patterns)))\n # Whether this file is explicitely included / excluded.\n hit = next((p[0] for p in sub if not p[1]), None)\n # Whether this file is implicitely included / excluded.\n matched = default if hit is None else hit\n sub = list(filter(lambda p: p[1], sub))\n if os.path.isdir(cur):\n # Entirely skip directories if there are no chance any subfile will\n # be included.\n if all(not p[0] for p in sub) and not matched:\n continue\n # I think this would greatly speed up dockerignore handling by not\n # recursing into directories we are sure would be entirely\n # included, and only yielding the directory itself, which will be\n # recursively archived anyway. However the current unit test expect\n # the full list of subfiles and I'm not 100% sure it would make no\n # difference yet.\n # if all(p[0] for p in sub) and matched:\n # yield f\n # continue\n children = False\n for r in (os.path.join(f, p) for p in walk(cur, sub, matched)):\n yield r\n children = True\n # The current unit tests expect directories only under those\n # conditions. It might be simplifiable though.\n if (not sub or not children) and hit or hit is None and default:\n yield f\n elif matched:\n yield f\n", "path": "docker/utils/build.py"}], "after_files": [{"content": "import os\nimport re\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom fnmatch import fnmatch\nfrom itertools import chain\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\n_SEP = re.compile('/|\\\\\\\\') if IS_WINDOWS_PLATFORM else re.compile('/')\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n def split_path(p):\n return [pt for pt in re.split(_SEP, p) if pt and pt != '.']\n\n def normalize(p):\n # Leading and trailing slashes are not relevant. Yes,\n # \"foo.py/\" must exclude the \"foo.py\" regular file. 
\".\"\n # components are not relevant either, even if the whole\n # pattern is only \".\", as the Docker reference states: \"For\n # historical reasons, the pattern . is ignored.\"\n # \"..\" component must be cleared with the potential previous\n # component, regardless of whether it exists: \"A preprocessing\n # step [...] eliminates . and .. elements using Go's\n # filepath.\".\n i = 0\n split = split_path(p)\n while i < len(split):\n if split[i] == '..':\n del split[i]\n if i > 0:\n del split[i - 1]\n i -= 1\n else:\n i += 1\n return split\n\n patterns = (\n (True, normalize(p[1:]))\n if p.startswith('!') else\n (False, normalize(p))\n for p in patterns)\n patterns = list(reversed(list(chain(\n # Exclude empty patterns such as \".\" or the empty string.\n filter(lambda p: p[1], patterns),\n # Always include the Dockerfile and .dockerignore\n [(True, split_path(dockerfile)), (True, ['.dockerignore'])]))))\n return set(walk(root, patterns))\n\n\ndef walk(root, patterns, default=True):\n \"\"\"\n A collection of file lying below root that should be included according to\n patterns.\n \"\"\"\n\n def match(p):\n if p[1][0] == '**':\n rec = (p[0], p[1][1:])\n return [p] + (match(rec) if rec[1] else [rec])\n elif fnmatch(f, p[1][0]):\n return [(p[0], p[1][1:])]\n else:\n return []\n\n for f in os.listdir(root):\n cur = os.path.join(root, f)\n # The patterns if recursing in that directory.\n sub = list(chain(*(match(p) for p in patterns)))\n # Whether this file is explicitely included / excluded.\n hit = next((p[0] for p in sub if not p[1]), None)\n # Whether this file is implicitely included / excluded.\n matched = default if hit is None else hit\n sub = list(filter(lambda p: p[1], sub))\n if os.path.isdir(cur) and not os.path.islink(cur):\n # Entirely skip directories if there are no chance any subfile will\n # be included.\n if all(not p[0] for p in sub) and not matched:\n continue\n # I think this would greatly speed up dockerignore handling by not\n # recursing into directories we are sure would be entirely\n # included, and only yielding the directory itself, which will be\n # recursively archived anyway. However the current unit test expect\n # the full list of subfiles and I'm not 100% sure it would make no\n # difference yet.\n # if all(p[0] for p in sub) and matched:\n # yield f\n # continue\n children = False\n for r in (os.path.join(f, p) for p in walk(cur, sub, matched)):\n yield r\n children = True\n # The current unit tests expect directories only under those\n # conditions. It might be simplifiable though.\n if (not sub or not children) and hit or hit is None and default:\n yield f\n elif matched:\n yield f\n", "path": "docker/utils/build.py"}]} | 1,713 | 139 |
gh_patches_debug_742 | rasdani/github-patches | git_diff | streamlink__streamlink-3952 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add lxml dependency
### Checklist
- [X] This is a feature request and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22)
### Description
Streamlink should finally switch to a proper HTML/XML parser for extracting data instead of using cheap regex workarounds which don't work properly. I've already commented on this issue last year:
https://github.com/streamlink/streamlink/issues/3241#issuecomment-706486239
The reason why I'm suggesting this again right now is that I was trying to fix the deutschewelle plugin (https://dw.com) yesterday and ran into issues with the `itertags` utility method, which is based on simple regexes for iterating HTML nodes and their attributes+body. `itertags` for example does not work with nested nodes, which makes adding ridiculous custom regexes necessary. Just take a look at this madness:
https://github.com/streamlink/streamlink/blob/3668770d608f0fab54d40a46acd6720a97f63775/src/streamlink/plugins/deutschewelle.py#L18-L29
With `lxml` (https://lxml.de/), HTML page contents can be parsed and the data extracted via XPath queries and/or the respective API methods. The methods are similar to python's native `xml.etree.ElementTree`, which itself is considered too slow and unsafe in certain cases. I am by no means an expert regarding python's standard library though, so if someone has better insight here, please share. In regards to packaging, this lib is available on basically every packaging system and adding it as a dependency here only has benefits.
I'd suggest that we add `lxml` as a dependency now and start using it for extracting data from HTML documents. The validation schema methods could be improved for this as well. There's also the `parse_xml` utility method, which is currently based on the native module.
Comments?
--- END ISSUE ---
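For a sense of what the proposed switch buys, an lxml-based extraction might look like the following rough sketch (the URL and XPath expression are made-up placeholders, not Streamlink code):

```python
import requests
from lxml import html

# Parse the page once, then query it structurally instead of with ad-hoc regexes.
page = html.fromstring(requests.get("https://example.com/stream").text)

# XPath copes with nested nodes and attribute lookups directly.
for src in page.xpath("//div[@class='player']//source/@src"):
    print(src)
```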
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import codecs
3 from os import environ, path
4 from sys import argv, path as sys_path
5
6 from setuptools import find_packages, setup
7
8 import versioneer
9
10
11 data_files = []
12 deps = [
13 "requests>=2.26.0,<3.0",
14 "isodate",
15 "websocket-client>=0.58.0",
16 # Support for SOCKS proxies
17 "PySocks!=1.5.7,>=1.5.6",
18 ]
19
20 # for encrypted streams
21 if environ.get("STREAMLINK_USE_PYCRYPTO"):
22 deps.append("pycrypto")
23 else:
24 # this version of pycryptodome is known to work and has a Windows wheel for py2.7, py3.3-3.6
25 deps.append("pycryptodome>=3.4.3,<4")
26
27 # for localization
28 if environ.get("STREAMLINK_USE_PYCOUNTRY"):
29 deps.append("pycountry")
30 else:
31 deps.append("iso-639")
32 deps.append("iso3166")
33
34 # When we build an egg for the Win32 bootstrap we don"t want dependency
35 # information built into it.
36 if environ.get("NO_DEPS"):
37 deps = []
38
39 this_directory = path.abspath(path.dirname(__file__))
40 srcdir = path.join(this_directory, "src/")
41 sys_path.insert(0, srcdir)
42
43 with codecs.open(path.join(this_directory, "README.md"), 'r', "utf8") as f:
44 long_description = f.read()
45
46
47 def is_wheel_for_windows():
48 if "bdist_wheel" in argv:
49 names = ["win32", "win-amd64", "cygwin"]
50 length = len(argv)
51 for pos in range(argv.index("bdist_wheel") + 1, length):
52 if argv[pos] == "--plat-name" and pos + 1 < length:
53 return argv[pos + 1] in names
54 elif argv[pos][:12] == "--plat-name=":
55 return argv[pos][12:] in names
56 return False
57
58
59 entry_points = {
60 "console_scripts": ["streamlink=streamlink_cli.main:main"]
61 }
62
63 if is_wheel_for_windows():
64 entry_points["gui_scripts"] = ["streamlinkw=streamlink_cli.main:main"]
65
66
67 additional_files = [
68 ("share/man/man1", ["docs/_build/man/streamlink.1"])
69 ]
70
71 for destdir, srcfiles in additional_files:
72 files = []
73 for srcfile in srcfiles:
74 if path.exists(srcfile):
75 files.append(srcfile)
76 if files:
77 data_files.append((destdir, files))
78
79
80 setup(name="streamlink",
81 version=versioneer.get_version(),
82 cmdclass=versioneer.get_cmdclass(),
83 description="Streamlink is a command-line utility that extracts streams "
84 "from various services and pipes them into a video player of "
85 "choice.",
86 long_description=long_description,
87 long_description_content_type="text/markdown",
88 url="https://github.com/streamlink/streamlink",
89 project_urls={
90 "Documentation": "https://streamlink.github.io/",
91 "Tracker": "https://github.com/streamlink/streamlink/issues",
92 "Source": "https://github.com/streamlink/streamlink",
93 "Funding": "https://opencollective.com/streamlink"
94 },
95 author="Streamlink",
96 # temp until we have a mailing list / global email
97 author_email="[email protected]",
98 license="Simplified BSD",
99 packages=find_packages("src"),
100 package_dir={"": "src"},
101 package_data={"streamlink.plugins": [".removed"]},
102 entry_points=entry_points,
103 data_files=data_files,
104 install_requires=deps,
105 test_suite="tests",
106 python_requires=">=3.6, <4",
107 classifiers=["Development Status :: 5 - Production/Stable",
108 "License :: OSI Approved :: BSD License",
109 "Environment :: Console",
110 "Intended Audience :: End Users/Desktop",
111 "Operating System :: POSIX",
112 "Operating System :: Microsoft :: Windows",
113 "Operating System :: MacOS",
114 "Programming Language :: Python :: 3",
115 "Programming Language :: Python :: 3 :: Only",
116 "Programming Language :: Python :: 3.6",
117 "Programming Language :: Python :: 3.7",
118 "Programming Language :: Python :: 3.8",
119 "Programming Language :: Python :: 3.9",
120 "Topic :: Internet :: WWW/HTTP",
121 "Topic :: Multimedia :: Sound/Audio",
122 "Topic :: Multimedia :: Video",
123 "Topic :: Utilities"])
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,6 +12,7 @@
deps = [
"requests>=2.26.0,<3.0",
"isodate",
+ "lxml>=4.6.3",
"websocket-client>=0.58.0",
# Support for SOCKS proxies
"PySocks!=1.5.7,>=1.5.6",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,6 +12,7 @@\n deps = [\n \"requests>=2.26.0,<3.0\",\n \"isodate\",\n+ \"lxml>=4.6.3\",\n \"websocket-client>=0.58.0\",\n # Support for SOCKS proxies\n \"PySocks!=1.5.7,>=1.5.6\",\n", "issue": "Add lxml dependency\n### Checklist\n\n- [X] This is a feature request and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin requests](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22feature+request%22)\n\n### Description\n\nStreamlink should finally switch to a proper HTML/XML parser for extracting data instead of using cheap regex workarounds which don't work properly. I've already commented on this issue last year:\r\nhttps://github.com/streamlink/streamlink/issues/3241#issuecomment-706486239\r\n\r\nThe reason why I'm suggesting this again right now is that I was trying to fix the deutschewelle plugin (https://dw.com) yesterday and ran into issues with the `itertags` utility method, which is based on simple regexes for iterating HTML nodes and their attributes+body. `itertags` for example does not work with nested nodes, which makes adding ridiculous custom regexes necessary. Just take a look at this madness:\r\nhttps://github.com/streamlink/streamlink/blob/3668770d608f0fab54d40a46acd6720a97f63775/src/streamlink/plugins/deutschewelle.py#L18-L29\r\n\r\nWith `lxml` (https://lxml.de/), HTML page contents can be parsed and the data extracted via XPath queries and/or the respective API methods. The methods are similar to python's native `xml.etree.ElementTree`, which itself is considered too slow and unsafe in certain cases. I am by no means an expert regarding python's standard library though, so if someone has better insight here, please share. In regards to packaging, this lib is available on basically every packaging system and adding it as a dependency here only has benefits.\r\n\r\nI'd suggest that we add `lxml` as a dependency now and start using it for extracting data from HTML documents. The validation schema methods could be improved for this as well. 
There's also the `parse_xml` utility method, which is currently based on the native module.\r\n\r\nComments?\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nfrom os import environ, path\nfrom sys import argv, path as sys_path\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\n\ndata_files = []\ndeps = [\n \"requests>=2.26.0,<3.0\",\n \"isodate\",\n \"websocket-client>=0.58.0\",\n # Support for SOCKS proxies\n \"PySocks!=1.5.7,>=1.5.6\",\n]\n\n# for encrypted streams\nif environ.get(\"STREAMLINK_USE_PYCRYPTO\"):\n deps.append(\"pycrypto\")\nelse:\n # this version of pycryptodome is known to work and has a Windows wheel for py2.7, py3.3-3.6\n deps.append(\"pycryptodome>=3.4.3,<4\")\n\n# for localization\nif environ.get(\"STREAMLINK_USE_PYCOUNTRY\"):\n deps.append(\"pycountry\")\nelse:\n deps.append(\"iso-639\")\n deps.append(\"iso3166\")\n\n# When we build an egg for the Win32 bootstrap we don\"t want dependency\n# information built into it.\nif environ.get(\"NO_DEPS\"):\n deps = []\n\nthis_directory = path.abspath(path.dirname(__file__))\nsrcdir = path.join(this_directory, \"src/\")\nsys_path.insert(0, srcdir)\n\nwith codecs.open(path.join(this_directory, \"README.md\"), 'r', \"utf8\") as f:\n long_description = f.read()\n\n\ndef is_wheel_for_windows():\n if \"bdist_wheel\" in argv:\n names = [\"win32\", \"win-amd64\", \"cygwin\"]\n length = len(argv)\n for pos in range(argv.index(\"bdist_wheel\") + 1, length):\n if argv[pos] == \"--plat-name\" and pos + 1 < length:\n return argv[pos + 1] in names\n elif argv[pos][:12] == \"--plat-name=\":\n return argv[pos][12:] in names\n return False\n\n\nentry_points = {\n \"console_scripts\": [\"streamlink=streamlink_cli.main:main\"]\n}\n\nif is_wheel_for_windows():\n entry_points[\"gui_scripts\"] = [\"streamlinkw=streamlink_cli.main:main\"]\n\n\nadditional_files = [\n (\"share/man/man1\", [\"docs/_build/man/streamlink.1\"])\n]\n\nfor destdir, srcfiles in additional_files:\n files = []\n for srcfile in srcfiles:\n if path.exists(srcfile):\n files.append(srcfile)\n if files:\n data_files.append((destdir, files))\n\n\nsetup(name=\"streamlink\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"Streamlink is a command-line utility that extracts streams \"\n \"from various services and pipes them into a video player of \"\n \"choice.\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/streamlink/streamlink\",\n project_urls={\n \"Documentation\": \"https://streamlink.github.io/\",\n \"Tracker\": \"https://github.com/streamlink/streamlink/issues\",\n \"Source\": \"https://github.com/streamlink/streamlink\",\n \"Funding\": \"https://opencollective.com/streamlink\"\n },\n author=\"Streamlink\",\n # temp until we have a mailing list / global email\n author_email=\"[email protected]\",\n license=\"Simplified BSD\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n package_data={\"streamlink.plugins\": [\".removed\"]},\n entry_points=entry_points,\n data_files=data_files,\n install_requires=deps,\n test_suite=\"tests\",\n python_requires=\">=3.6, <4\",\n classifiers=[\"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: BSD License\",\n \"Environment :: Console\",\n \"Intended Audience :: End Users/Desktop\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Programming Language :: Python :: 3\",\n 
\"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Multimedia :: Video\",\n \"Topic :: Utilities\"])\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport codecs\nfrom os import environ, path\nfrom sys import argv, path as sys_path\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\n\ndata_files = []\ndeps = [\n \"requests>=2.26.0,<3.0\",\n \"isodate\",\n \"lxml>=4.6.3\",\n \"websocket-client>=0.58.0\",\n # Support for SOCKS proxies\n \"PySocks!=1.5.7,>=1.5.6\",\n]\n\n# for encrypted streams\nif environ.get(\"STREAMLINK_USE_PYCRYPTO\"):\n deps.append(\"pycrypto\")\nelse:\n # this version of pycryptodome is known to work and has a Windows wheel for py2.7, py3.3-3.6\n deps.append(\"pycryptodome>=3.4.3,<4\")\n\n# for localization\nif environ.get(\"STREAMLINK_USE_PYCOUNTRY\"):\n deps.append(\"pycountry\")\nelse:\n deps.append(\"iso-639\")\n deps.append(\"iso3166\")\n\n# When we build an egg for the Win32 bootstrap we don\"t want dependency\n# information built into it.\nif environ.get(\"NO_DEPS\"):\n deps = []\n\nthis_directory = path.abspath(path.dirname(__file__))\nsrcdir = path.join(this_directory, \"src/\")\nsys_path.insert(0, srcdir)\n\nwith codecs.open(path.join(this_directory, \"README.md\"), 'r', \"utf8\") as f:\n long_description = f.read()\n\n\ndef is_wheel_for_windows():\n if \"bdist_wheel\" in argv:\n names = [\"win32\", \"win-amd64\", \"cygwin\"]\n length = len(argv)\n for pos in range(argv.index(\"bdist_wheel\") + 1, length):\n if argv[pos] == \"--plat-name\" and pos + 1 < length:\n return argv[pos + 1] in names\n elif argv[pos][:12] == \"--plat-name=\":\n return argv[pos][12:] in names\n return False\n\n\nentry_points = {\n \"console_scripts\": [\"streamlink=streamlink_cli.main:main\"]\n}\n\nif is_wheel_for_windows():\n entry_points[\"gui_scripts\"] = [\"streamlinkw=streamlink_cli.main:main\"]\n\n\nadditional_files = [\n (\"share/man/man1\", [\"docs/_build/man/streamlink.1\"])\n]\n\nfor destdir, srcfiles in additional_files:\n files = []\n for srcfile in srcfiles:\n if path.exists(srcfile):\n files.append(srcfile)\n if files:\n data_files.append((destdir, files))\n\n\nsetup(name=\"streamlink\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"Streamlink is a command-line utility that extracts streams \"\n \"from various services and pipes them into a video player of \"\n \"choice.\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/streamlink/streamlink\",\n project_urls={\n \"Documentation\": \"https://streamlink.github.io/\",\n \"Tracker\": \"https://github.com/streamlink/streamlink/issues\",\n \"Source\": \"https://github.com/streamlink/streamlink\",\n \"Funding\": \"https://opencollective.com/streamlink\"\n },\n author=\"Streamlink\",\n # temp until we have a mailing list / global email\n author_email=\"[email protected]\",\n license=\"Simplified BSD\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n package_data={\"streamlink.plugins\": [\".removed\"]},\n entry_points=entry_points,\n data_files=data_files,\n install_requires=deps,\n test_suite=\"tests\",\n python_requires=\">=3.6, <4\",\n classifiers=[\"Development Status :: 5 - 
Production/Stable\",\n \"License :: OSI Approved :: BSD License\",\n \"Environment :: Console\",\n \"Intended Audience :: End Users/Desktop\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Multimedia :: Video\",\n \"Topic :: Utilities\"])\n", "path": "setup.py"}]} | 2,015 | 106 |
gh_patches_debug_5701 | rasdani/github-patches | git_diff | getpelican__pelican-3094 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
A dead link on PyPI for the contributions and feedback
I just stumbled upon [Pelican's page on PyPI](https://pypi.org/project/pelican/) and found that the [link for the contributions and feedback](https://pypi.org/project/pelican/CONTRIBUTING.rst) is dead. Perhaps it needs to be updated?
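One possible fix (just a sketch of the idea, not the project's actual code) would be to rewrite the relative link into an absolute URL before the README is handed to setuptools:

```python
README = open('README.rst', encoding='utf-8').read()

# Relative links such as <CONTRIBUTING.rst> break when rendered on PyPI,
# so point them at an absolute documentation URL instead
# (the target URL below is an assumption).
README = README.replace(
    "<CONTRIBUTING.rst>",
    "<https://docs.getpelican.com/en/latest/contribute.html>",
)
```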
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from os import walk
4 from os.path import join, relpath
5
6 from setuptools import find_packages, setup
7
8
9 version = "4.8.0"
10
11 requires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments',
12 'docutils>=0.15', 'pytz >= 0a', 'blinker', 'unidecode',
13 'python-dateutil', 'rich']
14
15 entry_points = {
16 'console_scripts': [
17 'pelican = pelican.__main__:main',
18 'pelican-import = pelican.tools.pelican_import:main',
19 'pelican-quickstart = pelican.tools.pelican_quickstart:main',
20 'pelican-themes = pelican.tools.pelican_themes:main',
21 'pelican-plugins = pelican.plugins._utils:list_plugins'
22 ]
23 }
24
25 README = open('README.rst', encoding='utf-8').read()
26 CHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()
27
28 description = '\n'.join([README, CHANGELOG])
29
30 setup(
31 name='pelican',
32 version=version,
33 url='https://getpelican.com/',
34 author='Justin Mayer',
35 author_email='[email protected]',
36 description="Static site generator supporting reStructuredText and "
37 "Markdown source content.",
38 project_urls={
39 'Documentation': 'https://docs.getpelican.com/',
40 'Funding': 'https://donate.getpelican.com/',
41 'Source': 'https://github.com/getpelican/pelican',
42 'Tracker': 'https://github.com/getpelican/pelican/issues',
43 },
44 keywords='static web site generator SSG reStructuredText Markdown',
45 license='AGPLv3',
46 long_description=description,
47 long_description_content_type='text/x-rst',
48 packages=find_packages(),
49 include_package_data=True, # includes all in MANIFEST.in if in package
50 # NOTE : This will collect any files that happen to be in the themes
51 # directory, even though they may not be checked into version control.
52 package_data={ # pelican/themes is not a package, so include manually
53 'pelican': [relpath(join(root, name), 'pelican')
54 for root, _, names in walk(join('pelican', 'themes'))
55 for name in names],
56 },
57 install_requires=requires,
58 extras_require={
59 'Markdown': ['markdown~=3.1.1']
60 },
61 entry_points=entry_points,
62 classifiers=[
63 'Development Status :: 5 - Production/Stable',
64 'Environment :: Console',
65 'Framework :: Pelican',
66 'License :: OSI Approved :: GNU Affero General Public License v3',
67 'Operating System :: OS Independent',
68 'Programming Language :: Python :: 3',
69 'Programming Language :: Python :: 3.7',
70 'Programming Language :: Python :: 3.8',
71 'Programming Language :: Python :: 3.9',
72 'Programming Language :: Python :: 3.10',
73 'Programming Language :: Python :: Implementation :: CPython',
74 'Topic :: Internet :: WWW/HTTP',
75 'Topic :: Software Development :: Libraries :: Python Modules',
76 ],
77 test_suite='pelican.tests',
78 )
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,6 +25,13 @@
README = open('README.rst', encoding='utf-8').read()
CHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()
+# Relative links in the README must be converted to absolute URL's
+# so that they render correctly on PyPI.
+README = README.replace(
+ "<CONTRIBUTING.rst>",
+ "<https://docs.getpelican.com/en/latest/contribute.html>",
+)
+
description = '\n'.join([README, CHANGELOG])
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,6 +25,13 @@\n README = open('README.rst', encoding='utf-8').read()\n CHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()\n \n+# Relative links in the README must be converted to absolute URL's\n+# so that they render correctly on PyPI.\n+README = README.replace(\n+ \"<CONTRIBUTING.rst>\",\n+ \"<https://docs.getpelican.com/en/latest/contribute.html>\",\n+)\n+\n description = '\\n'.join([README, CHANGELOG])\n \n setup(\n", "issue": "A dead link on PyPI for the contributions and feedback\nI just stumbled upon [Pelican's page in PyPI](https://pypi.org/project/pelican/) and found that the l[ink for the contributions and feedback](https://pypi.org/project/pelican/CONTRIBUTING.rst) is dead. Perhaps, it needs to be updated?\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom os import walk\nfrom os.path import join, relpath\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"4.8.0\"\n\nrequires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments',\n 'docutils>=0.15', 'pytz >= 0a', 'blinker', 'unidecode',\n 'python-dateutil', 'rich']\n\nentry_points = {\n 'console_scripts': [\n 'pelican = pelican.__main__:main',\n 'pelican-import = pelican.tools.pelican_import:main',\n 'pelican-quickstart = pelican.tools.pelican_quickstart:main',\n 'pelican-themes = pelican.tools.pelican_themes:main',\n 'pelican-plugins = pelican.plugins._utils:list_plugins'\n ]\n}\n\nREADME = open('README.rst', encoding='utf-8').read()\nCHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()\n\ndescription = '\\n'.join([README, CHANGELOG])\n\nsetup(\n name='pelican',\n version=version,\n url='https://getpelican.com/',\n author='Justin Mayer',\n author_email='[email protected]',\n description=\"Static site generator supporting reStructuredText and \"\n \"Markdown source content.\",\n project_urls={\n 'Documentation': 'https://docs.getpelican.com/',\n 'Funding': 'https://donate.getpelican.com/',\n 'Source': 'https://github.com/getpelican/pelican',\n 'Tracker': 'https://github.com/getpelican/pelican/issues',\n },\n keywords='static web site generator SSG reStructuredText Markdown',\n license='AGPLv3',\n long_description=description,\n long_description_content_type='text/x-rst',\n packages=find_packages(),\n include_package_data=True, # includes all in MANIFEST.in if in package\n # NOTE : This will collect any files that happen to be in the themes\n # directory, even though they may not be checked into version control.\n package_data={ # pelican/themes is not a package, so include manually\n 'pelican': [relpath(join(root, name), 'pelican')\n for root, _, names in walk(join('pelican', 'themes'))\n for name in names],\n },\n install_requires=requires,\n extras_require={\n 'Markdown': ['markdown~=3.1.1']\n },\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Pelican',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n test_suite='pelican.tests',\n)\n", "path": "setup.py"}], 
"after_files": [{"content": "#!/usr/bin/env python\n\nfrom os import walk\nfrom os.path import join, relpath\n\nfrom setuptools import find_packages, setup\n\n\nversion = \"4.8.0\"\n\nrequires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments',\n 'docutils>=0.15', 'pytz >= 0a', 'blinker', 'unidecode',\n 'python-dateutil', 'rich']\n\nentry_points = {\n 'console_scripts': [\n 'pelican = pelican.__main__:main',\n 'pelican-import = pelican.tools.pelican_import:main',\n 'pelican-quickstart = pelican.tools.pelican_quickstart:main',\n 'pelican-themes = pelican.tools.pelican_themes:main',\n 'pelican-plugins = pelican.plugins._utils:list_plugins'\n ]\n}\n\nREADME = open('README.rst', encoding='utf-8').read()\nCHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()\n\n# Relative links in the README must be converted to absolute URL's\n# so that they render correctly on PyPI.\nREADME = README.replace(\n \"<CONTRIBUTING.rst>\",\n \"<https://docs.getpelican.com/en/latest/contribute.html>\",\n)\n\ndescription = '\\n'.join([README, CHANGELOG])\n\nsetup(\n name='pelican',\n version=version,\n url='https://getpelican.com/',\n author='Justin Mayer',\n author_email='[email protected]',\n description=\"Static site generator supporting reStructuredText and \"\n \"Markdown source content.\",\n project_urls={\n 'Documentation': 'https://docs.getpelican.com/',\n 'Funding': 'https://donate.getpelican.com/',\n 'Source': 'https://github.com/getpelican/pelican',\n 'Tracker': 'https://github.com/getpelican/pelican/issues',\n },\n keywords='static web site generator SSG reStructuredText Markdown',\n license='AGPLv3',\n long_description=description,\n long_description_content_type='text/x-rst',\n packages=find_packages(),\n include_package_data=True, # includes all in MANIFEST.in if in package\n # NOTE : This will collect any files that happen to be in the themes\n # directory, even though they may not be checked into version control.\n package_data={ # pelican/themes is not a package, so include manually\n 'pelican': [relpath(join(root, name), 'pelican')\n for root, _, names in walk(join('pelican', 'themes'))\n for name in names],\n },\n install_requires=requires,\n extras_require={\n 'Markdown': ['markdown~=3.1.1']\n },\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Pelican',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n test_suite='pelican.tests',\n)\n", "path": "setup.py"}]} | 1,194 | 143 |
gh_patches_debug_29680 | rasdani/github-patches | git_diff | ARM-DOE__ACT-664 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
act.utils.decode_present_weather classification tables
* ACT version: 1.4.2
* Python version: 3.9.16
* Operating System: Windows 10
### Description
I was working on decoding the FD70 present weather codes for the ATMOS FD70. It appears the FD70 uses both WMO 4680 and 4677 in the numeric code. For its METAR code it uses WMO 4678.
When the code encounters a value of 88 (snow pellets, a code it adopted from the 4677 table), it errors and stops. A suggestion would possibly be adding in some of the WMO 4677 codes to fill the number gaps where applicable.
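As a rough sketch of that suggestion (not a tested patch), the `weather` lookup dict in `decode_present_weather` in `act/utils/inst_utils.py` could be supplemented with the few 4677-only codes before decoding:

```python
# Existing WMO 4680 hash from decode_present_weather (truncated here).
weather_4680 = {
    0: 'No significant weather observed',
    # ... rest of the 4680 table ...
}

# Codes the FD70 borrows from WMO table 4677 that are missing from 4680
# (88 = showers of snow pellets or small hail).
weather_4677_extras = {
    88: 'Shower(s) of snow pellets or small hail',
}

# Merged table used for the lookup.
weather = {**weather_4680, **weather_4677_extras}
```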
### What I Did
```
input:
act.utils.decode_present_weather(ds,variable='present_wx0')
output:
File D:\anaconda3\lib\site-packages\act\utils\inst_utils.py:133 in <listcomp>
wx_type = [weather[d] for d in data.values]
KeyError: 88
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `act/utils/inst_utils.py`
Content:
```
1 """
2 Functions containing utilities for instruments.
3
4 """
5
6
7 def decode_present_weather(ds, variable=None, decoded_name=None):
8 """
9 This function is to decode codes reported from automatic weather stations suchas the PWD22.
10 This is based on WMO Table 4680.
11
12 Parameters
13 ----------
14 ds : xarray.Dataset
15 ACT or Xarray dataset from which to convert codes
16 variable : string
17 Variable to decode
18 decoded_name : string
19 New variable name to store updated labels
20
21 Returns
22 -------
23 ds : xarray.Dataset
24 Returns dataset with new decoded data
25
26 References
27 ----------
28 WMO Manual on Code Volume I.1 A-360.
29 https://library.wmo.int/doc_num.php?explnum_id=10235
30
31 """
32
33 # Check to ensure that a variable name is passed
34 if variable is None:
35 raise ValueError('You must specify a variable')
36
37 if variable not in ds:
38 raise ValueError('Variable not in the dataset')
39
40 # Define the weather hash
41 weather = {
42 0: 'No significant weather observed',
43 1: 'Clouds generally dissolving or becoming less developed during the past hour',
44 2: 'State of the sky on the whole unchanged during the past hour',
45 3: 'Clouds generally forming or developing during the past hour',
46 4: 'Haze or smoke, or dust in suspension in the air, visibility >= 1 km',
47 5: 'Haze or smoke, or dust in suspension in the air, visibility < 1 km',
48 10: 'Mist',
49 11: 'Diamond dust',
50 12: 'Distant lightning',
51 18: 'Squalls',
52 20: 'Fog',
53 21: 'Precipitation',
54 22: 'Drizzle (not freezing) or snow grains',
55 23: 'Rain (not freezing)',
56 24: 'Snow',
57 25: 'Freezing drizzle or freezing rain',
58 26: 'Thunderstorm (with or without precipitation)',
59 27: 'Blowing or drifting snow or sand',
60 28: 'Blowing or drifting snow or sand, visibility >= 1 km',
61 29: 'Blowing or drifting snow or sand, visibility < 1 km',
62 30: 'Fog',
63 31: 'Fog or ice fog in patches',
64 32: 'Fog or ice fog, has become thinner during the past hour',
65 33: 'Fog or ice fog, no appreciable change during the past hour',
66 34: 'Fog or ice fog, has begun or become thicker during the past hour',
67 35: 'Fog, depositing rime',
68 40: 'Precipitation',
69 41: 'Precipitation, slight or moderate',
70 42: 'Precipitation, heavy',
71 43: 'Liquid precipitation, slight or moderate',
72 44: 'Liquid precipitation, heavy',
73 45: 'Solid precipitation, slight or moderate',
74 46: 'Solid precipitation, heavy',
75 47: 'Freezing precipitation, slight or moderate',
76 48: 'Freezing precipitation, heavy',
77 50: 'Drizzle',
78 51: 'Drizzle, not freezing, slight',
79 52: 'Drizzle, not freezing, moderate',
80 53: 'Drizzle, not freezing, heavy',
81 54: 'Drizzle, freezing, slight',
82 55: 'Drizzle, freezing, moderate',
83 56: 'Drizzle, freezing, heavy',
84 57: 'Drizzle and rain, slight',
85 58: 'Drizzle and rain, moderate or heavy',
86 60: 'Rain',
87 61: 'Rain, not freezing, slight',
88 62: 'Rain, not freezing, moderate',
89 63: 'Rain, not freezing, heavy',
90 64: 'Rain, freezing, slight',
91 65: 'Rain, freezing, moderate',
92 66: 'Rain, freezing, heavy',
93 67: 'Rain (or drizzle) and snow, slight',
94 68: 'Rain (or drizzle) and snow, moderate or heavy',
95 70: 'Snow',
96 71: 'Snow, light',
97 72: 'Snow, moderate',
98 73: 'Snow, heavy',
99 74: 'Ice pellets, slight',
100 75: 'Ice pellets, moderate',
101 76: 'Ice pellets, heavy',
102 77: 'Snow grains',
103 78: 'Ice crystals',
104 80: 'Shower(s) or Intermittent Precipitation',
105 81: 'Rain shower(s) or intermittent rain, slight',
106 82: 'Rain shower(s) or intermittent rain, moderate',
107 83: 'Rain shower(s) or intermittent rain, heavy',
108 84: 'Rain shower(s) or intermittent rain, violent',
109 85: 'Snow shower(s) or intermittent snow, slight',
110 86: 'Snow shower(s) or intermittent snow, moderate',
111 87: 'Snow shower(s) or intermittent snow, heavy',
112 89: 'Hail',
113 90: 'Thunderstorm',
114 91: 'Thunderstorm, slight or moderate, with no precipitation',
115 92: 'Thunderstorm, slight or moderate, with rain showers and/or snow showers',
116 93: 'Thunderstorm, slight or moderate, with hail',
117 94: 'Thunderstorm, heavy, with no precipitation',
118 95: 'Thunderstorm, heavy, with rain showers and/or snow showers',
119 96: 'Thunderstorm, heavy, with hail',
120 99: 'Tornado',
121 -9999: 'Missing',
122 }
123
124 # If a decoded name is not passed, make one
125 if decoded_name is None:
126 decoded_name = variable + '_decoded'
127
128 # Get data and fill nans with -9999
129 data = ds[variable]
130 data = data.fillna(-9999)
131
132 # Get the weather type for each code
133 wx_type = [weather[d] for d in data.values]
134
135 # Massage the data array to set back in the dataset
136 data.values = wx_type
137 if 'long_name' in data.attrs:
138 data.attrs['long_name'] = data.attrs['long_name'] + ' Decoded'
139 else:
140 data.attrs['long_name'] = 'Decoded present weather values'
141 if 'valid_min' in data.attrs:
142 del data.attrs['valid_min']
143 if 'valid_max' in data.attrs:
144 del data.attrs['valid_max']
145
146 ds[decoded_name] = data
147
148 return ds
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/act/utils/inst_utils.py b/act/utils/inst_utils.py
--- a/act/utils/inst_utils.py
+++ b/act/utils/inst_utils.py
@@ -6,8 +6,8 @@
def decode_present_weather(ds, variable=None, decoded_name=None):
"""
- This function is to decode codes reported from automatic weather stations suchas the PWD22.
- This is based on WMO Table 4680.
+ This function is to decode codes reported from automatic weather stations such as the PWD22.
+ This is based on WMO Table 4680 as well as a supplement table for WMO table 4677.
Parameters
----------
@@ -37,8 +37,8 @@
if variable not in ds:
raise ValueError('Variable not in the dataset')
- # Define the weather hash
- weather = {
+ # Define the weather hash for WMO table 4680.
+ weather_4680 = {
0: 'No significant weather observed',
1: 'Clouds generally dissolving or becoming less developed during the past hour',
2: 'State of the sky on the whole unchanged during the past hour',
@@ -121,6 +121,18 @@
-9999: 'Missing',
}
+ # Define the weather hash for WMO table 4677.
+ weather_4677 = {
+ 88: 'Shower(s) of snow pellets or small hail, with or without rain or rain and snow mixed, moderate or heavy',
+ }
+
+ # Join weather tables
+ weather_combined = dict(weather_4680)
+ weather_combined.update(weather_4677)
+
+ # Sort keys to be in order
+ weather = dict(sorted(weather_combined.items()))
+
# If a decoded name is not passed, make one
if decoded_name is None:
decoded_name = variable + '_decoded'
| {"golden_diff": "diff --git a/act/utils/inst_utils.py b/act/utils/inst_utils.py\n--- a/act/utils/inst_utils.py\n+++ b/act/utils/inst_utils.py\n@@ -6,8 +6,8 @@\n \n def decode_present_weather(ds, variable=None, decoded_name=None):\n \"\"\"\n- This function is to decode codes reported from automatic weather stations suchas the PWD22.\n- This is based on WMO Table 4680.\n+ This function is to decode codes reported from automatic weather stations such as the PWD22.\n+ This is based on WMO Table 4680 as well as a supplement table for WMO table 4677.\n \n Parameters\n ----------\n@@ -37,8 +37,8 @@\n if variable not in ds:\n raise ValueError('Variable not in the dataset')\n \n- # Define the weather hash\n- weather = {\n+ # Define the weather hash for WMO table 4680.\n+ weather_4680 = {\n 0: 'No significant weather observed',\n 1: 'Clouds generally dissolving or becoming less developed during the past hour',\n 2: 'State of the sky on the whole unchanged during the past hour',\n@@ -121,6 +121,18 @@\n -9999: 'Missing',\n }\n \n+ # Define the weather hash for WMO table 4677.\n+ weather_4677 = {\n+ 88: 'Shower(s) of snow pellets or small hail, with or without rain or rain and snow mixed, moderate or heavy',\n+ }\n+\n+ # Join weather tables\n+ weather_combined = dict(weather_4680)\n+ weather_combined.update(weather_4677)\n+\n+ # Sort keys to be in order\n+ weather = dict(sorted(weather_combined.items()))\n+\n # If a decoded name is not passed, make one\n if decoded_name is None:\n decoded_name = variable + '_decoded'\n", "issue": "act.utils.decode_present_weather classification tables\n* ACT version: 1.4.2\r\n* Python version: 3.9.16\r\n* Operating System: Windows 10\r\n\r\n### Description\r\n\r\nI was working on decoding the FD70 present weather codes for the ATMOS FD70. It appears the FD70 uses both the WMO 4680 and 4677 in the numeric code. For it's metar code it uses WMO 4678. \r\n\r\nWhen the code encountered a value of 88, for snow pellets, which it adopted from the 4677 table, it errors and stops. Suggestion would possible be adding in some of the WMO4677 to fill number gaps if applicable. 
\r\n\r\n### What I Did\r\n\r\n```\r\ninput: \r\nact.utils.decode_present_weather(ds,variable='present_wx0')\r\noutput:\r\n\r\n File D:\\anaconda3\\lib\\site-packages\\act\\utils\\inst_utils.py:133 in <listcomp>\r\n wx_type = [weather[d] for d in data.values]\r\n\r\nKeyError: 88\r\n\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nFunctions containing utilities for instruments.\n\n\"\"\"\n\n\ndef decode_present_weather(ds, variable=None, decoded_name=None):\n \"\"\"\n This function is to decode codes reported from automatic weather stations suchas the PWD22.\n This is based on WMO Table 4680.\n\n Parameters\n ----------\n ds : xarray.Dataset\n ACT or Xarray dataset from which to convert codes\n variable : string\n Variable to decode\n decoded_name : string\n New variable name to store updated labels\n\n Returns\n -------\n ds : xarray.Dataset\n Returns dataset with new decoded data\n\n References\n ----------\n WMO Manual on Code Volume I.1 A-360.\n https://library.wmo.int/doc_num.php?explnum_id=10235\n\n \"\"\"\n\n # Check to ensure that a variable name is passed\n if variable is None:\n raise ValueError('You must specify a variable')\n\n if variable not in ds:\n raise ValueError('Variable not in the dataset')\n\n # Define the weather hash\n weather = {\n 0: 'No significant weather observed',\n 1: 'Clouds generally dissolving or becoming less developed during the past hour',\n 2: 'State of the sky on the whole unchanged during the past hour',\n 3: 'Clouds generally forming or developing during the past hour',\n 4: 'Haze or smoke, or dust in suspension in the air, visibility >= 1 km',\n 5: 'Haze or smoke, or dust in suspension in the air, visibility < 1 km',\n 10: 'Mist',\n 11: 'Diamond dust',\n 12: 'Distant lightning',\n 18: 'Squalls',\n 20: 'Fog',\n 21: 'Precipitation',\n 22: 'Drizzle (not freezing) or snow grains',\n 23: 'Rain (not freezing)',\n 24: 'Snow',\n 25: 'Freezing drizzle or freezing rain',\n 26: 'Thunderstorm (with or without precipitation)',\n 27: 'Blowing or drifting snow or sand',\n 28: 'Blowing or drifting snow or sand, visibility >= 1 km',\n 29: 'Blowing or drifting snow or sand, visibility < 1 km',\n 30: 'Fog',\n 31: 'Fog or ice fog in patches',\n 32: 'Fog or ice fog, has become thinner during the past hour',\n 33: 'Fog or ice fog, no appreciable change during the past hour',\n 34: 'Fog or ice fog, has begun or become thicker during the past hour',\n 35: 'Fog, depositing rime',\n 40: 'Precipitation',\n 41: 'Precipitation, slight or moderate',\n 42: 'Precipitation, heavy',\n 43: 'Liquid precipitation, slight or moderate',\n 44: 'Liquid precipitation, heavy',\n 45: 'Solid precipitation, slight or moderate',\n 46: 'Solid precipitation, heavy',\n 47: 'Freezing precipitation, slight or moderate',\n 48: 'Freezing precipitation, heavy',\n 50: 'Drizzle',\n 51: 'Drizzle, not freezing, slight',\n 52: 'Drizzle, not freezing, moderate',\n 53: 'Drizzle, not freezing, heavy',\n 54: 'Drizzle, freezing, slight',\n 55: 'Drizzle, freezing, moderate',\n 56: 'Drizzle, freezing, heavy',\n 57: 'Drizzle and rain, slight',\n 58: 'Drizzle and rain, moderate or heavy',\n 60: 'Rain',\n 61: 'Rain, not freezing, slight',\n 62: 'Rain, not freezing, moderate',\n 63: 'Rain, not freezing, heavy',\n 64: 'Rain, freezing, slight',\n 65: 'Rain, freezing, moderate',\n 66: 'Rain, freezing, heavy',\n 67: 'Rain (or drizzle) and snow, slight',\n 68: 'Rain (or drizzle) and snow, moderate or heavy',\n 70: 'Snow',\n 71: 'Snow, light',\n 72: 'Snow, moderate',\n 73: 'Snow, heavy',\n 74: 'Ice pellets, slight',\n 
75: 'Ice pellets, moderate',\n 76: 'Ice pellets, heavy',\n 77: 'Snow grains',\n 78: 'Ice crystals',\n 80: 'Shower(s) or Intermittent Precipitation',\n 81: 'Rain shower(s) or intermittent rain, slight',\n 82: 'Rain shower(s) or intermittent rain, moderate',\n 83: 'Rain shower(s) or intermittent rain, heavy',\n 84: 'Rain shower(s) or intermittent rain, violent',\n 85: 'Snow shower(s) or intermittent snow, slight',\n 86: 'Snow shower(s) or intermittent snow, moderate',\n 87: 'Snow shower(s) or intermittent snow, heavy',\n 89: 'Hail',\n 90: 'Thunderstorm',\n 91: 'Thunderstorm, slight or moderate, with no precipitation',\n 92: 'Thunderstorm, slight or moderate, with rain showers and/or snow showers',\n 93: 'Thunderstorm, slight or moderate, with hail',\n 94: 'Thunderstorm, heavy, with no precipitation',\n 95: 'Thunderstorm, heavy, with rain showers and/or snow showers',\n 96: 'Thunderstorm, heavy, with hail',\n 99: 'Tornado',\n -9999: 'Missing',\n }\n\n # If a decoded name is not passed, make one\n if decoded_name is None:\n decoded_name = variable + '_decoded'\n\n # Get data and fill nans with -9999\n data = ds[variable]\n data = data.fillna(-9999)\n\n # Get the weather type for each code\n wx_type = [weather[d] for d in data.values]\n\n # Massage the data array to set back in the dataset\n data.values = wx_type\n if 'long_name' in data.attrs:\n data.attrs['long_name'] = data.attrs['long_name'] + ' Decoded'\n else:\n data.attrs['long_name'] = 'Decoded present weather values'\n if 'valid_min' in data.attrs:\n del data.attrs['valid_min']\n if 'valid_max' in data.attrs:\n del data.attrs['valid_max']\n\n ds[decoded_name] = data\n\n return ds\n", "path": "act/utils/inst_utils.py"}], "after_files": [{"content": "\"\"\"\nFunctions containing utilities for instruments.\n\n\"\"\"\n\n\ndef decode_present_weather(ds, variable=None, decoded_name=None):\n \"\"\"\n This function is to decode codes reported from automatic weather stations such as the PWD22.\n This is based on WMO Table 4680 as well as a supplement table for WMO table 4677.\n\n Parameters\n ----------\n ds : xarray.Dataset\n ACT or Xarray dataset from which to convert codes\n variable : string\n Variable to decode\n decoded_name : string\n New variable name to store updated labels\n\n Returns\n -------\n ds : xarray.Dataset\n Returns dataset with new decoded data\n\n References\n ----------\n WMO Manual on Code Volume I.1 A-360.\n https://library.wmo.int/doc_num.php?explnum_id=10235\n\n \"\"\"\n\n # Check to ensure that a variable name is passed\n if variable is None:\n raise ValueError('You must specify a variable')\n\n if variable not in ds:\n raise ValueError('Variable not in the dataset')\n\n # Define the weather hash for WMO table 4680.\n weather_4680 = {\n 0: 'No significant weather observed',\n 1: 'Clouds generally dissolving or becoming less developed during the past hour',\n 2: 'State of the sky on the whole unchanged during the past hour',\n 3: 'Clouds generally forming or developing during the past hour',\n 4: 'Haze or smoke, or dust in suspension in the air, visibility >= 1 km',\n 5: 'Haze or smoke, or dust in suspension in the air, visibility < 1 km',\n 10: 'Mist',\n 11: 'Diamond dust',\n 12: 'Distant lightning',\n 18: 'Squalls',\n 20: 'Fog',\n 21: 'Precipitation',\n 22: 'Drizzle (not freezing) or snow grains',\n 23: 'Rain (not freezing)',\n 24: 'Snow',\n 25: 'Freezing drizzle or freezing rain',\n 26: 'Thunderstorm (with or without precipitation)',\n 27: 'Blowing or drifting snow or sand',\n 28: 'Blowing or drifting snow or sand, 
visibility >= 1 km',\n 29: 'Blowing or drifting snow or sand, visibility < 1 km',\n 30: 'Fog',\n 31: 'Fog or ice fog in patches',\n 32: 'Fog or ice fog, has become thinner during the past hour',\n 33: 'Fog or ice fog, no appreciable change during the past hour',\n 34: 'Fog or ice fog, has begun or become thicker during the past hour',\n 35: 'Fog, depositing rime',\n 40: 'Precipitation',\n 41: 'Precipitation, slight or moderate',\n 42: 'Precipitation, heavy',\n 43: 'Liquid precipitation, slight or moderate',\n 44: 'Liquid precipitation, heavy',\n 45: 'Solid precipitation, slight or moderate',\n 46: 'Solid precipitation, heavy',\n 47: 'Freezing precipitation, slight or moderate',\n 48: 'Freezing precipitation, heavy',\n 50: 'Drizzle',\n 51: 'Drizzle, not freezing, slight',\n 52: 'Drizzle, not freezing, moderate',\n 53: 'Drizzle, not freezing, heavy',\n 54: 'Drizzle, freezing, slight',\n 55: 'Drizzle, freezing, moderate',\n 56: 'Drizzle, freezing, heavy',\n 57: 'Drizzle and rain, slight',\n 58: 'Drizzle and rain, moderate or heavy',\n 60: 'Rain',\n 61: 'Rain, not freezing, slight',\n 62: 'Rain, not freezing, moderate',\n 63: 'Rain, not freezing, heavy',\n 64: 'Rain, freezing, slight',\n 65: 'Rain, freezing, moderate',\n 66: 'Rain, freezing, heavy',\n 67: 'Rain (or drizzle) and snow, slight',\n 68: 'Rain (or drizzle) and snow, moderate or heavy',\n 70: 'Snow',\n 71: 'Snow, light',\n 72: 'Snow, moderate',\n 73: 'Snow, heavy',\n 74: 'Ice pellets, slight',\n 75: 'Ice pellets, moderate',\n 76: 'Ice pellets, heavy',\n 77: 'Snow grains',\n 78: 'Ice crystals',\n 80: 'Shower(s) or Intermittent Precipitation',\n 81: 'Rain shower(s) or intermittent rain, slight',\n 82: 'Rain shower(s) or intermittent rain, moderate',\n 83: 'Rain shower(s) or intermittent rain, heavy',\n 84: 'Rain shower(s) or intermittent rain, violent',\n 85: 'Snow shower(s) or intermittent snow, slight',\n 86: 'Snow shower(s) or intermittent snow, moderate',\n 87: 'Snow shower(s) or intermittent snow, heavy',\n 89: 'Hail',\n 90: 'Thunderstorm',\n 91: 'Thunderstorm, slight or moderate, with no precipitation',\n 92: 'Thunderstorm, slight or moderate, with rain showers and/or snow showers',\n 93: 'Thunderstorm, slight or moderate, with hail',\n 94: 'Thunderstorm, heavy, with no precipitation',\n 95: 'Thunderstorm, heavy, with rain showers and/or snow showers',\n 96: 'Thunderstorm, heavy, with hail',\n 99: 'Tornado',\n -9999: 'Missing',\n }\n\n # Define the weather hash for WMO table 4677.\n weather_4677 = {\n 88: 'Shower(s) of snow pellets or small hail, with or without rain or rain and snow mixed, moderate or heavy',\n }\n\n # Join weather tables\n weather_combined = dict(weather_4680)\n weather_combined.update(weather_4677)\n\n # Sort keys to be in order\n weather = dict(sorted(weather_combined.items()))\n\n # If a decoded name is not passed, make one\n if decoded_name is None:\n decoded_name = variable + '_decoded'\n\n # Get data and fill nans with -9999\n data = ds[variable]\n data = data.fillna(-9999)\n\n # Get the weather type for each code\n wx_type = [weather[d] for d in data.values]\n\n # Massage the data array to set back in the dataset\n data.values = wx_type\n if 'long_name' in data.attrs:\n data.attrs['long_name'] = data.attrs['long_name'] + ' Decoded'\n else:\n data.attrs['long_name'] = 'Decoded present weather values'\n if 'valid_min' in data.attrs:\n del data.attrs['valid_min']\n if 'valid_max' in data.attrs:\n del data.attrs['valid_max']\n\n ds[decoded_name] = data\n\n return ds\n", "path": "act/utils/inst_utils.py"}]} | 
2,379 | 450 |
gh_patches_debug_5379 | rasdani/github-patches | git_diff | cltk__cltk-399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Compile Poeti d’Italia for CLTK
http://www.mqdq.it/mqdq/poetiditalia/indice_autori_alfa.jsp?scelta=AZ&path=metri_opere
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cltk/corpus/latin/corpora.py`
Content:
```
1 """Latin language corpora available for download or loading locally.
2 All remote corpora hosted by github on the cltk organization account, eg:
3 'http://github.com/cltk' + name
4 """
5
6 LATIN_CORPORA = [
7 {'encoding': 'utf-8',
8 'markup': 'tei_xml',
9 'location': 'remote',
10 'type': 'text',
11 'name': 'latin_text_perseus',
12 'origin': 'https://github.com/cltk/latin_text_perseus.git'},
13 {'encoding': 'utf-8',
14 'markup': 'xml',
15 'name': 'latin_treebank_perseus',
16 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',
17 'location': 'remote',
18 'type': 'treebank'},
19 {'encoding': 'utf-8',
20 'markup': 'plaintext',
21 'name': 'latin_treebank_perseus',
22 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',
23 'location': 'remote',
24 'type': 'text'},
25 {'encoding': 'utf-8',
26 'markup': 'plaintext',
27 'name': 'latin_text_latin_library',
28 'origin': 'https://github.com/cltk/latin_text_latin_library.git',
29 'location': 'remote',
30 'type': 'text'},
31 {'encoding': 'latin-1',
32 'markup': 'beta_code',
33 'name': '',
34 'location': 'local',
35 'name': 'phi5',
36 'origin': None,
37 'type': 'text'},
38 {'encoding': 'latin-1',
39 'markup': 'beta_code',
40 'origin': None,
41 'name': 'phi7',
42 'location': 'local',
43 'type': 'text'},
44 {'encoding': 'utf-8',
45 'markup': 'plaintext',
46 'name': 'latin_proper_names_cltk',
47 'origin': 'https://github.com/cltk/latin_proper_names_cltk.git',
48 'location': 'remote',
49 'type': 'lexicon'},
50 {'origin': 'https://github.com/cltk/latin_models_cltk.git',
51 'name': 'latin_models_cltk',
52 'location': 'remote',
53 'type': 'model'},
54 {'encoding': 'utf-8',
55 'markup': 'python',
56 'name': 'latin_pos_lemmata_cltk',
57 'origin': 'https://github.com/cltk/latin_pos_lemmata_cltk.git',
58 'location': 'remote',
59 'type': 'lemma'},
60 {'encoding': 'utf-8',
61 'markup': 'xml',
62 'name': 'latin_treebank_index_thomisticus',
63 'origin': 'https://github.com/cltk/latin_treebank_index_thomisticus.git',
64 'location': 'remote',
65 'type': 'treebank'},
66 {'encoding': 'xml',
67 'markup': 'plaintext',
68 'name': 'latin_lexica_perseus',
69 'origin': 'https://github.com/cltk/latin_lexica_perseus.git',
70 'location': 'remote',
71 'type': 'lexicon'},
72 {'encoding': 'utf-8',
73 'markup': 'plaintext',
74 'name': 'latin_training_set_sentence_cltk',
75 'origin': 'https://github.com/cltk/latin_training_set_sentence_cltk.git',
76 'location': 'remote',
77 'type': 'training_set'},
78 {'origin': 'https://github.com/cltk/latin_word2vec_cltk.git',
79 'name': 'latin_word2vec_cltk',
80 'location': 'remote',
81 'type': 'model'},
82 {'encoding': 'utf-8',
83 'markup': 'tei_xml',
84 'location': 'remote',
85 'type': 'text',
86 'name': 'latin_text_antique_digiliblt',
87 'origin': 'https://github.com/cltk/latin_text_antique_digiliblt.git'},
88 {'location': 'remote',
89 'type': 'text',
90 'name': 'latin_text_corpus_grammaticorum_latinorum',
91 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'}
92 ]
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cltk/corpus/latin/corpora.py b/cltk/corpus/latin/corpora.py
--- a/cltk/corpus/latin/corpora.py
+++ b/cltk/corpus/latin/corpora.py
@@ -88,5 +88,9 @@
{'location': 'remote',
'type': 'text',
'name': 'latin_text_corpus_grammaticorum_latinorum',
- 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'}
+ 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'},
+ {'location': 'remote',
+ 'type': 'text',
+ 'name': 'latin_text_poeti_ditalia',
+ 'origin': 'https://github.com/cltk/latin_text_poeti_ditalia.git'}
]
| {"golden_diff": "diff --git a/cltk/corpus/latin/corpora.py b/cltk/corpus/latin/corpora.py\n--- a/cltk/corpus/latin/corpora.py\n+++ b/cltk/corpus/latin/corpora.py\n@@ -88,5 +88,9 @@\n {'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_corpus_grammaticorum_latinorum',\n- 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'}\n+ 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'},\n+ {'location': 'remote',\n+ 'type': 'text',\n+ 'name': 'latin_text_poeti_ditalia',\n+ 'origin': 'https://github.com/cltk/latin_text_poeti_ditalia.git'}\n ]\n", "issue": "Compile Poeti d\u2019Italia for CLTK\nhttp://www.mqdq.it/mqdq/poetiditalia/indice_autori_alfa.jsp?scelta=AZ&path=metri_opere\n\n", "before_files": [{"content": "\"\"\"Latin language corpora available for download or loading locally.\nAll remote corpora hosted by github on the cltk organization account, eg:\n'http://github.com/cltk' + name\n\"\"\"\n\nLATIN_CORPORA = [\n {'encoding': 'utf-8',\n 'markup': 'tei_xml',\n 'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_perseus',\n 'origin': 'https://github.com/cltk/latin_text_perseus.git'},\n {'encoding': 'utf-8',\n 'markup': 'xml',\n 'name': 'latin_treebank_perseus',\n 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',\n 'location': 'remote',\n 'type': 'treebank'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_treebank_perseus',\n 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',\n 'location': 'remote',\n 'type': 'text'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_text_latin_library',\n 'origin': 'https://github.com/cltk/latin_text_latin_library.git',\n 'location': 'remote',\n 'type': 'text'},\n {'encoding': 'latin-1',\n 'markup': 'beta_code',\n 'name': '',\n 'location': 'local',\n 'name': 'phi5',\n 'origin': None,\n 'type': 'text'},\n {'encoding': 'latin-1',\n 'markup': 'beta_code',\n 'origin': None,\n 'name': 'phi7',\n 'location': 'local',\n 'type': 'text'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_proper_names_cltk',\n 'origin': 'https://github.com/cltk/latin_proper_names_cltk.git',\n 'location': 'remote',\n 'type': 'lexicon'},\n {'origin': 'https://github.com/cltk/latin_models_cltk.git',\n 'name': 'latin_models_cltk',\n 'location': 'remote',\n 'type': 'model'},\n {'encoding': 'utf-8',\n 'markup': 'python',\n 'name': 'latin_pos_lemmata_cltk',\n 'origin': 'https://github.com/cltk/latin_pos_lemmata_cltk.git',\n 'location': 'remote',\n 'type': 'lemma'},\n {'encoding': 'utf-8',\n 'markup': 'xml',\n 'name': 'latin_treebank_index_thomisticus',\n 'origin': 'https://github.com/cltk/latin_treebank_index_thomisticus.git',\n 'location': 'remote',\n 'type': 'treebank'},\n {'encoding': 'xml',\n 'markup': 'plaintext',\n 'name': 'latin_lexica_perseus',\n 'origin': 'https://github.com/cltk/latin_lexica_perseus.git',\n 'location': 'remote',\n 'type': 'lexicon'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_training_set_sentence_cltk',\n 'origin': 'https://github.com/cltk/latin_training_set_sentence_cltk.git',\n 'location': 'remote',\n 'type': 'training_set'},\n {'origin': 'https://github.com/cltk/latin_word2vec_cltk.git',\n 'name': 'latin_word2vec_cltk',\n 'location': 'remote',\n 'type': 'model'},\n {'encoding': 'utf-8',\n 'markup': 'tei_xml',\n 'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_antique_digiliblt',\n 'origin': 'https://github.com/cltk/latin_text_antique_digiliblt.git'},\n {'location': 
'remote',\n 'type': 'text',\n 'name': 'latin_text_corpus_grammaticorum_latinorum',\n 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'}\n]\n", "path": "cltk/corpus/latin/corpora.py"}], "after_files": [{"content": "\"\"\"Latin language corpora available for download or loading locally.\nAll remote corpora hosted by github on the cltk organization account, eg:\n'http://github.com/cltk' + name\n\"\"\"\n\nLATIN_CORPORA = [\n {'encoding': 'utf-8',\n 'markup': 'tei_xml',\n 'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_perseus',\n 'origin': 'https://github.com/cltk/latin_text_perseus.git'},\n {'encoding': 'utf-8',\n 'markup': 'xml',\n 'name': 'latin_treebank_perseus',\n 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',\n 'location': 'remote',\n 'type': 'treebank'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_treebank_perseus',\n 'origin': 'https://github.com/cltk/latin_treebank_perseus.git',\n 'location': 'remote',\n 'type': 'text'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_text_latin_library',\n 'origin': 'https://github.com/cltk/latin_text_latin_library.git',\n 'location': 'remote',\n 'type': 'text'},\n {'encoding': 'latin-1',\n 'markup': 'beta_code',\n 'name': '',\n 'location': 'local',\n 'name': 'phi5',\n 'origin': None,\n 'type': 'text'},\n {'encoding': 'latin-1',\n 'markup': 'beta_code',\n 'origin': None,\n 'name': 'phi7',\n 'location': 'local',\n 'type': 'text'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_proper_names_cltk',\n 'origin': 'https://github.com/cltk/latin_proper_names_cltk.git',\n 'location': 'remote',\n 'type': 'lexicon'},\n {'origin': 'https://github.com/cltk/latin_models_cltk.git',\n 'name': 'latin_models_cltk',\n 'location': 'remote',\n 'type': 'model'},\n {'encoding': 'utf-8',\n 'markup': 'python',\n 'name': 'latin_pos_lemmata_cltk',\n 'origin': 'https://github.com/cltk/latin_pos_lemmata_cltk.git',\n 'location': 'remote',\n 'type': 'lemma'},\n {'encoding': 'utf-8',\n 'markup': 'xml',\n 'name': 'latin_treebank_index_thomisticus',\n 'origin': 'https://github.com/cltk/latin_treebank_index_thomisticus.git',\n 'location': 'remote',\n 'type': 'treebank'},\n {'encoding': 'xml',\n 'markup': 'plaintext',\n 'name': 'latin_lexica_perseus',\n 'origin': 'https://github.com/cltk/latin_lexica_perseus.git',\n 'location': 'remote',\n 'type': 'lexicon'},\n {'encoding': 'utf-8',\n 'markup': 'plaintext',\n 'name': 'latin_training_set_sentence_cltk',\n 'origin': 'https://github.com/cltk/latin_training_set_sentence_cltk.git',\n 'location': 'remote',\n 'type': 'training_set'},\n {'origin': 'https://github.com/cltk/latin_word2vec_cltk.git',\n 'name': 'latin_word2vec_cltk',\n 'location': 'remote',\n 'type': 'model'},\n {'encoding': 'utf-8',\n 'markup': 'tei_xml',\n 'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_antique_digiliblt',\n 'origin': 'https://github.com/cltk/latin_text_antique_digiliblt.git'},\n {'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_corpus_grammaticorum_latinorum',\n 'origin': 'https://github.com/cltk/latin_text_corpus_grammaticorum_latinorum.git'},\n {'location': 'remote',\n 'type': 'text',\n 'name': 'latin_text_poeti_ditalia',\n 'origin': 'https://github.com/cltk/latin_text_poeti_ditalia.git'}\n]\n", "path": "cltk/corpus/latin/corpora.py"}]} | 1,409 | 201 |
gh_patches_debug_9824 | rasdani/github-patches | git_diff | coala__coala-1585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DefaultArgParser: Fix spelling mistake
Change analaysis on line 30 to analysis
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/parsing/DefaultArgParser.py`
Content:
```
1 import argparse
2 import sys
3
4 from coalib.misc import Constants
5
6
7 def default_arg_parser(formatter_class=None):
8 """
9 This function creates an ArgParser to parse command line arguments.
10
11 :param formatter_class: Formatting the arg_parser output into a specific
12 form. For example: In the manpage format.
13 """
14 formatter_class = formatter_class or argparse.RawDescriptionHelpFormatter
15
16 entry_point = sys.argv[0]
17 for entry in ['coala-ci', 'coala-dbus', 'coala-format', 'coala-json',
18 'coala-delete-orig']:
19 if entry_point.endswith(entry):
20 parser_type = entry
21 break
22 else:
23 parser_type = 'coala'
24
25 arg_parser = argparse.ArgumentParser(
26 formatter_class=formatter_class,
27 prog="coala",
28 description="coala is a simple COde AnaLysis Application. Its goal "
29 "is to make static code analysis easy and convenient "
30 "for all languages. coala uses bears, which are analaysis "
31 "routines that can be combined arbitrarily.")
32
33 arg_parser.add_argument('TARGETS',
34 nargs='*',
35 help="Sections to be executed exclusively.")
36 arg_parser.add_argument('-c',
37 '--config',
38 nargs=1,
39 metavar='FILE',
40 help='Configuration file to be used, defaults to '
41 + repr(Constants.default_coafile))
42 FIND_CONFIG_HELP = ('Attempt to find config file by checking parent '
43 'directories of the current working directory. It is '
44 'assumed that the config file is named '
45 + repr(Constants.default_coafile) + '. This arg is '
46 'ignored if --config is also given')
47 arg_parser.add_argument('-F',
48 '--find-config',
49 nargs='?',
50 const=True,
51 metavar='BOOL',
52 help=FIND_CONFIG_HELP)
53 arg_parser.add_argument('-f',
54 '--files',
55 nargs='+',
56 metavar='FILE',
57 help='Files that should be checked')
58 arg_parser.add_argument('-i',
59 '--ignore',
60 nargs='+',
61 metavar='FILE',
62 help='Files that should be ignored')
63 arg_parser.add_argument('--limit-files',
64 nargs='+',
65 metavar='FILE',
66 help='Files that will be analyzed will be '
67 'restricted to those in the globs listed '
68 'in this argument as well the files setting')
69 arg_parser.add_argument('-b',
70 '--bears',
71 nargs='+',
72 metavar='NAME',
73 help='Names of bears to use')
74 BEAR_DIRS_HELP = 'Additional directories where bears may lie'
75 arg_parser.add_argument('-d',
76 '--bear-dirs',
77 nargs='+',
78 metavar='DIR',
79 help=BEAR_DIRS_HELP)
80 LOG_LEVEL_HELP = ("Enum('ERROR','INFO','WARNING','DEBUG') to set level of "
81 "log output")
82 arg_parser.add_argument('-L',
83 '--log-level',
84 nargs=1,
85 choices=['ERROR', 'INFO', 'WARNING', 'DEBUG'],
86 metavar='ENUM',
87 help=LOG_LEVEL_HELP)
88 MIN_SEVERITY_HELP = ("Enum('INFO', 'NORMAL', 'MAJOR') to set the minimal "
89 "result severity.")
90 arg_parser.add_argument('-m',
91 '--min-severity',
92 nargs=1,
93 choices=('INFO', 'NORMAL', 'MAJOR'),
94 metavar='ENUM',
95 help=MIN_SEVERITY_HELP)
96 SETTINGS_HELP = 'Arbitrary settings in the form of section.key=value'
97 arg_parser.add_argument('-S',
98 '--settings',
99 nargs='+',
100 metavar='SETTING',
101 help=SETTINGS_HELP)
102 if parser_type == 'coala-json':
103 arg_parser.add_argument('--text-logs',
104 nargs='?',
105 const=True,
106 metavar='BOOL',
107 help='Don\'t display logs as json, display '
108 'them as we normally do in the console.')
109 if parser_type == 'coala':
110 SHOW_BEARS_HELP = ("Display bears and its metadata with the sections "
111 "that they belong to")
112 arg_parser.add_argument('-B',
113 '--show-bears',
114 nargs='?',
115 const=True,
116 metavar='BOOL',
117 help=SHOW_BEARS_HELP)
118 arg_parser.add_argument('-A',
119 '--show-all-bears',
120 nargs='?',
121 const=True,
122 metavar='BOOL',
123 help="Display all bears.")
124 SAVE_HELP = ('Filename of file to be saved to, if provided with no '
125 'arguments, settings will be stored back to the file given '
126 'by -c')
127 arg_parser.add_argument('-s',
128 '--save',
129 nargs='?',
130 const=True,
131 metavar='FILE',
132 help=SAVE_HELP)
133 TAG_HELP = ('Tag results with a specific name. You can access the results'
134 ' later with that tag.')
135 arg_parser.add_argument('-t',
136 '--tag',
137 nargs='?',
138 const=True,
139 metavar='STRING',
140 help=TAG_HELP)
141
142 DELETE_TAG_HELP = 'Delete pre-tagged results with tag name.'
143 arg_parser.add_argument('-g',
144 '--dtag',
145 nargs='?',
146 const=True,
147 metavar='STRING',
148 help=DELETE_TAG_HELP)
149
150 arg_parser.add_argument("-j",
151 "--jobs",
152 type=int,
153 help="Number of jobs to use in parallel.")
154
155 arg_parser.add_argument('-v',
156 '--version',
157 action='version',
158 version=Constants.VERSION)
159
160 arg_parser.add_argument('-n',
161 '--no-orig',
162 nargs='?',
163 const=True,
164 help="Deactivate creation of .orig files,"
165 ".orig backup files before applying patches")
166
167 return arg_parser
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/parsing/DefaultArgParser.py b/coalib/parsing/DefaultArgParser.py
--- a/coalib/parsing/DefaultArgParser.py
+++ b/coalib/parsing/DefaultArgParser.py
@@ -27,7 +27,7 @@
prog="coala",
description="coala is a simple COde AnaLysis Application. Its goal "
"is to make static code analysis easy and convenient "
- "for all languages. coala uses bears, which are analaysis "
+ "for all languages. coala uses bears, which are analysis "
"routines that can be combined arbitrarily.")
arg_parser.add_argument('TARGETS',
| {"golden_diff": "diff --git a/coalib/parsing/DefaultArgParser.py b/coalib/parsing/DefaultArgParser.py\n--- a/coalib/parsing/DefaultArgParser.py\n+++ b/coalib/parsing/DefaultArgParser.py\n@@ -27,7 +27,7 @@\n prog=\"coala\",\n description=\"coala is a simple COde AnaLysis Application. Its goal \"\n \"is to make static code analysis easy and convenient \"\n- \"for all languages. coala uses bears, which are analaysis \"\n+ \"for all languages. coala uses bears, which are analysis \"\n \"routines that can be combined arbitrarily.\")\n \n arg_parser.add_argument('TARGETS',\n", "issue": "DefaultArgParser: Fix spelling mistake\nChange analaysis on line 30 to analysis\n\n", "before_files": [{"content": "import argparse\nimport sys\n\nfrom coalib.misc import Constants\n\n\ndef default_arg_parser(formatter_class=None):\n \"\"\"\n This function creates an ArgParser to parse command line arguments.\n\n :param formatter_class: Formatting the arg_parser output into a specific\n form. For example: In the manpage format.\n \"\"\"\n formatter_class = formatter_class or argparse.RawDescriptionHelpFormatter\n\n entry_point = sys.argv[0]\n for entry in ['coala-ci', 'coala-dbus', 'coala-format', 'coala-json',\n 'coala-delete-orig']:\n if entry_point.endswith(entry):\n parser_type = entry\n break\n else:\n parser_type = 'coala'\n\n arg_parser = argparse.ArgumentParser(\n formatter_class=formatter_class,\n prog=\"coala\",\n description=\"coala is a simple COde AnaLysis Application. Its goal \"\n \"is to make static code analysis easy and convenient \"\n \"for all languages. coala uses bears, which are analaysis \"\n \"routines that can be combined arbitrarily.\")\n\n arg_parser.add_argument('TARGETS',\n nargs='*',\n help=\"Sections to be executed exclusively.\")\n arg_parser.add_argument('-c',\n '--config',\n nargs=1,\n metavar='FILE',\n help='Configuration file to be used, defaults to '\n + repr(Constants.default_coafile))\n FIND_CONFIG_HELP = ('Attempt to find config file by checking parent '\n 'directories of the current working directory. It is '\n 'assumed that the config file is named '\n + repr(Constants.default_coafile) + '. 
This arg is '\n 'ignored if --config is also given')\n arg_parser.add_argument('-F',\n '--find-config',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=FIND_CONFIG_HELP)\n arg_parser.add_argument('-f',\n '--files',\n nargs='+',\n metavar='FILE',\n help='Files that should be checked')\n arg_parser.add_argument('-i',\n '--ignore',\n nargs='+',\n metavar='FILE',\n help='Files that should be ignored')\n arg_parser.add_argument('--limit-files',\n nargs='+',\n metavar='FILE',\n help='Files that will be analyzed will be '\n 'restricted to those in the globs listed '\n 'in this argument as well the files setting')\n arg_parser.add_argument('-b',\n '--bears',\n nargs='+',\n metavar='NAME',\n help='Names of bears to use')\n BEAR_DIRS_HELP = 'Additional directories where bears may lie'\n arg_parser.add_argument('-d',\n '--bear-dirs',\n nargs='+',\n metavar='DIR',\n help=BEAR_DIRS_HELP)\n LOG_LEVEL_HELP = (\"Enum('ERROR','INFO','WARNING','DEBUG') to set level of \"\n \"log output\")\n arg_parser.add_argument('-L',\n '--log-level',\n nargs=1,\n choices=['ERROR', 'INFO', 'WARNING', 'DEBUG'],\n metavar='ENUM',\n help=LOG_LEVEL_HELP)\n MIN_SEVERITY_HELP = (\"Enum('INFO', 'NORMAL', 'MAJOR') to set the minimal \"\n \"result severity.\")\n arg_parser.add_argument('-m',\n '--min-severity',\n nargs=1,\n choices=('INFO', 'NORMAL', 'MAJOR'),\n metavar='ENUM',\n help=MIN_SEVERITY_HELP)\n SETTINGS_HELP = 'Arbitrary settings in the form of section.key=value'\n arg_parser.add_argument('-S',\n '--settings',\n nargs='+',\n metavar='SETTING',\n help=SETTINGS_HELP)\n if parser_type == 'coala-json':\n arg_parser.add_argument('--text-logs',\n nargs='?',\n const=True,\n metavar='BOOL',\n help='Don\\'t display logs as json, display '\n 'them as we normally do in the console.')\n if parser_type == 'coala':\n SHOW_BEARS_HELP = (\"Display bears and its metadata with the sections \"\n \"that they belong to\")\n arg_parser.add_argument('-B',\n '--show-bears',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=SHOW_BEARS_HELP)\n arg_parser.add_argument('-A',\n '--show-all-bears',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=\"Display all bears.\")\n SAVE_HELP = ('Filename of file to be saved to, if provided with no '\n 'arguments, settings will be stored back to the file given '\n 'by -c')\n arg_parser.add_argument('-s',\n '--save',\n nargs='?',\n const=True,\n metavar='FILE',\n help=SAVE_HELP)\n TAG_HELP = ('Tag results with a specific name. 
You can access the results'\n ' later with that tag.')\n arg_parser.add_argument('-t',\n '--tag',\n nargs='?',\n const=True,\n metavar='STRING',\n help=TAG_HELP)\n\n DELETE_TAG_HELP = 'Delete pre-tagged results with tag name.'\n arg_parser.add_argument('-g',\n '--dtag',\n nargs='?',\n const=True,\n metavar='STRING',\n help=DELETE_TAG_HELP)\n\n arg_parser.add_argument(\"-j\",\n \"--jobs\",\n type=int,\n help=\"Number of jobs to use in parallel.\")\n\n arg_parser.add_argument('-v',\n '--version',\n action='version',\n version=Constants.VERSION)\n\n arg_parser.add_argument('-n',\n '--no-orig',\n nargs='?',\n const=True,\n help=\"Deactivate creation of .orig files,\"\n \".orig backup files before applying patches\")\n\n return arg_parser\n", "path": "coalib/parsing/DefaultArgParser.py"}], "after_files": [{"content": "import argparse\nimport sys\n\nfrom coalib.misc import Constants\n\n\ndef default_arg_parser(formatter_class=None):\n \"\"\"\n This function creates an ArgParser to parse command line arguments.\n\n :param formatter_class: Formatting the arg_parser output into a specific\n form. For example: In the manpage format.\n \"\"\"\n formatter_class = formatter_class or argparse.RawDescriptionHelpFormatter\n\n entry_point = sys.argv[0]\n for entry in ['coala-ci', 'coala-dbus', 'coala-format', 'coala-json',\n 'coala-delete-orig']:\n if entry_point.endswith(entry):\n parser_type = entry\n break\n else:\n parser_type = 'coala'\n\n arg_parser = argparse.ArgumentParser(\n formatter_class=formatter_class,\n prog=\"coala\",\n description=\"coala is a simple COde AnaLysis Application. Its goal \"\n \"is to make static code analysis easy and convenient \"\n \"for all languages. coala uses bears, which are analysis \"\n \"routines that can be combined arbitrarily.\")\n\n arg_parser.add_argument('TARGETS',\n nargs='*',\n help=\"Sections to be executed exclusively.\")\n arg_parser.add_argument('-c',\n '--config',\n nargs=1,\n metavar='FILE',\n help='Configuration file to be used, defaults to '\n + repr(Constants.default_coafile))\n FIND_CONFIG_HELP = ('Attempt to find config file by checking parent '\n 'directories of the current working directory. It is '\n 'assumed that the config file is named '\n + repr(Constants.default_coafile) + '. 
This arg is '\n 'ignored if --config is also given')\n arg_parser.add_argument('-F',\n '--find-config',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=FIND_CONFIG_HELP)\n arg_parser.add_argument('-f',\n '--files',\n nargs='+',\n metavar='FILE',\n help='Files that should be checked')\n arg_parser.add_argument('-i',\n '--ignore',\n nargs='+',\n metavar='FILE',\n help='Files that should be ignored')\n arg_parser.add_argument('--limit-files',\n nargs='+',\n metavar='FILE',\n help='Files that will be analyzed will be '\n 'restricted to those in the globs listed '\n 'in this argument as well the files setting')\n arg_parser.add_argument('-b',\n '--bears',\n nargs='+',\n metavar='NAME',\n help='Names of bears to use')\n BEAR_DIRS_HELP = 'Additional directories where bears may lie'\n arg_parser.add_argument('-d',\n '--bear-dirs',\n nargs='+',\n metavar='DIR',\n help=BEAR_DIRS_HELP)\n LOG_LEVEL_HELP = (\"Enum('ERROR','INFO','WARNING','DEBUG') to set level of \"\n \"log output\")\n arg_parser.add_argument('-L',\n '--log-level',\n nargs=1,\n choices=['ERROR', 'INFO', 'WARNING', 'DEBUG'],\n metavar='ENUM',\n help=LOG_LEVEL_HELP)\n MIN_SEVERITY_HELP = (\"Enum('INFO', 'NORMAL', 'MAJOR') to set the minimal \"\n \"result severity.\")\n arg_parser.add_argument('-m',\n '--min-severity',\n nargs=1,\n choices=('INFO', 'NORMAL', 'MAJOR'),\n metavar='ENUM',\n help=MIN_SEVERITY_HELP)\n SETTINGS_HELP = 'Arbitrary settings in the form of section.key=value'\n arg_parser.add_argument('-S',\n '--settings',\n nargs='+',\n metavar='SETTING',\n help=SETTINGS_HELP)\n if parser_type == 'coala-json':\n arg_parser.add_argument('--text-logs',\n nargs='?',\n const=True,\n metavar='BOOL',\n help='Don\\'t display logs as json, display '\n 'them as we normally do in the console.')\n if parser_type == 'coala':\n SHOW_BEARS_HELP = (\"Display bears and its metadata with the sections \"\n \"that they belong to\")\n arg_parser.add_argument('-B',\n '--show-bears',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=SHOW_BEARS_HELP)\n arg_parser.add_argument('-A',\n '--show-all-bears',\n nargs='?',\n const=True,\n metavar='BOOL',\n help=\"Display all bears.\")\n SAVE_HELP = ('Filename of file to be saved to, if provided with no '\n 'arguments, settings will be stored back to the file given '\n 'by -c')\n arg_parser.add_argument('-s',\n '--save',\n nargs='?',\n const=True,\n metavar='FILE',\n help=SAVE_HELP)\n TAG_HELP = ('Tag results with a specific name. You can access the results'\n ' later with that tag.')\n arg_parser.add_argument('-t',\n '--tag',\n nargs='?',\n const=True,\n metavar='STRING',\n help=TAG_HELP)\n\n DELETE_TAG_HELP = 'Delete pre-tagged results with tag name.'\n arg_parser.add_argument('-g',\n '--dtag',\n nargs='?',\n const=True,\n metavar='STRING',\n help=DELETE_TAG_HELP)\n\n arg_parser.add_argument(\"-j\",\n \"--jobs\",\n type=int,\n help=\"Number of jobs to use in parallel.\")\n\n arg_parser.add_argument('-v',\n '--version',\n action='version',\n version=Constants.VERSION)\n\n arg_parser.add_argument('-n',\n '--no-orig',\n nargs='?',\n const=True,\n help=\"Deactivate creation of .orig files,\"\n \".orig backup files before applying patches\")\n\n return arg_parser\n", "path": "coalib/parsing/DefaultArgParser.py"}]} | 1,880 | 154 |
gh_patches_debug_18951 | rasdani/github-patches | git_diff | ansible__ansible-23067 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
win_domain_controller: documentation error in examples '_pass' should be '_password'
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Documentation Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
win_domain_controller
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0.0 (detached HEAD e4494f85b6) last updated 2017/03/17 12:34:17 (GMT +100)
config file = /home/jon/ansible/ansible.cfg
configured module search path = [u'/home/jon/ansible/library']
python version = 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2]
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible controller Windows 10 WSL (ubuntu)
targets: Windows Server 2012 R2
##### SUMMARY
<!--- Explain the problem briefly -->
The examples need updating to match the correct parameter names for the _password parameters.
For example, the examples use domain_admin_pass, but the module actually expects 'domain_admin_password'. The same thing is true for 'safe_mode_password' and 'local_admin_password'.
Would create a PR for this but am away from dev machine at the moment.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
N/A
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
N/A
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
N/A
<!--- Paste verbatim command output between quotes below -->
```
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/windows/win_domain_controller.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2017, Red Hat, Inc.
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 ANSIBLE_METADATA = {'metadata_version': '1.0',
22 'status': ['preview'],
23 'supported_by': 'core'}
24
25
26 DOCUMENTATION='''
27 module: win_domain_controller
28 short_description: Manage domain controller/member server state for a Windows host
29 version_added: 2.3
30 description:
31 - Ensure that a Windows Server 2012+ host is configured as a domain controller or demoted to member server. This module may require
32 subsequent use of the M(win_reboot) action if changes are made.
33 options:
34 dns_domain_name:
35 description:
36 - when C(state) is C(domain_controller), the DNS name of the domain for which the targeted Windows host should be a DC
37 domain_admin_user:
38 description:
39 - username of a domain admin for the target domain (necessary to promote or demote a domain controller)
40 required: true
41 domain_admin_password:
42 description:
43 - password for the specified C(domain_admin_user)
44 required: true
45 safe_mode_password:
46 description:
47 - safe mode password for the domain controller (required when C(state) is C(domain_controller))
48 local_admin_password:
49 description:
50 - password to be assigned to the local C(Administrator) user (required when C(state) is C(member_server))
51 state:
52 description:
53 - whether the target host should be a domain controller or a member server
54 choices:
55 - domain_controller
56 - member_server
57 author:
58 - Matt Davis (@nitzmahone)
59 '''
60
61 RETURN='''
62 reboot_required:
63 description: True if changes were made that require a reboot.
64 returned: always
65 type: boolean
66 sample: true
67
68 '''
69
70 EXAMPLES=r'''
71 # ensure a server is a domain controller
72 - hosts: winclient
73 gather_facts: no
74 tasks:
75 - win_domain_controller:
76 dns_domain_name: ansible.vagrant
77 domain_admin_user: [email protected]
78 domain_admin_pass: password123!
79 safe_mode_pass: password123!
80 state: domain_controller
81 log_path: c:\ansible_win_domain_controller.txt
82
83 # ensure a server is not a domain controller
84 # note that without an action wrapper, in the case where a DC is demoted,
85 # the task will fail with a 401 Unauthorized, because the domain credential
86 # becomes invalid to fetch the final output over WinRM. This requires win_async
87 # with credential switching (or other clever credential-switching
88 # mechanism to get the output and trigger the required reboot)
89 - hosts: winclient
90 gather_facts: no
91 tasks:
92 - win_domain_controller:
93 domain_admin_user: [email protected]
94 domain_admin_pass: password123!
95 local_admin_pass: password123!
96 state: member_server
97 log_path: c:\ansible_win_domain_controller.txt
98
99 '''
100
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/modules/windows/win_domain_controller.py b/lib/ansible/modules/windows/win_domain_controller.py
--- a/lib/ansible/modules/windows/win_domain_controller.py
+++ b/lib/ansible/modules/windows/win_domain_controller.py
@@ -75,8 +75,8 @@
- win_domain_controller:
dns_domain_name: ansible.vagrant
domain_admin_user: [email protected]
- domain_admin_pass: password123!
- safe_mode_pass: password123!
+ domain_admin_password: password123!
+ safe_mode_password: password123!
state: domain_controller
log_path: c:\ansible_win_domain_controller.txt
@@ -91,8 +91,8 @@
tasks:
- win_domain_controller:
domain_admin_user: [email protected]
- domain_admin_pass: password123!
- local_admin_pass: password123!
+ domain_admin_password: password123!
+ local_admin_password: password123!
state: member_server
log_path: c:\ansible_win_domain_controller.txt
| {"golden_diff": "diff --git a/lib/ansible/modules/windows/win_domain_controller.py b/lib/ansible/modules/windows/win_domain_controller.py\n--- a/lib/ansible/modules/windows/win_domain_controller.py\n+++ b/lib/ansible/modules/windows/win_domain_controller.py\n@@ -75,8 +75,8 @@\n - win_domain_controller:\n dns_domain_name: ansible.vagrant\n domain_admin_user: [email protected]\n- domain_admin_pass: password123!\n- safe_mode_pass: password123!\n+ domain_admin_password: password123!\n+ safe_mode_password: password123!\n state: domain_controller\n log_path: c:\\ansible_win_domain_controller.txt\n \n@@ -91,8 +91,8 @@\n tasks:\n - win_domain_controller:\n domain_admin_user: [email protected]\n- domain_admin_pass: password123!\n- local_admin_pass: password123!\n+ domain_admin_password: password123!\n+ local_admin_password: password123!\n state: member_server\n log_path: c:\\ansible_win_domain_controller.txt\n", "issue": "win_domain_controller: documentation error in examples '_pass' should be '_password'\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n\r\n - Documentation Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Name of the module/plugin/task/feature -->\r\nwin_domain_controller\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nansible 2.3.0.0 (detached HEAD e4494f85b6) last updated 2017/03/17 12:34:17 (GMT +100)\r\n config file = /home/jon/ansible/ansible.cfg\r\n configured module search path = [u'/home/jon/ansible/library']\r\n python version = 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nMention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n-->\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \u201cN/A\u201d for anything that is not platform-specific.\r\n-->\r\nAnsible controller Windows 10 WSL (ubuntu)\r\ntargets: Windows Server 2012 R2\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\n\r\nThe examples need updating to match the correct parameter names for the _password parameters.\r\nFor example in the examples domain_admin_pass, but the module actually expects 'domain_admin_password'. The same thing is true for 'safe_mode_password' and 'local_admin_password'.\r\n\r\nWould create a PR for this but am away from dev machine at the moment.\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\nN/A\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nN/A\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? 
If possible run with extra verbosity (-vvvv) -->\r\nN/A\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2017, Red Hat, Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['preview'],\n 'supported_by': 'core'}\n\n\nDOCUMENTATION='''\nmodule: win_domain_controller\nshort_description: Manage domain controller/member server state for a Windows host\nversion_added: 2.3\ndescription:\n - Ensure that a Windows Server 2012+ host is configured as a domain controller or demoted to member server. This module may require\n subsequent use of the M(win_reboot) action if changes are made.\noptions:\n dns_domain_name:\n description:\n - when C(state) is C(domain_controller), the DNS name of the domain for which the targeted Windows host should be a DC\n domain_admin_user:\n description:\n - username of a domain admin for the target domain (necessary to promote or demote a domain controller)\n required: true\n domain_admin_password:\n description:\n - password for the specified C(domain_admin_user)\n required: true\n safe_mode_password:\n description:\n - safe mode password for the domain controller (required when C(state) is C(domain_controller))\n local_admin_password:\n description:\n - password to be assigned to the local C(Administrator) user (required when C(state) is C(member_server))\n state:\n description:\n - whether the target host should be a domain controller or a member server\n choices:\n - domain_controller\n - member_server\nauthor:\n - Matt Davis (@nitzmahone)\n'''\n\nRETURN='''\nreboot_required:\n description: True if changes were made that require a reboot.\n returned: always\n type: boolean\n sample: true\n\n'''\n\nEXAMPLES=r'''\n# ensure a server is a domain controller\n- hosts: winclient\n gather_facts: no\n tasks:\n - win_domain_controller:\n dns_domain_name: ansible.vagrant\n domain_admin_user: [email protected]\n domain_admin_pass: password123!\n safe_mode_pass: password123!\n state: domain_controller\n log_path: c:\\ansible_win_domain_controller.txt\n\n# ensure a server is not a domain controller\n# note that without an action wrapper, in the case where a DC is demoted,\n# the task will fail with a 401 Unauthorized, because the domain credential\n# becomes invalid to fetch the final output over WinRM. 
This requires win_async\n# with credential switching (or other clever credential-switching\n# mechanism to get the output and trigger the required reboot)\n- hosts: winclient\n gather_facts: no\n tasks:\n - win_domain_controller:\n domain_admin_user: [email protected]\n domain_admin_pass: password123!\n local_admin_pass: password123!\n state: member_server\n log_path: c:\\ansible_win_domain_controller.txt\n\n'''\n\n", "path": "lib/ansible/modules/windows/win_domain_controller.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2017, Red Hat, Inc.\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['preview'],\n 'supported_by': 'core'}\n\n\nDOCUMENTATION='''\nmodule: win_domain_controller\nshort_description: Manage domain controller/member server state for a Windows host\nversion_added: 2.3\ndescription:\n - Ensure that a Windows Server 2012+ host is configured as a domain controller or demoted to member server. This module may require\n subsequent use of the M(win_reboot) action if changes are made.\noptions:\n dns_domain_name:\n description:\n - when C(state) is C(domain_controller), the DNS name of the domain for which the targeted Windows host should be a DC\n domain_admin_user:\n description:\n - username of a domain admin for the target domain (necessary to promote or demote a domain controller)\n required: true\n domain_admin_password:\n description:\n - password for the specified C(domain_admin_user)\n required: true\n safe_mode_password:\n description:\n - safe mode password for the domain controller (required when C(state) is C(domain_controller))\n local_admin_password:\n description:\n - password to be assigned to the local C(Administrator) user (required when C(state) is C(member_server))\n state:\n description:\n - whether the target host should be a domain controller or a member server\n choices:\n - domain_controller\n - member_server\nauthor:\n - Matt Davis (@nitzmahone)\n'''\n\nRETURN='''\nreboot_required:\n description: True if changes were made that require a reboot.\n returned: always\n type: boolean\n sample: true\n\n'''\n\nEXAMPLES=r'''\n# ensure a server is a domain controller\n- hosts: winclient\n gather_facts: no\n tasks:\n - win_domain_controller:\n dns_domain_name: ansible.vagrant\n domain_admin_user: [email protected]\n domain_admin_password: password123!\n safe_mode_password: password123!\n state: domain_controller\n log_path: c:\\ansible_win_domain_controller.txt\n\n# ensure a server is not a domain controller\n# note that without an action wrapper, in the case where a DC is demoted,\n# the task will fail with a 401 Unauthorized, because the domain credential\n# becomes invalid to fetch the final output over WinRM. 
This requires win_async\n# with credential switching (or other clever credential-switching\n# mechanism to get the output and trigger the required reboot)\n- hosts: winclient\n gather_facts: no\n tasks:\n - win_domain_controller:\n domain_admin_user: [email protected]\n domain_admin_password: password123!\n local_admin_password: password123!\n state: member_server\n log_path: c:\\ansible_win_domain_controller.txt\n\n'''\n\n", "path": "lib/ansible/modules/windows/win_domain_controller.py"}]} | 1,812 | 246 |
gh_patches_debug_7897 | rasdani/github-patches | git_diff | nautobot__nautobot-2640 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wrong link from circuit types to circuit
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): eee34d7bc54d (v1.4.5)
* Python version: 3.10
* Database platform, version: postgresql 14
* Middleware(s):
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create circuit type
2. Create circuit of above circuit type
3. Go to Circuit Types
4. Click one circuit type
5. Click number of circuits belonging to circuit type
<!-- What did you expect to happen? -->
### Expected Behavior
redirects to `circuits/circuits/?type=XXX`
<!-- What happened instead? -->
### Observed Behavior
redirects to `circuits/circuits/?circuit_type=XXX`
which shows an "Invalid filters were specified" error
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/circuits/tables.py`
Content:
```
1 import django_tables2 as tables
2 from django_tables2.utils import Accessor
3
4 from nautobot.extras.tables import StatusTableMixin
5 from nautobot.tenancy.tables import TenantColumn
6 from nautobot.utilities.tables import (
7 BaseTable,
8 ButtonsColumn,
9 TagColumn,
10 ToggleColumn,
11 )
12 from .models import Circuit, CircuitType, Provider, ProviderNetwork
13
14 CIRCUIT_TERMINATION_PARENT = """
15 {% if value.provider_network %}
16 <a href="{{ value.provider_network.get_absolute_url }}">{{ value.provider_network }}</a>
17 {% elif value.site %}
18 <a href="{{ value.site.get_absolute_url }}">{{ value.site }}</a>
19 {% endif %}
20 """
21
22 #
23 # Provider Network
24 #
25
26
27 class ProviderNetworkTable(BaseTable):
28 pk = ToggleColumn()
29 name = tables.Column(linkify=True)
30 provider = tables.Column(linkify=True)
31 tags = TagColumn(url_name="circuits:providernetwork_list")
32
33 class Meta(BaseTable.Meta):
34 model = ProviderNetwork
35 fields = ("pk", "name", "provider", "description", "tags")
36 default_columns = ("pk", "name", "provider", "description")
37
38
39 #
40 # Providers
41 #
42
43
44 class ProviderTable(BaseTable):
45 pk = ToggleColumn()
46 name = tables.LinkColumn()
47 circuit_count = tables.Column(accessor=Accessor("count_circuits"), verbose_name="Circuits")
48 tags = TagColumn(url_name="circuits:provider_list")
49
50 class Meta(BaseTable.Meta):
51 model = Provider
52 fields = (
53 "pk",
54 "name",
55 "asn",
56 "account",
57 "portal_url",
58 "noc_contact",
59 "admin_contact",
60 "circuit_count",
61 "tags",
62 )
63 default_columns = ("pk", "name", "asn", "account", "circuit_count")
64
65
66 #
67 # Circuit types
68 #
69
70
71 class CircuitTypeTable(BaseTable):
72 pk = ToggleColumn()
73 name = tables.LinkColumn()
74 circuit_count = tables.Column(verbose_name="Circuits")
75 actions = ButtonsColumn(CircuitType, pk_field="slug")
76
77 class Meta(BaseTable.Meta):
78 model = CircuitType
79 fields = ("pk", "name", "circuit_count", "description", "slug", "actions")
80 default_columns = (
81 "pk",
82 "name",
83 "circuit_count",
84 "description",
85 "slug",
86 "actions",
87 )
88
89
90 #
91 # Circuits
92 #
93
94
95 class CircuitTable(StatusTableMixin, BaseTable):
96 pk = ToggleColumn()
97 cid = tables.LinkColumn(verbose_name="ID")
98 provider = tables.LinkColumn(viewname="circuits:provider", args=[Accessor("provider__slug")])
99 tenant = TenantColumn()
100 tags = TagColumn(url_name="circuits:circuit_list")
101
102 termination_a = tables.TemplateColumn(
103 template_code=CIRCUIT_TERMINATION_PARENT,
104 accessor=Accessor("termination_a"),
105 orderable=False,
106 verbose_name="Side A",
107 )
108 termination_z = tables.TemplateColumn(
109 template_code=CIRCUIT_TERMINATION_PARENT,
110 accessor=Accessor("termination_z"),
111 orderable=False,
112 verbose_name="Side Z",
113 )
114
115 class Meta(BaseTable.Meta):
116 model = Circuit
117 fields = (
118 "pk",
119 "cid",
120 "provider",
121 "type",
122 "status",
123 "tenant",
124 "termination_a",
125 "termination_z",
126 "install_date",
127 "commit_rate",
128 "description",
129 "tags",
130 )
131 default_columns = (
132 "pk",
133 "cid",
134 "provider",
135 "type",
136 "status",
137 "tenant",
138 "termination_a",
139 "termination_z",
140 "description",
141 )
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/circuits/tables.py b/nautobot/circuits/tables.py
--- a/nautobot/circuits/tables.py
+++ b/nautobot/circuits/tables.py
@@ -12,10 +12,13 @@
from .models import Circuit, CircuitType, Provider, ProviderNetwork
CIRCUIT_TERMINATION_PARENT = """
+{% load helpers %}
{% if value.provider_network %}
-<a href="{{ value.provider_network.get_absolute_url }}">{{ value.provider_network }}</a>
+{{ value.provider_network|hyperlinked_object }}
{% elif value.site %}
-<a href="{{ value.site.get_absolute_url }}">{{ value.site }}</a>
+{{ value.site|hyperlinked_object }}
+{% else %}
+{{ None|placeholder }}
{% endif %}
"""
| {"golden_diff": "diff --git a/nautobot/circuits/tables.py b/nautobot/circuits/tables.py\n--- a/nautobot/circuits/tables.py\n+++ b/nautobot/circuits/tables.py\n@@ -12,10 +12,13 @@\n from .models import Circuit, CircuitType, Provider, ProviderNetwork\n \n CIRCUIT_TERMINATION_PARENT = \"\"\"\n+{% load helpers %}\n {% if value.provider_network %}\n-<a href=\"{{ value.provider_network.get_absolute_url }}\">{{ value.provider_network }}</a>\n+{{ value.provider_network|hyperlinked_object }}\n {% elif value.site %}\n-<a href=\"{{ value.site.get_absolute_url }}\">{{ value.site }}</a>\n+{{ value.site|hyperlinked_object }}\n+{% else %}\n+{{ None|placeholder }}\n {% endif %}\n \"\"\"\n", "issue": "wrong link from circuit types to circuit\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): eee34d7bc54d (v1.4.5)\r\n* Python version: 3.10\r\n* Database platform, version: postgresql 14\r\n* Middleware(s):\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create circuit type\r\n2. Create circuit of above circuit type\r\n3. Go to Circuit Types\r\n4. Click one circuit type\r\n5. Click number of circuits belonging to circuit type\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nredirect to `circuits/circuits/?type=XXX`\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\nredirects to `circuits/circuits/?circuit_type=XXX`\r\nwhich shows an \"Invalid filters were specified\" error\n", "before_files": [{"content": "import django_tables2 as tables\nfrom django_tables2.utils import Accessor\n\nfrom nautobot.extras.tables import StatusTableMixin\nfrom nautobot.tenancy.tables import TenantColumn\nfrom nautobot.utilities.tables import (\n BaseTable,\n ButtonsColumn,\n TagColumn,\n ToggleColumn,\n)\nfrom .models import Circuit, CircuitType, Provider, ProviderNetwork\n\nCIRCUIT_TERMINATION_PARENT = \"\"\"\n{% if value.provider_network %}\n<a href=\"{{ value.provider_network.get_absolute_url }}\">{{ value.provider_network }}</a>\n{% elif value.site %}\n<a href=\"{{ value.site.get_absolute_url }}\">{{ value.site }}</a>\n{% endif %}\n\"\"\"\n\n#\n# Provider Network\n#\n\n\nclass ProviderNetworkTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(linkify=True)\n provider = tables.Column(linkify=True)\n tags = TagColumn(url_name=\"circuits:providernetwork_list\")\n\n class Meta(BaseTable.Meta):\n model = ProviderNetwork\n fields = (\"pk\", \"name\", \"provider\", \"description\", \"tags\")\n default_columns = (\"pk\", \"name\", \"provider\", \"description\")\n\n\n#\n# Providers\n#\n\n\nclass ProviderTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n circuit_count = tables.Column(accessor=Accessor(\"count_circuits\"), verbose_name=\"Circuits\")\n tags = TagColumn(url_name=\"circuits:provider_list\")\n\n class Meta(BaseTable.Meta):\n model = Provider\n fields = (\n \"pk\",\n \"name\",\n \"asn\",\n \"account\",\n \"portal_url\",\n \"noc_contact\",\n \"admin_contact\",\n \"circuit_count\",\n \"tags\",\n )\n default_columns = (\"pk\", \"name\", \"asn\", \"account\", \"circuit_count\")\n\n\n#\n# Circuit types\n#\n\n\nclass CircuitTypeTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n circuit_count = tables.Column(verbose_name=\"Circuits\")\n actions = ButtonsColumn(CircuitType, pk_field=\"slug\")\n\n class Meta(BaseTable.Meta):\n model = CircuitType\n fields = (\"pk\", \"name\", \"circuit_count\", \"description\", \"slug\", \"actions\")\n default_columns = (\n \"pk\",\n \"name\",\n \"circuit_count\",\n \"description\",\n \"slug\",\n \"actions\",\n )\n\n\n#\n# Circuits\n#\n\n\nclass CircuitTable(StatusTableMixin, BaseTable):\n pk = ToggleColumn()\n cid = tables.LinkColumn(verbose_name=\"ID\")\n provider = tables.LinkColumn(viewname=\"circuits:provider\", args=[Accessor(\"provider__slug\")])\n tenant = TenantColumn()\n tags = TagColumn(url_name=\"circuits:circuit_list\")\n\n termination_a = tables.TemplateColumn(\n template_code=CIRCUIT_TERMINATION_PARENT,\n accessor=Accessor(\"termination_a\"),\n orderable=False,\n verbose_name=\"Side A\",\n )\n termination_z = tables.TemplateColumn(\n template_code=CIRCUIT_TERMINATION_PARENT,\n accessor=Accessor(\"termination_z\"),\n orderable=False,\n verbose_name=\"Side Z\",\n )\n\n class Meta(BaseTable.Meta):\n model = Circuit\n fields = (\n \"pk\",\n \"cid\",\n \"provider\",\n \"type\",\n \"status\",\n \"tenant\",\n \"termination_a\",\n \"termination_z\",\n \"install_date\",\n \"commit_rate\",\n \"description\",\n \"tags\",\n )\n default_columns = (\n \"pk\",\n \"cid\",\n \"provider\",\n \"type\",\n \"status\",\n \"tenant\",\n \"termination_a\",\n \"termination_z\",\n \"description\",\n )\n", "path": "nautobot/circuits/tables.py"}], "after_files": [{"content": "import django_tables2 as tables\nfrom django_tables2.utils import Accessor\n\nfrom 
nautobot.extras.tables import StatusTableMixin\nfrom nautobot.tenancy.tables import TenantColumn\nfrom nautobot.utilities.tables import (\n BaseTable,\n ButtonsColumn,\n TagColumn,\n ToggleColumn,\n)\nfrom .models import Circuit, CircuitType, Provider, ProviderNetwork\n\nCIRCUIT_TERMINATION_PARENT = \"\"\"\n{% load helpers %}\n{% if value.provider_network %}\n{{ value.provider_network|hyperlinked_object }}\n{% elif value.site %}\n{{ value.site|hyperlinked_object }}\n{% else %}\n{{ None|placeholder }}\n{% endif %}\n\"\"\"\n\n#\n# Provider Network\n#\n\n\nclass ProviderNetworkTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(linkify=True)\n provider = tables.Column(linkify=True)\n tags = TagColumn(url_name=\"circuits:providernetwork_list\")\n\n class Meta(BaseTable.Meta):\n model = ProviderNetwork\n fields = (\"pk\", \"name\", \"provider\", \"description\", \"tags\")\n default_columns = (\"pk\", \"name\", \"provider\", \"description\")\n\n\n#\n# Providers\n#\n\n\nclass ProviderTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n circuit_count = tables.Column(accessor=Accessor(\"count_circuits\"), verbose_name=\"Circuits\")\n tags = TagColumn(url_name=\"circuits:provider_list\")\n\n class Meta(BaseTable.Meta):\n model = Provider\n fields = (\n \"pk\",\n \"name\",\n \"asn\",\n \"account\",\n \"portal_url\",\n \"noc_contact\",\n \"admin_contact\",\n \"circuit_count\",\n \"tags\",\n )\n default_columns = (\"pk\", \"name\", \"asn\", \"account\", \"circuit_count\")\n\n\n#\n# Circuit types\n#\n\n\nclass CircuitTypeTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n circuit_count = tables.Column(verbose_name=\"Circuits\")\n actions = ButtonsColumn(CircuitType, pk_field=\"slug\")\n\n class Meta(BaseTable.Meta):\n model = CircuitType\n fields = (\"pk\", \"name\", \"circuit_count\", \"description\", \"slug\", \"actions\")\n default_columns = (\n \"pk\",\n \"name\",\n \"circuit_count\",\n \"description\",\n \"slug\",\n \"actions\",\n )\n\n\n#\n# Circuits\n#\n\n\nclass CircuitTable(StatusTableMixin, BaseTable):\n pk = ToggleColumn()\n cid = tables.LinkColumn(verbose_name=\"ID\")\n provider = tables.LinkColumn(viewname=\"circuits:provider\", args=[Accessor(\"provider__slug\")])\n tenant = TenantColumn()\n tags = TagColumn(url_name=\"circuits:circuit_list\")\n\n termination_a = tables.TemplateColumn(\n template_code=CIRCUIT_TERMINATION_PARENT,\n accessor=Accessor(\"termination_a\"),\n orderable=False,\n verbose_name=\"Side A\",\n )\n termination_z = tables.TemplateColumn(\n template_code=CIRCUIT_TERMINATION_PARENT,\n accessor=Accessor(\"termination_z\"),\n orderable=False,\n verbose_name=\"Side Z\",\n )\n\n class Meta(BaseTable.Meta):\n model = Circuit\n fields = (\n \"pk\",\n \"cid\",\n \"provider\",\n \"type\",\n \"status\",\n \"tenant\",\n \"termination_a\",\n \"termination_z\",\n \"install_date\",\n \"commit_rate\",\n \"description\",\n \"tags\",\n )\n default_columns = (\n \"pk\",\n \"cid\",\n \"provider\",\n \"type\",\n \"status\",\n \"tenant\",\n \"termination_a\",\n \"termination_z\",\n \"description\",\n )\n", "path": "nautobot/circuits/tables.py"}]} | 1,767 | 168 |
gh_patches_debug_4998 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand book data task is re-setting default editions
it should not.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/connectors/abstract_connector.py`
Content:
```
1 ''' functionality outline for a book data connector '''
2 from abc import ABC, abstractmethod
3 from dataclasses import asdict, dataclass
4 import logging
5 from urllib3.exceptions import RequestError
6
7 from django.db import transaction
8 import requests
9 from requests.exceptions import SSLError
10
11 from bookwyrm import activitypub, models, settings
12 from .connector_manager import load_more_data, ConnectorException
13
14
15 logger = logging.getLogger(__name__)
16 class AbstractMinimalConnector(ABC):
17 ''' just the bare bones, for other bookwyrm instances '''
18 def __init__(self, identifier):
19 # load connector settings
20 info = models.Connector.objects.get(identifier=identifier)
21 self.connector = info
22
23 # the things in the connector model to copy over
24 self_fields = [
25 'base_url',
26 'books_url',
27 'covers_url',
28 'search_url',
29 'max_query_count',
30 'name',
31 'identifier',
32 'local'
33 ]
34 for field in self_fields:
35 setattr(self, field, getattr(info, field))
36
37 def search(self, query, min_confidence=None):
38 ''' free text search '''
39 params = {}
40 if min_confidence:
41 params['min_confidence'] = min_confidence
42
43 resp = requests.get(
44 '%s%s' % (self.search_url, query),
45 params=params,
46 headers={
47 'Accept': 'application/json; charset=utf-8',
48 'User-Agent': settings.USER_AGENT,
49 },
50 )
51 if not resp.ok:
52 resp.raise_for_status()
53 try:
54 data = resp.json()
55 except ValueError as e:
56 logger.exception(e)
57 raise ConnectorException('Unable to parse json response', e)
58 results = []
59
60 for doc in self.parse_search_data(data)[:10]:
61 results.append(self.format_search_result(doc))
62 return results
63
64 @abstractmethod
65 def get_or_create_book(self, remote_id):
66 ''' pull up a book record by whatever means possible '''
67
68 @abstractmethod
69 def parse_search_data(self, data):
70 ''' turn the result json from a search into a list '''
71
72 @abstractmethod
73 def format_search_result(self, search_result):
74 ''' create a SearchResult obj from json '''
75
76
77 class AbstractConnector(AbstractMinimalConnector):
78 ''' generic book data connector '''
79 def __init__(self, identifier):
80 super().__init__(identifier)
81 # fields we want to look for in book data to copy over
82 # title we handle separately.
83 self.book_mappings = []
84
85
86 def is_available(self):
87 ''' check if you're allowed to use this connector '''
88 if self.max_query_count is not None:
89 if self.connector.query_count >= self.max_query_count:
90 return False
91 return True
92
93
94 def get_or_create_book(self, remote_id):
95 ''' translate arbitrary json into an Activitypub dataclass '''
96 # first, check if we have the origin_id saved
97 existing = models.Edition.find_existing_by_remote_id(remote_id) or \
98 models.Work.find_existing_by_remote_id(remote_id)
99 if existing:
100 if hasattr(existing, 'get_default_editon'):
101 return existing.get_default_editon()
102 return existing
103
104 # load the json
105 data = get_data(remote_id)
106 mapped_data = dict_from_mappings(data, self.book_mappings)
107 if self.is_work_data(data):
108 try:
109 edition_data = self.get_edition_from_work_data(data)
110 except (KeyError, ConnectorException):
111 # hack: re-use the work data as the edition data
112 # this is why remote ids aren't necessarily unique
113 edition_data = data
114 work_data = mapped_data
115 else:
116 try:
117 work_data = self.get_work_from_edition_data(data)
118 work_data = dict_from_mappings(work_data, self.book_mappings)
119 except (KeyError, ConnectorException):
120 work_data = mapped_data
121 edition_data = data
122
123 if not work_data or not edition_data:
124 raise ConnectorException('Unable to load book data: %s' % remote_id)
125
126 with transaction.atomic():
127 # create activitypub object
128 work_activity = activitypub.Work(**work_data)
129 # this will dedupe automatically
130 work = work_activity.to_model(models.Work)
131 for author in self.get_authors_from_data(data):
132 work.authors.add(author)
133
134 edition = self.create_edition_from_data(work, edition_data)
135 load_more_data.delay(self.connector.id, work.id)
136 return edition
137
138
139 def create_edition_from_data(self, work, edition_data):
140 ''' if we already have the work, we're ready '''
141 mapped_data = dict_from_mappings(edition_data, self.book_mappings)
142 mapped_data['work'] = work.remote_id
143 edition_activity = activitypub.Edition(**mapped_data)
144 edition = edition_activity.to_model(models.Edition)
145 edition.connector = self.connector
146 edition.save()
147
148 work.default_edition = edition
149 work.save()
150
151 for author in self.get_authors_from_data(edition_data):
152 edition.authors.add(author)
153 if not edition.authors.exists() and work.authors.exists():
154 edition.authors.set(work.authors.all())
155
156 return edition
157
158
159 def get_or_create_author(self, remote_id):
160 ''' load that author '''
161 existing = models.Author.find_existing_by_remote_id(remote_id)
162 if existing:
163 return existing
164
165 data = get_data(remote_id)
166
167 mapped_data = dict_from_mappings(data, self.author_mappings)
168 activity = activitypub.Author(**mapped_data)
169 # this will dedupe
170 return activity.to_model(models.Author)
171
172
173 @abstractmethod
174 def is_work_data(self, data):
175 ''' differentiate works and editions '''
176
177 @abstractmethod
178 def get_edition_from_work_data(self, data):
179 ''' every work needs at least one edition '''
180
181 @abstractmethod
182 def get_work_from_edition_data(self, data):
183 ''' every edition needs a work '''
184
185 @abstractmethod
186 def get_authors_from_data(self, data):
187 ''' load author data '''
188
189 @abstractmethod
190 def expand_book_data(self, book):
191 ''' get more info on a book '''
192
193
194 def dict_from_mappings(data, mappings):
195 ''' create a dict in Activitypub format, using mappings supplies by
196 the subclass '''
197 result = {}
198 for mapping in mappings:
199 result[mapping.local_field] = mapping.get_value(data)
200 return result
201
202
203 def get_data(url):
204 ''' wrapper for request.get '''
205 try:
206 resp = requests.get(
207 url,
208 headers={
209 'Accept': 'application/json; charset=utf-8',
210 'User-Agent': settings.USER_AGENT,
211 },
212 )
213 except (RequestError, SSLError) as e:
214 logger.exception(e)
215 raise ConnectorException()
216
217 if not resp.ok:
218 try:
219 resp.raise_for_status()
220 except requests.exceptions.HTTPError as e:
221 logger.exception(e)
222 raise ConnectorException()
223 try:
224 data = resp.json()
225 except ValueError as e:
226 logger.exception(e)
227 raise ConnectorException()
228
229 return data
230
231
232 def get_image(url):
233 ''' wrapper for requesting an image '''
234 try:
235 resp = requests.get(
236 url,
237 headers={
238 'User-Agent': settings.USER_AGENT,
239 },
240 )
241 except (RequestError, SSLError):
242 return None
243 if not resp.ok:
244 return None
245 return resp
246
247
248 @dataclass
249 class SearchResult:
250 ''' standardized search result object '''
251 title: str
252 key: str
253 author: str
254 year: str
255 connector: object
256 confidence: int = 1
257
258 def __repr__(self):
259 return "<SearchResult key={!r} title={!r} author={!r}>".format(
260 self.key, self.title, self.author)
261
262 def json(self):
263 ''' serialize a connector for json response '''
264 serialized = asdict(self)
265 del serialized['connector']
266 return serialized
267
268
269 class Mapping:
270 ''' associate a local database field with a field in an external dataset '''
271 def __init__(self, local_field, remote_field=None, formatter=None):
272 noop = lambda x: x
273
274 self.local_field = local_field
275 self.remote_field = remote_field or local_field
276 self.formatter = formatter or noop
277
278 def get_value(self, data):
279 ''' pull a field from incoming json and return the formatted version '''
280 value = data.get(self.remote_field)
281 if not value:
282 return None
283 try:
284 return self.formatter(value)
285 except:# pylint: disable=bare-except
286 return None
287
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/connectors/abstract_connector.py b/bookwyrm/connectors/abstract_connector.py
--- a/bookwyrm/connectors/abstract_connector.py
+++ b/bookwyrm/connectors/abstract_connector.py
@@ -145,8 +145,9 @@
edition.connector = self.connector
edition.save()
- work.default_edition = edition
- work.save()
+ if not work.default_edition:
+ work.default_edition = edition
+ work.save()
for author in self.get_authors_from_data(edition_data):
edition.authors.add(author)
| {"golden_diff": "diff --git a/bookwyrm/connectors/abstract_connector.py b/bookwyrm/connectors/abstract_connector.py\n--- a/bookwyrm/connectors/abstract_connector.py\n+++ b/bookwyrm/connectors/abstract_connector.py\n@@ -145,8 +145,9 @@\n edition.connector = self.connector\n edition.save()\n \n- work.default_edition = edition\n- work.save()\n+ if not work.default_edition:\n+ work.default_edition = edition\n+ work.save()\n \n for author in self.get_authors_from_data(edition_data):\n edition.authors.add(author)\n", "issue": "Expand book data task is re-setting default editions\nit should not.\n", "before_files": [{"content": "''' functionality outline for a book data connector '''\nfrom abc import ABC, abstractmethod\nfrom dataclasses import asdict, dataclass\nimport logging\nfrom urllib3.exceptions import RequestError\n\nfrom django.db import transaction\nimport requests\nfrom requests.exceptions import SSLError\n\nfrom bookwyrm import activitypub, models, settings\nfrom .connector_manager import load_more_data, ConnectorException\n\n\nlogger = logging.getLogger(__name__)\nclass AbstractMinimalConnector(ABC):\n ''' just the bare bones, for other bookwyrm instances '''\n def __init__(self, identifier):\n # load connector settings\n info = models.Connector.objects.get(identifier=identifier)\n self.connector = info\n\n # the things in the connector model to copy over\n self_fields = [\n 'base_url',\n 'books_url',\n 'covers_url',\n 'search_url',\n 'max_query_count',\n 'name',\n 'identifier',\n 'local'\n ]\n for field in self_fields:\n setattr(self, field, getattr(info, field))\n\n def search(self, query, min_confidence=None):\n ''' free text search '''\n params = {}\n if min_confidence:\n params['min_confidence'] = min_confidence\n\n resp = requests.get(\n '%s%s' % (self.search_url, query),\n params=params,\n headers={\n 'Accept': 'application/json; charset=utf-8',\n 'User-Agent': settings.USER_AGENT,\n },\n )\n if not resp.ok:\n resp.raise_for_status()\n try:\n data = resp.json()\n except ValueError as e:\n logger.exception(e)\n raise ConnectorException('Unable to parse json response', e)\n results = []\n\n for doc in self.parse_search_data(data)[:10]:\n results.append(self.format_search_result(doc))\n return results\n\n @abstractmethod\n def get_or_create_book(self, remote_id):\n ''' pull up a book record by whatever means possible '''\n\n @abstractmethod\n def parse_search_data(self, data):\n ''' turn the result json from a search into a list '''\n\n @abstractmethod\n def format_search_result(self, search_result):\n ''' create a SearchResult obj from json '''\n\n\nclass AbstractConnector(AbstractMinimalConnector):\n ''' generic book data connector '''\n def __init__(self, identifier):\n super().__init__(identifier)\n # fields we want to look for in book data to copy over\n # title we handle separately.\n self.book_mappings = []\n\n\n def is_available(self):\n ''' check if you're allowed to use this connector '''\n if self.max_query_count is not None:\n if self.connector.query_count >= self.max_query_count:\n return False\n return True\n\n\n def get_or_create_book(self, remote_id):\n ''' translate arbitrary json into an Activitypub dataclass '''\n # first, check if we have the origin_id saved\n existing = models.Edition.find_existing_by_remote_id(remote_id) or \\\n models.Work.find_existing_by_remote_id(remote_id)\n if existing:\n if hasattr(existing, 'get_default_editon'):\n return existing.get_default_editon()\n return existing\n\n # load the json\n data = get_data(remote_id)\n mapped_data 
= dict_from_mappings(data, self.book_mappings)\n if self.is_work_data(data):\n try:\n edition_data = self.get_edition_from_work_data(data)\n except (KeyError, ConnectorException):\n # hack: re-use the work data as the edition data\n # this is why remote ids aren't necessarily unique\n edition_data = data\n work_data = mapped_data\n else:\n try:\n work_data = self.get_work_from_edition_data(data)\n work_data = dict_from_mappings(work_data, self.book_mappings)\n except (KeyError, ConnectorException):\n work_data = mapped_data\n edition_data = data\n\n if not work_data or not edition_data:\n raise ConnectorException('Unable to load book data: %s' % remote_id)\n\n with transaction.atomic():\n # create activitypub object\n work_activity = activitypub.Work(**work_data)\n # this will dedupe automatically\n work = work_activity.to_model(models.Work)\n for author in self.get_authors_from_data(data):\n work.authors.add(author)\n\n edition = self.create_edition_from_data(work, edition_data)\n load_more_data.delay(self.connector.id, work.id)\n return edition\n\n\n def create_edition_from_data(self, work, edition_data):\n ''' if we already have the work, we're ready '''\n mapped_data = dict_from_mappings(edition_data, self.book_mappings)\n mapped_data['work'] = work.remote_id\n edition_activity = activitypub.Edition(**mapped_data)\n edition = edition_activity.to_model(models.Edition)\n edition.connector = self.connector\n edition.save()\n\n work.default_edition = edition\n work.save()\n\n for author in self.get_authors_from_data(edition_data):\n edition.authors.add(author)\n if not edition.authors.exists() and work.authors.exists():\n edition.authors.set(work.authors.all())\n\n return edition\n\n\n def get_or_create_author(self, remote_id):\n ''' load that author '''\n existing = models.Author.find_existing_by_remote_id(remote_id)\n if existing:\n return existing\n\n data = get_data(remote_id)\n\n mapped_data = dict_from_mappings(data, self.author_mappings)\n activity = activitypub.Author(**mapped_data)\n # this will dedupe\n return activity.to_model(models.Author)\n\n\n @abstractmethod\n def is_work_data(self, data):\n ''' differentiate works and editions '''\n\n @abstractmethod\n def get_edition_from_work_data(self, data):\n ''' every work needs at least one edition '''\n\n @abstractmethod\n def get_work_from_edition_data(self, data):\n ''' every edition needs a work '''\n\n @abstractmethod\n def get_authors_from_data(self, data):\n ''' load author data '''\n\n @abstractmethod\n def expand_book_data(self, book):\n ''' get more info on a book '''\n\n\ndef dict_from_mappings(data, mappings):\n ''' create a dict in Activitypub format, using mappings supplies by\n the subclass '''\n result = {}\n for mapping in mappings:\n result[mapping.local_field] = mapping.get_value(data)\n return result\n\n\ndef get_data(url):\n ''' wrapper for request.get '''\n try:\n resp = requests.get(\n url,\n headers={\n 'Accept': 'application/json; charset=utf-8',\n 'User-Agent': settings.USER_AGENT,\n },\n )\n except (RequestError, SSLError) as e:\n logger.exception(e)\n raise ConnectorException()\n\n if not resp.ok:\n try:\n resp.raise_for_status()\n except requests.exceptions.HTTPError as e:\n logger.exception(e)\n raise ConnectorException()\n try:\n data = resp.json()\n except ValueError as e:\n logger.exception(e)\n raise ConnectorException()\n\n return data\n\n\ndef get_image(url):\n ''' wrapper for requesting an image '''\n try:\n resp = requests.get(\n url,\n headers={\n 'User-Agent': settings.USER_AGENT,\n },\n )\n 
except (RequestError, SSLError):\n return None\n if not resp.ok:\n return None\n return resp\n\n\n@dataclass\nclass SearchResult:\n ''' standardized search result object '''\n title: str\n key: str\n author: str\n year: str\n connector: object\n confidence: int = 1\n\n def __repr__(self):\n return \"<SearchResult key={!r} title={!r} author={!r}>\".format(\n self.key, self.title, self.author)\n\n def json(self):\n ''' serialize a connector for json response '''\n serialized = asdict(self)\n del serialized['connector']\n return serialized\n\n\nclass Mapping:\n ''' associate a local database field with a field in an external dataset '''\n def __init__(self, local_field, remote_field=None, formatter=None):\n noop = lambda x: x\n\n self.local_field = local_field\n self.remote_field = remote_field or local_field\n self.formatter = formatter or noop\n\n def get_value(self, data):\n ''' pull a field from incoming json and return the formatted version '''\n value = data.get(self.remote_field)\n if not value:\n return None\n try:\n return self.formatter(value)\n except:# pylint: disable=bare-except\n return None\n", "path": "bookwyrm/connectors/abstract_connector.py"}], "after_files": [{"content": "''' functionality outline for a book data connector '''\nfrom abc import ABC, abstractmethod\nfrom dataclasses import asdict, dataclass\nimport logging\nfrom urllib3.exceptions import RequestError\n\nfrom django.db import transaction\nimport requests\nfrom requests.exceptions import SSLError\n\nfrom bookwyrm import activitypub, models, settings\nfrom .connector_manager import load_more_data, ConnectorException\n\n\nlogger = logging.getLogger(__name__)\nclass AbstractMinimalConnector(ABC):\n ''' just the bare bones, for other bookwyrm instances '''\n def __init__(self, identifier):\n # load connector settings\n info = models.Connector.objects.get(identifier=identifier)\n self.connector = info\n\n # the things in the connector model to copy over\n self_fields = [\n 'base_url',\n 'books_url',\n 'covers_url',\n 'search_url',\n 'max_query_count',\n 'name',\n 'identifier',\n 'local'\n ]\n for field in self_fields:\n setattr(self, field, getattr(info, field))\n\n def search(self, query, min_confidence=None):\n ''' free text search '''\n params = {}\n if min_confidence:\n params['min_confidence'] = min_confidence\n\n resp = requests.get(\n '%s%s' % (self.search_url, query),\n params=params,\n headers={\n 'Accept': 'application/json; charset=utf-8',\n 'User-Agent': settings.USER_AGENT,\n },\n )\n if not resp.ok:\n resp.raise_for_status()\n try:\n data = resp.json()\n except ValueError as e:\n logger.exception(e)\n raise ConnectorException('Unable to parse json response', e)\n results = []\n\n for doc in self.parse_search_data(data)[:10]:\n results.append(self.format_search_result(doc))\n return results\n\n @abstractmethod\n def get_or_create_book(self, remote_id):\n ''' pull up a book record by whatever means possible '''\n\n @abstractmethod\n def parse_search_data(self, data):\n ''' turn the result json from a search into a list '''\n\n @abstractmethod\n def format_search_result(self, search_result):\n ''' create a SearchResult obj from json '''\n\n\nclass AbstractConnector(AbstractMinimalConnector):\n ''' generic book data connector '''\n def __init__(self, identifier):\n super().__init__(identifier)\n # fields we want to look for in book data to copy over\n # title we handle separately.\n self.book_mappings = []\n\n\n def is_available(self):\n ''' check if you're allowed to use this connector '''\n if 
self.max_query_count is not None:\n if self.connector.query_count >= self.max_query_count:\n return False\n return True\n\n\n def get_or_create_book(self, remote_id):\n ''' translate arbitrary json into an Activitypub dataclass '''\n # first, check if we have the origin_id saved\n existing = models.Edition.find_existing_by_remote_id(remote_id) or \\\n models.Work.find_existing_by_remote_id(remote_id)\n if existing:\n if hasattr(existing, 'get_default_editon'):\n return existing.get_default_editon()\n return existing\n\n # load the json\n data = get_data(remote_id)\n mapped_data = dict_from_mappings(data, self.book_mappings)\n if self.is_work_data(data):\n try:\n edition_data = self.get_edition_from_work_data(data)\n except (KeyError, ConnectorException):\n # hack: re-use the work data as the edition data\n # this is why remote ids aren't necessarily unique\n edition_data = data\n work_data = mapped_data\n else:\n try:\n work_data = self.get_work_from_edition_data(data)\n work_data = dict_from_mappings(work_data, self.book_mappings)\n except (KeyError, ConnectorException):\n work_data = mapped_data\n edition_data = data\n\n if not work_data or not edition_data:\n raise ConnectorException('Unable to load book data: %s' % remote_id)\n\n with transaction.atomic():\n # create activitypub object\n work_activity = activitypub.Work(**work_data)\n # this will dedupe automatically\n work = work_activity.to_model(models.Work)\n for author in self.get_authors_from_data(data):\n work.authors.add(author)\n\n edition = self.create_edition_from_data(work, edition_data)\n load_more_data.delay(self.connector.id, work.id)\n return edition\n\n\n def create_edition_from_data(self, work, edition_data):\n ''' if we already have the work, we're ready '''\n mapped_data = dict_from_mappings(edition_data, self.book_mappings)\n mapped_data['work'] = work.remote_id\n edition_activity = activitypub.Edition(**mapped_data)\n edition = edition_activity.to_model(models.Edition)\n edition.connector = self.connector\n edition.save()\n\n if not work.default_edition:\n work.default_edition = edition\n work.save()\n\n for author in self.get_authors_from_data(edition_data):\n edition.authors.add(author)\n if not edition.authors.exists() and work.authors.exists():\n edition.authors.set(work.authors.all())\n\n return edition\n\n\n def get_or_create_author(self, remote_id):\n ''' load that author '''\n existing = models.Author.find_existing_by_remote_id(remote_id)\n if existing:\n return existing\n\n data = get_data(remote_id)\n\n mapped_data = dict_from_mappings(data, self.author_mappings)\n activity = activitypub.Author(**mapped_data)\n # this will dedupe\n return activity.to_model(models.Author)\n\n\n @abstractmethod\n def is_work_data(self, data):\n ''' differentiate works and editions '''\n\n @abstractmethod\n def get_edition_from_work_data(self, data):\n ''' every work needs at least one edition '''\n\n @abstractmethod\n def get_work_from_edition_data(self, data):\n ''' every edition needs a work '''\n\n @abstractmethod\n def get_authors_from_data(self, data):\n ''' load author data '''\n\n @abstractmethod\n def expand_book_data(self, book):\n ''' get more info on a book '''\n\n\ndef dict_from_mappings(data, mappings):\n ''' create a dict in Activitypub format, using mappings supplies by\n the subclass '''\n result = {}\n for mapping in mappings:\n result[mapping.local_field] = mapping.get_value(data)\n return result\n\n\ndef get_data(url):\n ''' wrapper for request.get '''\n try:\n resp = requests.get(\n url,\n headers={\n 
'Accept': 'application/json; charset=utf-8',\n 'User-Agent': settings.USER_AGENT,\n },\n )\n except (RequestError, SSLError) as e:\n logger.exception(e)\n raise ConnectorException()\n\n if not resp.ok:\n try:\n resp.raise_for_status()\n except requests.exceptions.HTTPError as e:\n logger.exception(e)\n raise ConnectorException()\n try:\n data = resp.json()\n except ValueError as e:\n logger.exception(e)\n raise ConnectorException()\n\n return data\n\n\ndef get_image(url):\n ''' wrapper for requesting an image '''\n try:\n resp = requests.get(\n url,\n headers={\n 'User-Agent': settings.USER_AGENT,\n },\n )\n except (RequestError, SSLError):\n return None\n if not resp.ok:\n return None\n return resp\n\n\n@dataclass\nclass SearchResult:\n ''' standardized search result object '''\n title: str\n key: str\n author: str\n year: str\n connector: object\n confidence: int = 1\n\n def __repr__(self):\n return \"<SearchResult key={!r} title={!r} author={!r}>\".format(\n self.key, self.title, self.author)\n\n def json(self):\n ''' serialize a connector for json response '''\n serialized = asdict(self)\n del serialized['connector']\n return serialized\n\n\nclass Mapping:\n ''' associate a local database field with a field in an external dataset '''\n def __init__(self, local_field, remote_field=None, formatter=None):\n noop = lambda x: x\n\n self.local_field = local_field\n self.remote_field = remote_field or local_field\n self.formatter = formatter or noop\n\n def get_value(self, data):\n ''' pull a field from incoming json and return the formatted version '''\n value = data.get(self.remote_field)\n if not value:\n return None\n try:\n return self.formatter(value)\n except:# pylint: disable=bare-except\n return None\n", "path": "bookwyrm/connectors/abstract_connector.py"}]} | 2,863 | 133 |
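
A minimal sketch of the guard applied in the golden diff above, using hypothetical stand-in classes rather than the real bookwyrm Django models: `default_edition` is only assigned when the work has none yet, so re-importing edition data (for example from `expand_book_data`) can no longer overwrite it.

```python
# Hypothetical stand-ins for the Django models, only to illustrate the guard.
class Edition:
    def __init__(self, title):
        self.title = title

class Work:
    def __init__(self):
        self.default_edition = None

def attach_edition(work, edition):
    # Mirror of the patched behaviour: only set the default edition
    # if the work does not have one yet, so later imports cannot
    # silently replace it.
    if not work.default_edition:
        work.default_edition = edition
    return edition

work = Work()
first = attach_edition(work, Edition("first import"))
second = attach_edition(work, Edition("expand_book_data re-import"))
assert work.default_edition is first  # the re-import no longer re-sets it
```
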
gh_patches_debug_608 | rasdani/github-patches | git_diff | pex-tool__pex-1482 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.51
On the docket:
+ [ ] UnicodeDecodeError when packaging after upgrading to v2.1.46 #1479
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.50"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.50"
+__version__ = "2.1.51"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.50\"\n+__version__ = \"2.1.51\"\n", "issue": "Release 2.1.51\nOn the docket:\r\n+ [ ] UnicodeDecodeError when packaging after upgrading to v2.1.46 #1479 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.50\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.51\"\n", "path": "pex/version.py"}]} | 346 | 96 |
gh_patches_debug_22877 | rasdani/github-patches | git_diff | kornia__kornia-2514 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix a Bug in `kornia.augmentation.RandomRain`
### Describe the bug
When applying [kornia.augmentation.RandomRain](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomRain) to an image, it seems that subsequent distortions are all affected by the rain effect. For example, if you apply `RandomRotation` before and after calling rain, the rotation before `RandomRain` is fine. However, the rotation after calling `RandomRain` is influenced by the rain. The same thing happens for every other distortion you may call after `RandomRain`. I tried several other [augmentations](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation) here and this is the case for all of them. 

### Reproduction steps
```bash
The following link is a minimal code example (on Google Colab).
```
[sample code](https://colab.research.google.com/drive/1m_Pjqa6-MgO2ybZeNnLYdDerrErYpKw_#scrollTo=dwr_iTFt28t5)

### Expected behavior
When applying the `kornia.augmentation.RandomRain` augmentation to an image, it should only affect the rain effect on that particular image and not introduce any unintended global effects on subsequent transformations. For example, if rotation is applied before and after calling `RandomRain`, the rotation after the rain effect should not be influenced by the rain.
### Environment
```shell
- Running on colab
- OS Linux
- Python version: 2.0.1+cu118
- GPU models and configuration:T4
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/augmentation/_2d/intensity/random_rain.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 import torch
6
7 from kornia.augmentation._2d.intensity.base import IntensityAugmentationBase2D
8 from kornia.augmentation.random_generator._2d import RainGenerator
9 from kornia.core import Tensor
10 from kornia.core.check import KORNIA_CHECK
11
12
13 class RandomRain(IntensityAugmentationBase2D):
14 r"""Add Random Rain to the image.
15
16 Args:
17 p: probability of applying the transformation.
18 number_of_drops: number of drops per image
19 drop_height: height of the drop in image(same for each drops in one image)
20 drop_width: width of the drop in image(same for each drops in one image)
21 Shape:
22 - Input: :math:`(C, H, W)` or :math:`(B, C, H, W)`
23 - Output: :math:`(B, C, H, W)`
24
25 Examples:
26 >>> rng = torch.manual_seed(0)
27 >>> input = torch.rand(1, 1, 5, 5)
28 >>> rain = RandomRain(p=1,drop_height=(1,2),drop_width=(1,2),number_of_drops=(1,1))
29 >>> rain(input)
30 tensor([[[[0.4963, 0.7843, 0.0885, 0.1320, 0.3074],
31 [0.6341, 0.4901, 0.8964, 0.4556, 0.6323],
32 [0.3489, 0.4017, 0.0223, 0.1689, 0.2939],
33 [0.5185, 0.6977, 0.8000, 0.1610, 0.2823],
34 [0.6816, 0.9152, 0.3971, 0.8742, 0.4194]]]])
35 """
36
37 def __init__(
38 self,
39 same_on_batch: bool = False,
40 p: float = 0.5,
41 keepdim: bool = False,
42 number_of_drops: tuple[int, int] = (1000, 2000),
43 drop_height: tuple[int, int] = (5, 20),
44 drop_width: tuple[int, int] = (-5, 5),
45 ) -> None:
46 super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)
47 self._param_generator = RainGenerator(number_of_drops, drop_height, drop_width)
48
49 def apply_transform(
50 self, image: Tensor, params: dict[str, Tensor], flags: dict[str, Any], transform: Tensor | None = None
51 ) -> Tensor:
52 # Check array and drops size
53 KORNIA_CHECK(image.shape[1] in {3, 1}, "Number of color channels should be 1 or 3.")
54 KORNIA_CHECK(
55 bool(
56 torch.all(params['drop_height_factor'] <= image.shape[2])
57 and torch.all(params['drop_height_factor'] > 0)
58 ),
59 "Height of drop should be greater than zero and less than image height.",
60 )
61
62 KORNIA_CHECK(
63 bool(torch.all(torch.abs(params['drop_width_factor']) <= image.shape[3])),
64 "Width of drop should be less than image width.",
65 )
66
67 for i in range(image.shape[0]):
68 number_of_drops: int = int(params['number_of_drops_factor'][i])
69 # We generate tensor with maximum number of drops, and then remove unnecessary drops.
70
71 coordinates_of_drops: Tensor = params['coordinates_factor'][i][:number_of_drops]
72 height_of_drop: int = int(params['drop_height_factor'][i])
73 width_of_drop: int = int(params['drop_width_factor'][i])
74
75 # Generate start coordinates for each drop
76 random_y_coords = coordinates_of_drops[:, 0] * (image.shape[2] - height_of_drop - 1)
77 if width_of_drop > 0:
78 random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] - width_of_drop - 1)
79 else:
80 random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] + width_of_drop - 1) - width_of_drop
81
82 coords = torch.cat([random_y_coords[None], random_x_coords[None]], dim=0).to(image.device, dtype=torch.long)
83
84 # Generate how our drop will look like into the image
85 size_of_line: int = max(height_of_drop, abs(width_of_drop))
86 x = torch.linspace(start=0, end=height_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)
87 y = torch.linspace(start=0, end=width_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)
88 # Draw lines
89 for k in range(x.shape[0]):
90 image[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255
91 return image
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/augmentation/_2d/intensity/random_rain.py b/kornia/augmentation/_2d/intensity/random_rain.py
--- a/kornia/augmentation/_2d/intensity/random_rain.py
+++ b/kornia/augmentation/_2d/intensity/random_rain.py
@@ -63,7 +63,7 @@
bool(torch.all(torch.abs(params['drop_width_factor']) <= image.shape[3])),
"Width of drop should be less than image width.",
)
-
+ modeified_img = image.clone()
for i in range(image.shape[0]):
number_of_drops: int = int(params['number_of_drops_factor'][i])
# We generate tensor with maximum number of drops, and then remove unnecessary drops.
@@ -87,5 +87,5 @@
y = torch.linspace(start=0, end=width_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)
# Draw lines
for k in range(x.shape[0]):
- image[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255
- return image
+ modeified_img[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255
+ return modeified_img
| {"golden_diff": "diff --git a/kornia/augmentation/_2d/intensity/random_rain.py b/kornia/augmentation/_2d/intensity/random_rain.py\n--- a/kornia/augmentation/_2d/intensity/random_rain.py\n+++ b/kornia/augmentation/_2d/intensity/random_rain.py\n@@ -63,7 +63,7 @@\n bool(torch.all(torch.abs(params['drop_width_factor']) <= image.shape[3])),\n \"Width of drop should be less than image width.\",\n )\n-\n+ modeified_img = image.clone()\n for i in range(image.shape[0]):\n number_of_drops: int = int(params['number_of_drops_factor'][i])\n # We generate tensor with maximum number of drops, and then remove unnecessary drops.\n@@ -87,5 +87,5 @@\n y = torch.linspace(start=0, end=width_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)\n # Draw lines\n for k in range(x.shape[0]):\n- image[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255\n- return image\n+ modeified_img[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255\n+ return modeified_img\n", "issue": "Fix a Bug in `kornia.augmentation.RandomRain`\n### Describe the bug\r\n\r\n\r\nWhen applying `[kornia.augmentation.RandomRain](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomRain)` over an image, it seems that subsequent distortions are all affected by the rain effect. For example, if you apply `RandomRotation` before and after calling rain, the rotation before `RandomRain` is fine. However, the rotation after calling `RandomRain` is influenced by the rain. same thing happens for all of the other distortion you may call after `RandomRotation` function. I tried several other [augmentation ](https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation)here and this is the case for all of them. \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\nThe following link is a minimal code as an example (on google colab).\r\n```\r\n[sample code](https://colab.research.google.com/drive/1m_Pjqa6-MgO2ybZeNnLYdDerrErYpKw_#scrollTo=dwr_iTFt28t5)\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nWhen applying the `kornia.augmentation.RandomRain` augmentation to an image, it should only affect the rain effect on that particular image and not introduce any unintended global effects on subsequent transformations. 
For example, if rotation is applied before and after calling `RandomRain`, the rotation after the rain effect should not be influenced by the rain.\r\n\r\n\r\n### Environment\r\n\r\n```shell\r\n- Running on colab\r\n- OS Linux\r\n- Python version: 2.0.1+cu118\r\n- GPU models and configuration:T4\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nimport torch\n\nfrom kornia.augmentation._2d.intensity.base import IntensityAugmentationBase2D\nfrom kornia.augmentation.random_generator._2d import RainGenerator\nfrom kornia.core import Tensor\nfrom kornia.core.check import KORNIA_CHECK\n\n\nclass RandomRain(IntensityAugmentationBase2D):\n r\"\"\"Add Random Rain to the image.\n\n Args:\n p: probability of applying the transformation.\n number_of_drops: number of drops per image\n drop_height: height of the drop in image(same for each drops in one image)\n drop_width: width of the drop in image(same for each drops in one image)\n Shape:\n - Input: :math:`(C, H, W)` or :math:`(B, C, H, W)`\n - Output: :math:`(B, C, H, W)`\n\n Examples:\n >>> rng = torch.manual_seed(0)\n >>> input = torch.rand(1, 1, 5, 5)\n >>> rain = RandomRain(p=1,drop_height=(1,2),drop_width=(1,2),number_of_drops=(1,1))\n >>> rain(input)\n tensor([[[[0.4963, 0.7843, 0.0885, 0.1320, 0.3074],\n [0.6341, 0.4901, 0.8964, 0.4556, 0.6323],\n [0.3489, 0.4017, 0.0223, 0.1689, 0.2939],\n [0.5185, 0.6977, 0.8000, 0.1610, 0.2823],\n [0.6816, 0.9152, 0.3971, 0.8742, 0.4194]]]])\n \"\"\"\n\n def __init__(\n self,\n same_on_batch: bool = False,\n p: float = 0.5,\n keepdim: bool = False,\n number_of_drops: tuple[int, int] = (1000, 2000),\n drop_height: tuple[int, int] = (5, 20),\n drop_width: tuple[int, int] = (-5, 5),\n ) -> None:\n super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)\n self._param_generator = RainGenerator(number_of_drops, drop_height, drop_width)\n\n def apply_transform(\n self, image: Tensor, params: dict[str, Tensor], flags: dict[str, Any], transform: Tensor | None = None\n ) -> Tensor:\n # Check array and drops size\n KORNIA_CHECK(image.shape[1] in {3, 1}, \"Number of color channels should be 1 or 3.\")\n KORNIA_CHECK(\n bool(\n torch.all(params['drop_height_factor'] <= image.shape[2])\n and torch.all(params['drop_height_factor'] > 0)\n ),\n \"Height of drop should be greater than zero and less than image height.\",\n )\n\n KORNIA_CHECK(\n bool(torch.all(torch.abs(params['drop_width_factor']) <= image.shape[3])),\n \"Width of drop should be less than image width.\",\n )\n\n for i in range(image.shape[0]):\n number_of_drops: int = int(params['number_of_drops_factor'][i])\n # We generate tensor with maximum number of drops, and then remove unnecessary drops.\n\n coordinates_of_drops: Tensor = params['coordinates_factor'][i][:number_of_drops]\n height_of_drop: int = int(params['drop_height_factor'][i])\n width_of_drop: int = int(params['drop_width_factor'][i])\n\n # Generate start coordinates for each drop\n random_y_coords = coordinates_of_drops[:, 0] * (image.shape[2] - height_of_drop - 1)\n if width_of_drop > 0:\n random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] - width_of_drop - 1)\n else:\n random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] + width_of_drop - 1) - width_of_drop\n\n coords = torch.cat([random_y_coords[None], random_x_coords[None]], dim=0).to(image.device, dtype=torch.long)\n\n # Generate how our drop will look like into the image\n size_of_line: 
int = max(height_of_drop, abs(width_of_drop))\n x = torch.linspace(start=0, end=height_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)\n y = torch.linspace(start=0, end=width_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)\n # Draw lines\n for k in range(x.shape[0]):\n image[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255\n return image\n", "path": "kornia/augmentation/_2d/intensity/random_rain.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nimport torch\n\nfrom kornia.augmentation._2d.intensity.base import IntensityAugmentationBase2D\nfrom kornia.augmentation.random_generator._2d import RainGenerator\nfrom kornia.core import Tensor\nfrom kornia.core.check import KORNIA_CHECK\n\n\nclass RandomRain(IntensityAugmentationBase2D):\n r\"\"\"Add Random Rain to the image.\n\n Args:\n p: probability of applying the transformation.\n number_of_drops: number of drops per image\n drop_height: height of the drop in image(same for each drops in one image)\n drop_width: width of the drop in image(same for each drops in one image)\n Shape:\n - Input: :math:`(C, H, W)` or :math:`(B, C, H, W)`\n - Output: :math:`(B, C, H, W)`\n\n Examples:\n >>> rng = torch.manual_seed(0)\n >>> input = torch.rand(1, 1, 5, 5)\n >>> rain = RandomRain(p=1,drop_height=(1,2),drop_width=(1,2),number_of_drops=(1,1))\n >>> rain(input)\n tensor([[[[0.4963, 0.7843, 0.0885, 0.1320, 0.3074],\n [0.6341, 0.4901, 0.8964, 0.4556, 0.6323],\n [0.3489, 0.4017, 0.0223, 0.1689, 0.2939],\n [0.5185, 0.6977, 0.8000, 0.1610, 0.2823],\n [0.6816, 0.9152, 0.3971, 0.8742, 0.4194]]]])\n \"\"\"\n\n def __init__(\n self,\n same_on_batch: bool = False,\n p: float = 0.5,\n keepdim: bool = False,\n number_of_drops: tuple[int, int] = (1000, 2000),\n drop_height: tuple[int, int] = (5, 20),\n drop_width: tuple[int, int] = (-5, 5),\n ) -> None:\n super().__init__(p=p, same_on_batch=same_on_batch, p_batch=1.0, keepdim=keepdim)\n self._param_generator = RainGenerator(number_of_drops, drop_height, drop_width)\n\n def apply_transform(\n self, image: Tensor, params: dict[str, Tensor], flags: dict[str, Any], transform: Tensor | None = None\n ) -> Tensor:\n # Check array and drops size\n KORNIA_CHECK(image.shape[1] in {3, 1}, \"Number of color channels should be 1 or 3.\")\n KORNIA_CHECK(\n bool(\n torch.all(params['drop_height_factor'] <= image.shape[2])\n and torch.all(params['drop_height_factor'] > 0)\n ),\n \"Height of drop should be greater than zero and less than image height.\",\n )\n\n KORNIA_CHECK(\n bool(torch.all(torch.abs(params['drop_width_factor']) <= image.shape[3])),\n \"Width of drop should be less than image width.\",\n )\n modeified_img = image.clone()\n for i in range(image.shape[0]):\n number_of_drops: int = int(params['number_of_drops_factor'][i])\n # We generate tensor with maximum number of drops, and then remove unnecessary drops.\n\n coordinates_of_drops: Tensor = params['coordinates_factor'][i][:number_of_drops]\n height_of_drop: int = int(params['drop_height_factor'][i])\n width_of_drop: int = int(params['drop_width_factor'][i])\n\n # Generate start coordinates for each drop\n random_y_coords = coordinates_of_drops[:, 0] * (image.shape[2] - height_of_drop - 1)\n if width_of_drop > 0:\n random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] - width_of_drop - 1)\n else:\n random_x_coords = coordinates_of_drops[:, 1] * (image.shape[3] + width_of_drop - 1) - width_of_drop\n\n coords = torch.cat([random_y_coords[None], random_x_coords[None]], 
dim=0).to(image.device, dtype=torch.long)\n\n # Generate how our drop will look like into the image\n size_of_line: int = max(height_of_drop, abs(width_of_drop))\n x = torch.linspace(start=0, end=height_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)\n y = torch.linspace(start=0, end=width_of_drop, steps=size_of_line, dtype=torch.long).to(image.device)\n # Draw lines\n for k in range(x.shape[0]):\n modeified_img[i, :, coords[0] + x[k], coords[1] + y[k]] = 200 / 255\n return modeified_img\n", "path": "kornia/augmentation/_2d/intensity/random_rain.py"}]} | 2,136 | 307 |
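
The root cause in the RandomRain record above is an in-place write into the caller's tensor; the golden diff draws the drops onto a clone instead. A minimal sketch of that pattern in plain PyTorch (independent of kornia's actual augmentation classes, which it does not reproduce):

```python
import torch

def draw_rain_inplace(image):
    # Buggy pattern: writes into the caller's tensor, so every later
    # augmentation sees the rain streaks too.
    image[..., 0, :] = 200 / 255
    return image

def draw_rain_cloned(image):
    # Patched pattern: work on a copy and leave the input untouched.
    out = image.clone()
    out[..., 0, :] = 200 / 255
    return out

img = torch.zeros(1, 1, 5, 5)
_ = draw_rain_cloned(img)
assert img.sum() == 0   # input is unchanged by the cloned version
_ = draw_rain_inplace(img)
assert img.sum() > 0    # input was mutated by the buggy version
```
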
gh_patches_debug_3021 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1576 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add new method to count number of rows for MySQL datasources :electric_plug: :1234:
When MindsDB creates a new MySQL datasource, we get information for row counts by fetching all datasources. The problem here is that if the datasource is big, it takes a lot of time. We need a new get_row_count method to return the number of rows per datasource. The PR should include this method inside the MySQL class.
## Steps :male_detective: :female_detective:
- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/mysql/mysql.py#L51
- Example method:
```py
def get_row_count(self, query):
result = conn.execute(query)
return len(query)
```
- Push to staging branch
## Additional rewards :1st_place_medal:
Each code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/integrations/mysql/mysql.py`
Content:
```
1 import os
2 import shutil
3 import tempfile
4
5 from contextlib import closing
6 import mysql.connector
7
8 from lightwood.api import dtype
9 from mindsdb.integrations.base import Integration
10 from mindsdb.utilities.log import log
11
12
13 class MySQLConnectionChecker:
14 def __init__(self, **kwargs):
15 self.host = kwargs.get('host')
16 self.port = kwargs.get('port')
17 self.user = kwargs.get('user')
18 self.password = kwargs.get('password')
19 self.ssl = kwargs.get('ssl')
20 self.ssl_ca = kwargs.get('ssl_ca')
21 self.ssl_cert = kwargs.get('ssl_cert')
22 self.ssl_key = kwargs.get('ssl_key')
23
24 def _get_connnection(self):
25 config = {
26 "host": self.host,
27 "port": self.port,
28 "user": self.user,
29 "password": self.password
30 }
31 if self.ssl is True:
32 config['client_flags'] = [mysql.connector.constants.ClientFlag.SSL]
33 if self.ssl_ca is not None:
34 config["ssl_ca"] = self.ssl_ca
35 if self.ssl_cert is not None:
36 config["ssl_cert"] = self.ssl_cert
37 if self.ssl_key is not None:
38 config["ssl_key"] = self.ssl_key
39 return mysql.connector.connect(**config)
40
41 def check_connection(self):
42 try:
43 con = self._get_connnection()
44 with closing(con) as con:
45 connected = con.is_connected()
46 except Exception:
47 connected = False
48 return connected
49
50
51 class MySQL(Integration, MySQLConnectionChecker):
52 def __init__(self, config, name, db_info):
53 super().__init__(config, name)
54 self.user = db_info.get('user')
55 self.password = db_info.get('password')
56 self.host = db_info.get('host')
57 self.port = db_info.get('port')
58 self.ssl = db_info.get('ssl')
59 self.ssl_ca = db_info.get('ssl_ca')
60 self.ssl_cert = db_info.get('ssl_cert')
61 self.ssl_key = db_info.get('ssl_key')
62
63 def _to_mysql_table(self, dtype_dict, predicted_cols, columns):
64 subtype_map = {
65 dtype.integer: 'int',
66 dtype.float: 'double',
67 dtype.binary: 'bool',
68 dtype.date: 'Date',
69 dtype.datetime: 'Datetime',
70 dtype.binary: 'VARCHAR(500)',
71 dtype.categorical: 'VARCHAR(500)',
72 dtype.tags: 'VARCHAR(500)',
73 dtype.image: 'VARCHAR(500)',
74 dtype.video: 'VARCHAR(500)',
75 dtype.audio: 'VARCHAR(500)',
76 dtype.short_text: 'VARCHAR(500)',
77 dtype.rich_text: 'VARCHAR(500)',
78 dtype.array: 'VARCHAR(500)'
79 }
80
81 column_declaration = []
82 for name in columns:
83 try:
84 col_subtype = dtype_dict[name]
85 new_type = subtype_map[col_subtype]
86 column_declaration.append(f' `{name}` {new_type} ')
87 if name in predicted_cols:
88 column_declaration.append(f' `{name}_original` {new_type} ')
89 except Exception as e:
90 log.error(f'Error: can not determine mysql data type for column {name}: {e}')
91
92 return column_declaration
93
94 def _escape_table_name(self, name):
95 return '`' + name.replace('`', '``') + '`'
96
97 def _query(self, query):
98 con = self._get_connnection()
99 with closing(con) as con:
100 cur = con.cursor(dictionary=True, buffered=True)
101 cur.execute(query)
102 res = True
103 try:
104 res = cur.fetchall()
105 except Exception:
106 pass
107 con.commit()
108
109 return res
110
111 def _get_connect_string(self, table):
112 user = f"{self.config['api']['mysql']['user']}_{self.name}"
113 password = self.config['api']['mysql']['password']
114 host = self.config['api']['mysql']['host']
115 port = self.config['api']['mysql']['port']
116
117 if password is None or password == '':
118 connect = f'mysql://{user}@{host}:{port}/mindsdb/{table}'
119 else:
120 connect = f'mysql://{user}:{password}@{host}:{port}/mindsdb/{table}'
121
122 return connect
123
124 def setup(self):
125 self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')
126 self._query(f'CREATE DATABASE IF NOT EXISTS {self.mindsdb_database}')
127
128 connect = self._get_connect_string('predictors')
129
130 q = f"""
131 CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
132 name VARCHAR(500),
133 status VARCHAR(500),
134 accuracy VARCHAR(500),
135 predict VARCHAR(500),
136 select_data_query VARCHAR(500),
137 external_datasource VARCHAR(500),
138 training_options VARCHAR(500),
139 key name_key (name)
140 ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';
141 """
142 self._query(q)
143
144 connect = self._get_connect_string('commands')
145
146 q = f"""
147 CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
148 command VARCHAR(500),
149 key command_key (command)
150 ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';
151 """
152 self._query(q)
153
154 def register_predictors(self, model_data_arr):
155 for model_meta in model_data_arr:
156 name = model_meta['name']
157 predict = model_meta['predict']
158 if not isinstance(predict, list):
159 predict = [predict]
160 columns_sql = ','.join(self._to_mysql_table(
161 model_meta['dtype_dict'],
162 predict,
163 list(model_meta['dtype_dict'].keys())
164 ))
165 columns_sql += ',`when_data` varchar(500)'
166 columns_sql += ',`select_data_query` varchar(500)'
167 columns_sql += ',`external_datasource` varchar(500)'
168 for col in predict:
169 columns_sql += f',`{col}_confidence` double'
170 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
171 columns_sql += f',`{col}_min` double'
172 columns_sql += f',`{col}_max` double'
173 columns_sql += f',`{col}_explain` varchar(500)'
174
175 connect = self._get_connect_string(name)
176
177 self.unregister_predictor(name)
178 q = f"""
179 CREATE TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (
180 {columns_sql},
181 index when_data_index (when_data),
182 index select_data_query_index (select_data_query),
183 index external_datasource_index (external_datasource)
184 ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';
185 """
186 self._query(q)
187
188 def unregister_predictor(self, name):
189 q = f"""
190 drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};
191 """
192 self._query(q)
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mindsdb/integrations/mysql/mysql.py b/mindsdb/integrations/mysql/mysql.py
--- a/mindsdb/integrations/mysql/mysql.py
+++ b/mindsdb/integrations/mysql/mysql.py
@@ -190,3 +190,10 @@
drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};
"""
self._query(q)
+
+ def get_row_count(self, query):
+ q = f"""
+ SELECT COUNT(*) as count
+ FROM ({query}) as query;"""
+ result = self._query(q)
+ return result[0]['count']
| {"golden_diff": "diff --git a/mindsdb/integrations/mysql/mysql.py b/mindsdb/integrations/mysql/mysql.py\n--- a/mindsdb/integrations/mysql/mysql.py\n+++ b/mindsdb/integrations/mysql/mysql.py\n@@ -190,3 +190,10 @@\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n+\n+ def get_row_count(self, query):\n+ q = f\"\"\" \n+ SELECT COUNT(*) as count\n+ FROM ({query}) as query;\"\"\"\n+ result = self._query(q)\n+ return result[0]['count']\n", "issue": "Add new method to count number of rows for MySQL datasources :electric_plug: :1234: \nWhen MindsDB creates a new MySQL datasource we get information for row counts by fetching all datasources. The problem here is that if datasource is big it takes a lot of time. We need a new get_row_count method to return the number of rows per datasource. The PR should include this method inside the PostgreSQL class .\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/mysql/mysql.py#L51\r\n- Example method:\r\n```py\r\ndef get_row_count(self, query):\r\n result = conn.execute(query)\r\n return len(query)\r\n```\r\n- Push to staging branch\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\r\n \r\n\r\n\n", "before_files": [{"content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.ssl = kwargs.get('ssl')\n self.ssl_ca = kwargs.get('ssl_ca')\n self.ssl_cert = kwargs.get('ssl_cert')\n self.ssl_key = kwargs.get('ssl_key')\n\n def _get_connnection(self):\n config = {\n \"host\": self.host,\n \"port\": self.port,\n \"user\": self.user,\n \"password\": self.password\n }\n if self.ssl is True:\n config['client_flags'] = [mysql.connector.constants.ClientFlag.SSL]\n if self.ssl_ca is not None:\n config[\"ssl_ca\"] = self.ssl_ca\n if self.ssl_cert is not None:\n config[\"ssl_cert\"] = self.ssl_cert\n if self.ssl_key is not None:\n config[\"ssl_key\"] = self.ssl_key\n return mysql.connector.connect(**config)\n\n def check_connection(self):\n try:\n con = self._get_connnection()\n with closing(con) as con:\n connected = con.is_connected()\n except Exception:\n connected = False\n return connected\n\n\nclass MySQL(Integration, MySQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.ssl = db_info.get('ssl')\n self.ssl_ca = db_info.get('ssl_ca')\n self.ssl_cert = db_info.get('ssl_cert')\n self.ssl_key = db_info.get('ssl_key')\n\n def _to_mysql_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: 'int',\n dtype.float: 'double',\n dtype.binary: 'bool',\n dtype.date: 'Date',\n dtype.datetime: 'Datetime',\n dtype.binary: 'VARCHAR(500)',\n dtype.categorical: 'VARCHAR(500)',\n dtype.tags: 
'VARCHAR(500)',\n dtype.image: 'VARCHAR(500)',\n dtype.video: 'VARCHAR(500)',\n dtype.audio: 'VARCHAR(500)',\n dtype.short_text: 'VARCHAR(500)',\n dtype.rich_text: 'VARCHAR(500)',\n dtype.array: 'VARCHAR(500)'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' `{name}` {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' `{name}_original` {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine mysql data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '`' + name.replace('`', '``') + '`'\n\n def _query(self, query):\n con = self._get_connnection()\n with closing(con) as con:\n cur = con.cursor(dictionary=True, buffered=True)\n cur.execute(query)\n res = True\n try:\n res = cur.fetchall()\n except Exception:\n pass\n con.commit()\n\n return res\n\n def _get_connect_string(self, table):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n if password is None or password == '':\n connect = f'mysql://{user}@{host}:{port}/mindsdb/{table}'\n else:\n connect = f'mysql://{user}:{password}@{host}:{port}/mindsdb/{table}'\n\n return connect\n\n def setup(self):\n self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')\n self._query(f'CREATE DATABASE IF NOT EXISTS {self.mindsdb_database}')\n\n connect = self._get_connect_string('predictors')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name VARCHAR(500),\n status VARCHAR(500),\n accuracy VARCHAR(500),\n predict VARCHAR(500),\n select_data_query VARCHAR(500),\n external_datasource VARCHAR(500),\n training_options VARCHAR(500),\n key name_key (name)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n connect = self._get_connect_string('commands')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command VARCHAR(500),\n key command_key (command)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_mysql_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',`when_data` varchar(500)'\n columns_sql += ',`select_data_query` varchar(500)'\n columns_sql += ',`external_datasource` varchar(500)'\n for col in predict:\n columns_sql += f',`{col}_confidence` double'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',`{col}_min` double'\n columns_sql += f',`{col}_max` double'\n columns_sql += f',`{col}_explain` varchar(500)'\n\n connect = self._get_connect_string(name)\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql},\n index when_data_index (when_data),\n index select_data_query_index (select_data_query),\n index external_datasource_index (external_datasource)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n drop table if exists 
{self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n", "path": "mindsdb/integrations/mysql/mysql.py"}], "after_files": [{"content": "import os\nimport shutil\nimport tempfile\n\nfrom contextlib import closing\nimport mysql.connector\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass MySQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.ssl = kwargs.get('ssl')\n self.ssl_ca = kwargs.get('ssl_ca')\n self.ssl_cert = kwargs.get('ssl_cert')\n self.ssl_key = kwargs.get('ssl_key')\n\n def _get_connnection(self):\n config = {\n \"host\": self.host,\n \"port\": self.port,\n \"user\": self.user,\n \"password\": self.password\n }\n if self.ssl is True:\n config['client_flags'] = [mysql.connector.constants.ClientFlag.SSL]\n if self.ssl_ca is not None:\n config[\"ssl_ca\"] = self.ssl_ca\n if self.ssl_cert is not None:\n config[\"ssl_cert\"] = self.ssl_cert\n if self.ssl_key is not None:\n config[\"ssl_key\"] = self.ssl_key\n return mysql.connector.connect(**config)\n\n def check_connection(self):\n try:\n con = self._get_connnection()\n with closing(con) as con:\n connected = con.is_connected()\n except Exception:\n connected = False\n return connected\n\n\nclass MySQL(Integration, MySQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.ssl = db_info.get('ssl')\n self.ssl_ca = db_info.get('ssl_ca')\n self.ssl_cert = db_info.get('ssl_cert')\n self.ssl_key = db_info.get('ssl_key')\n\n def _to_mysql_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: 'int',\n dtype.float: 'double',\n dtype.binary: 'bool',\n dtype.date: 'Date',\n dtype.datetime: 'Datetime',\n dtype.binary: 'VARCHAR(500)',\n dtype.categorical: 'VARCHAR(500)',\n dtype.tags: 'VARCHAR(500)',\n dtype.image: 'VARCHAR(500)',\n dtype.video: 'VARCHAR(500)',\n dtype.audio: 'VARCHAR(500)',\n dtype.short_text: 'VARCHAR(500)',\n dtype.rich_text: 'VARCHAR(500)',\n dtype.array: 'VARCHAR(500)'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' `{name}` {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' `{name}_original` {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine mysql data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '`' + name.replace('`', '``') + '`'\n\n def _query(self, query):\n con = self._get_connnection()\n with closing(con) as con:\n cur = con.cursor(dictionary=True, buffered=True)\n cur.execute(query)\n res = True\n try:\n res = cur.fetchall()\n except Exception:\n pass\n con.commit()\n\n return res\n\n def _get_connect_string(self, table):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n if password is None or password == '':\n connect = f'mysql://{user}@{host}:{port}/mindsdb/{table}'\n else:\n connect = f'mysql://{user}:{password}@{host}:{port}/mindsdb/{table}'\n\n 
return connect\n\n def setup(self):\n self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')\n self._query(f'CREATE DATABASE IF NOT EXISTS {self.mindsdb_database}')\n\n connect = self._get_connect_string('predictors')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name VARCHAR(500),\n status VARCHAR(500),\n accuracy VARCHAR(500),\n predict VARCHAR(500),\n select_data_query VARCHAR(500),\n external_datasource VARCHAR(500),\n training_options VARCHAR(500),\n key name_key (name)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n connect = self._get_connect_string('commands')\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command VARCHAR(500),\n key command_key (command)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_mysql_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',`when_data` varchar(500)'\n columns_sql += ',`select_data_query` varchar(500)'\n columns_sql += ',`external_datasource` varchar(500)'\n for col in predict:\n columns_sql += f',`{col}_confidence` double'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',`{col}_min` double'\n columns_sql += f',`{col}_max` double'\n columns_sql += f',`{col}_explain` varchar(500)'\n\n connect = self._get_connect_string(name)\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql},\n index when_data_index (when_data),\n index select_data_query_index (select_data_query),\n index external_datasource_index (external_datasource)\n ) ENGINE=FEDERATED CHARSET=utf8 CONNECTION='{connect}';\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\" \n SELECT COUNT(*) as count\n FROM ({query}) as query;\"\"\"\n result = self._query(q)\n return result[0]['count']\n", "path": "mindsdb/integrations/mysql/mysql.py"}]} | 2,505 | 144 |
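
The `get_row_count` added in the golden diff above wraps the user query in a `SELECT COUNT(*)` so only a single row travels back from MySQL. A hedged sketch of the same idea against a generic DB-API connection; `sqlite3` and the table `t` are stand-ins for illustration, not MindsDB code:

```python
import sqlite3  # stands in for any DB-API 2.0 connection

def get_row_count(conn, query):
    # Wrap the original query instead of fetching all of its rows,
    # so the database does the counting.
    wrapped = f"SELECT COUNT(*) AS count FROM ({query}) AS query"
    cur = conn.execute(wrapped)
    return cur.fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
print(get_row_count(conn, "SELECT * FROM t"))  # 1000
```
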
gh_patches_debug_8859 | rasdani/github-patches | git_diff | angr__angr-2453 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error being raised while exploring ls
This error is being thrown after a simple exploration:
```
Traceback (most recent call last):
File "try.py", line 5, in <module>
sm.explore()
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py", line 239, in explore
self.run(stash=stash, n=n, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py", line 261, in run
self.step(stash=stash, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/misc/hookset.py", line 75, in __call__
result = current_hook(self.func.__self__, *args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/exploration_techniques/explorer.py", line 96, in step
return simgr.step(stash=stash, extra_stop_points=base_extra_stop_points | self._extra_stop_points, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/misc/hookset.py", line 80, in __call__
return self.func(*args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py", line 346, in step
successors = self.step_state(state, successor_func=successor_func, **run_args)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py", line 383, in step_state
successors = self.successors(state, successor_func=successor_func, **run_args)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py", line 422, in successors
return self._project.factory.successors(state, **run_args)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/factory.py", line 60, in successors
return self.default_engine.process(*args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/vex/light/slicing.py", line 19, in process
return super().process(*args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/engine.py", line 149, in process
self.process_successors(self.successors, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/failure.py", line 21, in process_successors
return super().process_successors(successors, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/syscall.py", line 38, in process_successors
return self.process_procedure(state, successors, sys_procedure, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/procedure.py", line 37, in process_procedure
inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_procedure.py", line 230, in execute
r = getattr(inst, inst.run_func)(*sim_args, **inst.kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/procedures/linux_kernel/iovec.py", line 21, in run
if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py", line 126, in concrete_shortcut_bool
return f(self, *args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/sim_action_object.py", line 57, in ast_stripper
return f(*new_args, **new_kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py", line 89, in wrapped_f
return f(*args, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py", line 585, in is_true
return self._solver.is_true(e, extra_constraints=self._adjust_constraint_list(extra_constraints), exact=exact)
File "/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontend_mixins/concrete_handler_mixin.py", line 53, in is_true
return super(ConcreteHandlerMixin, self).is_true(e, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontend_mixins/constraint_filter_mixin.py", line 60, in is_true
return super(ConstraintFilterMixin, self).is_true(e, extra_constraints=ec, **kwargs)
File "/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontends/composite_frontend.py", line 349, in is_true
r = ms.is_true(e, extra_constraints=extra_constraints, exact=exact)
File "/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontends/full_frontend.py", line 184, in is_true
return e.is_true()
AttributeError: 'NotImplementedType' object has no attribute 'is_true'
```
**Environment Information.**
angr environment report
=============================
Date: 2020-12-30 12:39:53.872698
Running in virtual environment at /home/berlinm/.virtualenvs/angr
Platform: linux-x86_64
Python version: 3.7.9 (7e6e2bb30ac5, Nov 18 2020, 10:55:52)
[PyPy 7.3.3-beta0 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
######## angr #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/angr
Pip version angr 9.0.5034
Couldn't find git info
######## ailment #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/ailment
Pip version ailment 9.0.5034
Couldn't find git info
######## cle #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/cle
Pip version cle 9.0.5034
Couldn't find git info
######## pyvex #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/pyvex
Pip version pyvex 9.0.5034
Couldn't find git info
######## claripy #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/claripy
Pip version claripy 9.0.5034
Couldn't find git info
######## archinfo #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/archinfo
Pip version archinfo 9.0.5034
Couldn't find git info
######## z3 #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/z3
Pip version z3-solver 4.8.9.0
Couldn't find git info
######## unicorn #########
Python found it in /home/berlinm/.local/lib/python3.7/site-packages/unicorn
Pip version unicorn 1.0.2rc4
Couldn't find git info
######### Native Module Info ##########
angr: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/angr/lib/angr_native.so', handle 61bfe90 at 0x638ebb8>
unicorn: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/unicorn/lib/libunicorn.so', handle 2393fd0 at 0x7fcacfae5da8>
pyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x00007fcad0b9b398>
z3: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/z3/lib/libz3.so', handle 2fc8df0 at 0x3206758>
**To Reproduce.**
project = angr.Project('ls')
sm = project.factory.simulation_manager(project.factory.call_state(0x404a70)) # Some function's address
sm.explore()
# The binary (inside the zip):
[ls.zip](https://github.com/angr/angr/files/5753958/ls.zip)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `angr/procedures/linux_kernel/iovec.py`
Content:
```
1 import angr
2 from ..posix.read import read
3 from ..posix.write import write
4 from ...sim_type import register_types, parse_types
5
6 register_types(parse_types("""
7 struct iovec {
8 void *iov_base; /* Starting address */
9 size_t iov_len; /* Number of bytes to transfer */
10 };
11 """))
12
13 class readv(angr.SimProcedure):
14 def run(self, fd, iovec, iovcnt):
15 if iovec.symbolic or iovcnt.symbolic:
16 raise angr.errors.SimPosixError("Can't handle symbolic arguments to readv")
17 iovcnt = self.state.solver.eval(iovcnt)
18 res = 0
19 for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:
20 tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len)
21 if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):
22 return tmpres
23
24 return res
25
26
27 class writev(angr.SimProcedure):
28 def run(self, fd, iovec, iovcnt):
29 if iovec.symbolic or iovcnt.symbolic:
30 raise angr.errors.SimPosixError("Can't handle symbolic arguments to writev")
31 iovcnt = self.state.solver.eval(iovcnt)
32 res = 0
33 for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:
34 tmpres = self.inline_call(write, fd, element.iov_base, element.iov_len).ret_expr
35 if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):
36 return tmpres
37
38 return res
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/angr/procedures/linux_kernel/iovec.py b/angr/procedures/linux_kernel/iovec.py
--- a/angr/procedures/linux_kernel/iovec.py
+++ b/angr/procedures/linux_kernel/iovec.py
@@ -17,7 +17,7 @@
iovcnt = self.state.solver.eval(iovcnt)
res = 0
for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:
- tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len)
+ tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len).ret_expr
if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):
return tmpres
| {"golden_diff": "diff --git a/angr/procedures/linux_kernel/iovec.py b/angr/procedures/linux_kernel/iovec.py\n--- a/angr/procedures/linux_kernel/iovec.py\n+++ b/angr/procedures/linux_kernel/iovec.py\n@@ -17,7 +17,7 @@\n iovcnt = self.state.solver.eval(iovcnt)\n res = 0\n for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:\n- tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len)\n+ tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len).ret_expr\n if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\n return tmpres\n", "issue": "Error being raised while exploring ls\nThis error is being thrown after a simple exploration:\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"try.py\", line 5, in <module>\r\n sm.explore()\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py\", line 239, in explore\r\n self.run(stash=stash, n=n, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py\", line 261, in run\r\n self.step(stash=stash, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/misc/hookset.py\", line 75, in __call__\r\n result = current_hook(self.func.__self__, *args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/exploration_techniques/explorer.py\", line 96, in step\r\n return simgr.step(stash=stash, extra_stop_points=base_extra_stop_points | self._extra_stop_points, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/misc/hookset.py\", line 80, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py\", line 346, in step\r\n successors = self.step_state(state, successor_func=successor_func, **run_args)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py\", line 383, in step_state\r\n successors = self.successors(state, successor_func=successor_func, **run_args)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_manager.py\", line 422, in successors\r\n return self._project.factory.successors(state, **run_args)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/factory.py\", line 60, in successors\r\n return self.default_engine.process(*args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/vex/light/slicing.py\", line 19, in process\r\n return super().process(*args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/engine.py\", line 149, in process\r\n self.process_successors(self.successors, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/failure.py\", line 21, in process_successors\r\n return super().process_successors(successors, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/syscall.py\", line 38, in process_successors\r\n return self.process_procedure(state, successors, sys_procedure, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/engines/procedure.py\", line 37, in process_procedure\r\n inst = procedure.execute(state, successors, ret_to=ret_to, arguments=arguments)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/sim_procedure.py\", line 230, in execute\r\n r = getattr(inst, inst.run_func)(*sim_args, **inst.kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/procedures/linux_kernel/iovec.py\", line 21, in run\r\n if 
self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py\", line 126, in concrete_shortcut_bool\r\n return f(self, *args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/sim_action_object.py\", line 57, in ast_stripper\r\n return f(*new_args, **new_kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py\", line 89, in wrapped_f\r\n return f(*args, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/angr/state_plugins/solver.py\", line 585, in is_true\r\n return self._solver.is_true(e, extra_constraints=self._adjust_constraint_list(extra_constraints), exact=exact)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontend_mixins/concrete_handler_mixin.py\", line 53, in is_true\r\n return super(ConcreteHandlerMixin, self).is_true(e, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontend_mixins/constraint_filter_mixin.py\", line 60, in is_true\r\n return super(ConstraintFilterMixin, self).is_true(e, extra_constraints=ec, **kwargs)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontends/composite_frontend.py\", line 349, in is_true\r\n r = ms.is_true(e, extra_constraints=extra_constraints, exact=exact)\r\n File \"/home/berlinm/.local/lib/python3.7/site-packages/claripy/frontends/full_frontend.py\", line 184, in is_true\r\n return e.is_true()\r\nAttributeError: 'NotImplementedType' object has no attribute 'is_true'\r\n```\r\n\r\n**Environment Information.**\r\nangr environment report\r\n=============================\r\nDate: 2020-12-30 12:39:53.872698\r\nRunning in virtual environment at /home/berlinm/.virtualenvs/angr\r\nPlatform: linux-x86_64\r\nPython version: 3.7.9 (7e6e2bb30ac5, Nov 18 2020, 10:55:52)\r\n[PyPy 7.3.3-beta0 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]\r\n######## angr #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/angr\r\nPip version angr 9.0.5034\r\nCouldn't find git info\r\n######## ailment #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/ailment\r\nPip version ailment 9.0.5034\r\nCouldn't find git info\r\n######## cle #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/cle\r\nPip version cle 9.0.5034\r\nCouldn't find git info\r\n######## pyvex #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/pyvex\r\nPip version pyvex 9.0.5034\r\nCouldn't find git info\r\n######## claripy #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/claripy\r\nPip version claripy 9.0.5034\r\nCouldn't find git info\r\n######## archinfo #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/archinfo\r\nPip version archinfo 9.0.5034\r\nCouldn't find git info\r\n######## z3 #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/z3\r\nPip version z3-solver 4.8.9.0\r\nCouldn't find git info\r\n######## unicorn #########\r\nPython found it in /home/berlinm/.local/lib/python3.7/site-packages/unicorn\r\nPip version unicorn 1.0.2rc4\r\nCouldn't find git info\r\n######### Native Module Info ##########\r\nangr: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/angr/lib/angr_native.so', handle 61bfe90 at 0x638ebb8>\r\nunicorn: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/unicorn/lib/libunicorn.so', handle 2393fd0 at 
0x7fcacfae5da8>\r\npyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x00007fcad0b9b398>\r\nz3: <CDLL '/home/berlinm/.local/lib/python3.7/site-packages/z3/lib/libz3.so', handle 2fc8df0 at 0x3206758>\r\n\r\n\r\n**To Reproduce.**\r\n\r\nproject = angr.Project('ls')\r\n\r\nsm = project.factory.simulation_manager(project.factory.call_state(0x404a70)) # Some function's address\r\nsm.explore()\r\n\r\n\r\n# The binary (inside the zip):\r\n[ls.zip](https://github.com/angr/angr/files/5753958/ls.zip)\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import angr\nfrom ..posix.read import read\nfrom ..posix.write import write\nfrom ...sim_type import register_types, parse_types\n\nregister_types(parse_types(\"\"\"\nstruct iovec {\n void *iov_base; /* Starting address */\n size_t iov_len; /* Number of bytes to transfer */\n};\n\"\"\"))\n\nclass readv(angr.SimProcedure):\n def run(self, fd, iovec, iovcnt):\n if iovec.symbolic or iovcnt.symbolic:\n raise angr.errors.SimPosixError(\"Can't handle symbolic arguments to readv\")\n iovcnt = self.state.solver.eval(iovcnt)\n res = 0\n for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:\n tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len)\n if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\n return tmpres\n\n return res\n\n\nclass writev(angr.SimProcedure):\n def run(self, fd, iovec, iovcnt):\n if iovec.symbolic or iovcnt.symbolic:\n raise angr.errors.SimPosixError(\"Can't handle symbolic arguments to writev\")\n iovcnt = self.state.solver.eval(iovcnt)\n res = 0\n for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:\n tmpres = self.inline_call(write, fd, element.iov_base, element.iov_len).ret_expr\n if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\n return tmpres\n\n return res\n", "path": "angr/procedures/linux_kernel/iovec.py"}], "after_files": [{"content": "import angr\nfrom ..posix.read import read\nfrom ..posix.write import write\nfrom ...sim_type import register_types, parse_types\n\nregister_types(parse_types(\"\"\"\nstruct iovec {\n void *iov_base; /* Starting address */\n size_t iov_len; /* Number of bytes to transfer */\n};\n\"\"\"))\n\nclass readv(angr.SimProcedure):\n def run(self, fd, iovec, iovcnt):\n if iovec.symbolic or iovcnt.symbolic:\n raise angr.errors.SimPosixError(\"Can't handle symbolic arguments to readv\")\n iovcnt = self.state.solver.eval(iovcnt)\n res = 0\n for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:\n tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len).ret_expr\n if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\n return tmpres\n\n return res\n\n\nclass writev(angr.SimProcedure):\n def run(self, fd, iovec, iovcnt):\n if iovec.symbolic or iovcnt.symbolic:\n raise angr.errors.SimPosixError(\"Can't handle symbolic arguments to writev\")\n iovcnt = self.state.solver.eval(iovcnt)\n res = 0\n for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:\n tmpres = self.inline_call(write, fd, element.iov_base, element.iov_len).ret_expr\n if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):\n return tmpres\n\n return res\n", "path": "angr/procedures/linux_kernel/iovec.py"}]} | 2,814 | 176 |
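
The traceback in the record above suggests the root cause: `inline_call()` returns the `SimProcedure` instance itself, so `solver.SLT(tmpres, 0)` falls back to `NotImplemented` and `is_true()` then has nothing to evaluate; the patch takes the call's return AST via `.ret_expr`, exactly as `writev` already did. A minimal sketch of the corrected `readv`, mirroring the patched file shipped in this record (the relative import and angr APIs are taken as quoted there, not independently verified):

```python
import angr
from ..posix.read import read  # relative import exactly as in the original module


class readv(angr.SimProcedure):
    def run(self, fd, iovec, iovcnt):
        if iovec.symbolic or iovcnt.symbolic:
            raise angr.errors.SimPosixError("Can't handle symbolic arguments to readv")
        iovcnt = self.state.solver.eval(iovcnt)
        res = 0
        for element in self.state.mem[iovec].struct.iovec.array(iovcnt).resolved:
            # .ret_expr hands the solver a claripy AST; passing the bare
            # SimProcedure object is what made SLT/is_true blow up above.
            tmpres = self.inline_call(read, fd, element.iov_base, element.iov_len).ret_expr
            if self.state.solver.is_true(self.state.solver.SLT(tmpres, 0)):
                return tmpres
        return res
```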
gh_patches_debug_13431 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1755 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CSV export fails on converting uuid to json
Problematic JSON structure in https://grand-challenge.org/api/v1/cases/images/redacted_uuid/?format=csv
```
{"pk":"redacted_uuid","name":"redacted.png","study":null,"files":[{"pk":"redacted_uuid","image":"redacted_uuid","file":"https://grand-challenge.org/media/images/...mhd","image_type":"MHD"},{"pk":"redacted_uuid","image":"09b3b3d6-0994-43d2-b6a9-eaff634b8805","file":"https://grand-challenge.org/media/images/...zraw","image_type":"MHD"}],"reader_study_set":["https://grand-challenge.org/api/v1/reader-studies/redacted_uuid/","https://grand-challenge.org/api/v1/reader-studies/redacted_uuid/"],"archive_set":[],"job_set":[],"width":596,"height":596,"depth":null,"color_space":"RGB","modality":null,"eye_choice":"NA","stereoscopic_choice":null,"field_of_view":null,"shape_without_color":[596,596],"shape":[596,596,3],"voxel_width_mm":null,"voxel_height_mm":null,"voxel_depth_mm":null,"api_url":"https://grand-challenge.org/api/v1/cases/images/redacted_uuid/"}
```
Probably due to trying to serialize the list of files to json in https://github.com/comic/grand-challenge.org/blob/14bc3dd4002756e9cf4a32bb0f238859a9175252/app/grandchallenge/core/renderers.py#L26-L27
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/core/renderers.py`
Content:
```
1 import json
2
3 from rest_framework_csv.renderers import CSVRenderer
4
5
6 class PaginatedCSVRenderer(CSVRenderer):
7 results_field = "results"
8
9 def render(self, data, *args, **kwargs):
10 if self.results_field in data:
11 data = data[self.results_field]
12
13 return super().render(data, *args, **kwargs)
14
15 def flatten_data(self, data):
16 """
17 Create a dictionary that is 1 level deep, with nested values serialized
18 as json. This means that the header rows are now consistent.
19 """
20 for row in data:
21 flat_row = {k: self._flatten_value(v) for k, v in row.items()}
22 yield flat_row
23
24 @staticmethod
25 def _flatten_value(value):
26 if isinstance(value, (dict, list)):
27 return json.dumps(value)
28 else:
29 return value
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/core/renderers.py b/app/grandchallenge/core/renderers.py
--- a/app/grandchallenge/core/renderers.py
+++ b/app/grandchallenge/core/renderers.py
@@ -1,5 +1,7 @@
import json
+from rest_framework.settings import api_settings
+from rest_framework.utils.encoders import JSONEncoder
from rest_framework_csv.renderers import CSVRenderer
@@ -24,6 +26,11 @@
@staticmethod
def _flatten_value(value):
if isinstance(value, (dict, list)):
- return json.dumps(value)
+ return json.dumps(
+ value,
+ cls=JSONEncoder,
+ ensure_ascii=not api_settings.UNICODE_JSON,
+ allow_nan=not api_settings.STRICT_JSON,
+ )
else:
return value
| {"golden_diff": "diff --git a/app/grandchallenge/core/renderers.py b/app/grandchallenge/core/renderers.py\n--- a/app/grandchallenge/core/renderers.py\n+++ b/app/grandchallenge/core/renderers.py\n@@ -1,5 +1,7 @@\n import json\n \n+from rest_framework.settings import api_settings\n+from rest_framework.utils.encoders import JSONEncoder\n from rest_framework_csv.renderers import CSVRenderer\n \n \n@@ -24,6 +26,11 @@\n @staticmethod\n def _flatten_value(value):\n if isinstance(value, (dict, list)):\n- return json.dumps(value)\n+ return json.dumps(\n+ value,\n+ cls=JSONEncoder,\n+ ensure_ascii=not api_settings.UNICODE_JSON,\n+ allow_nan=not api_settings.STRICT_JSON,\n+ )\n else:\n return value\n", "issue": "CSV export fails on converting uuid to json\nProblematic JSON structure in https://grand-challenge.org/api/v1/cases/images/redacted_uuid/?format=csv\r\n\r\n```\r\n{\"pk\":\"redacted_uuid\",\"name\":\"redacted.png\",\"study\":null,\"files\":[{\"pk\":\"redacted_uuid\",\"image\":\"redacted_uuid\",\"file\":\"https://grand-challenge.org/media/images/...mhd\",\"image_type\":\"MHD\"},{\"pk\":\"redacted_uuid\",\"image\":\"09b3b3d6-0994-43d2-b6a9-eaff634b8805\",\"file\":\"https://grand-challenge.org/media/images/...zraw\",\"image_type\":\"MHD\"}],\"reader_study_set\":[\"https://grand-challenge.org/api/v1/reader-studies/redacted_uuid/\",\"https://grand-challenge.org/api/v1/reader-studies/redacted_uuid/\"],\"archive_set\":[],\"job_set\":[],\"width\":596,\"height\":596,\"depth\":null,\"color_space\":\"RGB\",\"modality\":null,\"eye_choice\":\"NA\",\"stereoscopic_choice\":null,\"field_of_view\":null,\"shape_without_color\":[596,596],\"shape\":[596,596,3],\"voxel_width_mm\":null,\"voxel_height_mm\":null,\"voxel_depth_mm\":null,\"api_url\":\"https://grand-challenge.org/api/v1/cases/images/redacted_uuid/\"}\r\n```\r\n\r\nProbably due to trying to serialize the list of files to json in https://github.com/comic/grand-challenge.org/blob/14bc3dd4002756e9cf4a32bb0f238859a9175252/app/grandchallenge/core/renderers.py#L26-L27\n", "before_files": [{"content": "import json\n\nfrom rest_framework_csv.renderers import CSVRenderer\n\n\nclass PaginatedCSVRenderer(CSVRenderer):\n results_field = \"results\"\n\n def render(self, data, *args, **kwargs):\n if self.results_field in data:\n data = data[self.results_field]\n\n return super().render(data, *args, **kwargs)\n\n def flatten_data(self, data):\n \"\"\"\n Create a dictionary that is 1 level deep, with nested values serialized\n as json. This means that the header rows are now consistent.\n \"\"\"\n for row in data:\n flat_row = {k: self._flatten_value(v) for k, v in row.items()}\n yield flat_row\n\n @staticmethod\n def _flatten_value(value):\n if isinstance(value, (dict, list)):\n return json.dumps(value)\n else:\n return value\n", "path": "app/grandchallenge/core/renderers.py"}], "after_files": [{"content": "import json\n\nfrom rest_framework.settings import api_settings\nfrom rest_framework.utils.encoders import JSONEncoder\nfrom rest_framework_csv.renderers import CSVRenderer\n\n\nclass PaginatedCSVRenderer(CSVRenderer):\n results_field = \"results\"\n\n def render(self, data, *args, **kwargs):\n if self.results_field in data:\n data = data[self.results_field]\n\n return super().render(data, *args, **kwargs)\n\n def flatten_data(self, data):\n \"\"\"\n Create a dictionary that is 1 level deep, with nested values serialized\n as json. 
This means that the header rows are now consistent.\n \"\"\"\n for row in data:\n flat_row = {k: self._flatten_value(v) for k, v in row.items()}\n yield flat_row\n\n @staticmethod\n def _flatten_value(value):\n if isinstance(value, (dict, list)):\n return json.dumps(\n value,\n cls=JSONEncoder,\n ensure_ascii=not api_settings.UNICODE_JSON,\n allow_nan=not api_settings.STRICT_JSON,\n )\n else:\n return value\n", "path": "app/grandchallenge/core/renderers.py"}]} | 887 | 180 |
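
The failure in this record comes down to the standard library: plain `json.dumps()` cannot serialize `uuid.UUID` objects, which is exactly what the nested `files` entries carry, so the patch swaps in DRF's `JSONEncoder` together with the project's JSON settings. Below is a small self-contained illustration of the same idea without DRF; `UUIDFriendlyEncoder` and `row_value` are made-up names for this sketch, not code the patch itself uses:

```python
import json
import uuid


class UUIDFriendlyEncoder(json.JSONEncoder):
    """Stringify UUIDs (roughly what the encoder used by the patch does for this case)."""

    def default(self, o):
        if isinstance(o, uuid.UUID):
            return str(o)
        return super().default(o)


row_value = [{"pk": uuid.uuid4(), "image_type": "MHD"}]

try:
    json.dumps(row_value)  # plain dumps: TypeError, UUID is not JSON serializable
except TypeError as exc:
    print("plain json.dumps fails:", exc)

print(json.dumps(row_value, cls=UUIDFriendlyEncoder))  # UUIDs become strings
```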
gh_patches_debug_494 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-1455 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Queue full error - Multi-GPU 1M custom dataset
```
Traceback (most recent call last):
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/workspace/OpenNMT-py/train.py", line 127, in batch_producer
q.put(b, False)
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/queues.py", line 83, in put
raise Full
queue.Full
[2019-06-04 07:32:18,291 INFO] Step 1200/100000; acc: 79.56; ppl: 1.82; xent: 0.60; lr: 1.00000; 33996/13919 tok/s; 401 sec
Traceback (most recent call last):
File "train.py", line 196, in <module>
main(opt)
File "train.py", line 78, in main
p.join()
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py", line 140, in join
res = self._popen.wait(timeout)
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
pid, sts = os.waitpid(self.pid, flag)
File "train.py", line 180, in signal_handler
raise Exception(msg)
Exception:
-- Tracebacks above this line can probably
be ignored --
Traceback (most recent call last):
File "/workspace/OpenNMT-py/train.py", line 138, in run
single_main(opt, device_id, batch_queue, semaphore)
File "/workspace/OpenNMT-py/onmt/train_single.py", line 139, in main
valid_steps=opt.valid_steps)
File "/workspace/OpenNMT-py/onmt/trainer.py", line 224, in train
self._accum_batches(train_iter)):
File "/workspace/OpenNMT-py/onmt/trainer.py", line 162, in _accum_batches
for batch in iterator:
File "/workspace/OpenNMT-py/onmt/train_single.py", line 116, in _train_iter
batch = batch_queue.get()
File "/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "/opt/conda/envs/learn-dev/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 109, in rebuild_cuda_tensor
event_sync_required)
RuntimeError: CUDA error: unknown error
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `train.py`
Content:
```
1 #!/usr/bin/env python
2 """Train models."""
3 import os
4 import signal
5 import torch
6
7 import onmt.opts as opts
8 import onmt.utils.distributed
9
10 from onmt.utils.misc import set_random_seed
11 from onmt.utils.logging import init_logger, logger
12 from onmt.train_single import main as single_main
13 from onmt.utils.parse import ArgumentParser
14 from onmt.inputters.inputter import build_dataset_iter, \
15 load_old_vocab, old_style_vocab, build_dataset_iter_multiple
16
17 from itertools import cycle
18
19
20 def main(opt):
21 ArgumentParser.validate_train_opts(opt)
22 ArgumentParser.update_model_opts(opt)
23 ArgumentParser.validate_model_opts(opt)
24
25 # Load checkpoint if we resume from a previous training.
26 if opt.train_from:
27 logger.info('Loading checkpoint from %s' % opt.train_from)
28 checkpoint = torch.load(opt.train_from,
29 map_location=lambda storage, loc: storage)
30 logger.info('Loading vocab from checkpoint at %s.' % opt.train_from)
31 vocab = checkpoint['vocab']
32 else:
33 vocab = torch.load(opt.data + '.vocab.pt')
34
35 # check for code where vocab is saved instead of fields
36 # (in the future this will be done in a smarter way)
37 if old_style_vocab(vocab):
38 fields = load_old_vocab(
39 vocab, opt.model_type, dynamic_dict=opt.copy_attn)
40 else:
41 fields = vocab
42
43 if len(opt.data_ids) > 1:
44 train_shards = []
45 for train_id in opt.data_ids:
46 shard_base = "train_" + train_id
47 train_shards.append(shard_base)
48 train_iter = build_dataset_iter_multiple(train_shards, fields, opt)
49 else:
50 train_iter = build_dataset_iter("train", fields, opt)
51
52 nb_gpu = len(opt.gpu_ranks)
53
54 if opt.world_size > 1:
55 queues = []
56 mp = torch.multiprocessing.get_context('spawn')
57 semaphore = mp.Semaphore(opt.world_size * opt.queue_size)
58 # Create a thread to listen for errors in the child processes.
59 error_queue = mp.SimpleQueue()
60 error_handler = ErrorHandler(error_queue)
61 # Train with multiprocessing.
62 procs = []
63 for device_id in range(nb_gpu):
64 q = mp.Queue(opt.queue_size)
65 queues += [q]
66 procs.append(mp.Process(target=run, args=(
67 opt, device_id, error_queue, q, semaphore), daemon=True))
68 procs[device_id].start()
69 logger.info(" Starting process pid: %d " % procs[device_id].pid)
70 error_handler.add_child(procs[device_id].pid)
71 producer = mp.Process(target=batch_producer,
72 args=(train_iter, queues, semaphore, opt,),
73 daemon=True)
74 producer.start()
75 error_handler.add_child(producer.pid)
76
77 for p in procs:
78 p.join()
79 producer.terminate()
80
81 elif nb_gpu == 1: # case 1 GPU only
82 single_main(opt, 0)
83 else: # case only CPU
84 single_main(opt, -1)
85
86
87 def batch_producer(generator_to_serve, queues, semaphore, opt):
88 init_logger(opt.log_file)
89 set_random_seed(opt.seed, False)
90 # generator_to_serve = iter(generator_to_serve)
91
92 def pred(x):
93 """
94 Filters batches that belong only
95 to gpu_ranks of current node
96 """
97 for rank in opt.gpu_ranks:
98 if x[0] % opt.world_size == rank:
99 return True
100
101 generator_to_serve = filter(
102 pred, enumerate(generator_to_serve))
103
104 def next_batch(device_id):
105 new_batch = next(generator_to_serve)
106 semaphore.acquire()
107 return new_batch[1]
108
109 b = next_batch(0)
110
111 for device_id, q in cycle(enumerate(queues)):
112 b.dataset = None
113 if isinstance(b.src, tuple):
114 b.src = tuple([_.to(torch.device(device_id))
115 for _ in b.src])
116 else:
117 b.src = b.src.to(torch.device(device_id))
118 b.tgt = b.tgt.to(torch.device(device_id))
119 b.indices = b.indices.to(torch.device(device_id))
120 b.alignment = b.alignment.to(torch.device(device_id)) \
121 if hasattr(b, 'alignment') else None
122 b.src_map = b.src_map.to(torch.device(device_id)) \
123 if hasattr(b, 'src_map') else None
124
125 # hack to dodge unpicklable `dict_keys`
126 b.fields = list(b.fields)
127 q.put(b, False)
128 b = next_batch(device_id)
129
130
131 def run(opt, device_id, error_queue, batch_queue, semaphore):
132 """ run process """
133 try:
134 gpu_rank = onmt.utils.distributed.multi_init(opt, device_id)
135 if gpu_rank != opt.gpu_ranks[device_id]:
136 raise AssertionError("An error occurred in \
137 Distributed initialization")
138 single_main(opt, device_id, batch_queue, semaphore)
139 except KeyboardInterrupt:
140 pass # killed by parent, do nothing
141 except Exception:
142 # propagate exception to parent process, keeping original traceback
143 import traceback
144 error_queue.put((opt.gpu_ranks[device_id], traceback.format_exc()))
145
146
147 class ErrorHandler(object):
148 """A class that listens for exceptions in children processes and propagates
149 the tracebacks to the parent process."""
150
151 def __init__(self, error_queue):
152 """ init error handler """
153 import signal
154 import threading
155 self.error_queue = error_queue
156 self.children_pids = []
157 self.error_thread = threading.Thread(
158 target=self.error_listener, daemon=True)
159 self.error_thread.start()
160 signal.signal(signal.SIGUSR1, self.signal_handler)
161
162 def add_child(self, pid):
163 """ error handler """
164 self.children_pids.append(pid)
165
166 def error_listener(self):
167 """ error listener """
168 (rank, original_trace) = self.error_queue.get()
169 self.error_queue.put((rank, original_trace))
170 os.kill(os.getpid(), signal.SIGUSR1)
171
172 def signal_handler(self, signalnum, stackframe):
173 """ signal handler """
174 for pid in self.children_pids:
175 os.kill(pid, signal.SIGINT) # kill children processes
176 (rank, original_trace) = self.error_queue.get()
177 msg = """\n\n-- Tracebacks above this line can probably
178 be ignored --\n\n"""
179 msg += original_trace
180 raise Exception(msg)
181
182
183 def _get_parser():
184 parser = ArgumentParser(description='train.py')
185
186 opts.config_opts(parser)
187 opts.model_opts(parser)
188 opts.train_opts(parser)
189 return parser
190
191
192 if __name__ == "__main__":
193 parser = _get_parser()
194
195 opt = parser.parse_args()
196 main(opt)
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/train.py b/train.py
--- a/train.py
+++ b/train.py
@@ -124,7 +124,7 @@
# hack to dodge unpicklable `dict_keys`
b.fields = list(b.fields)
- q.put(b, False)
+ q.put(b)
b = next_batch(device_id)
| {"golden_diff": "diff --git a/train.py b/train.py\n--- a/train.py\n+++ b/train.py\n@@ -124,7 +124,7 @@\n \n # hack to dodge unpicklable `dict_keys`\n b.fields = list(b.fields)\n- q.put(b, False)\n+ q.put(b)\n b = next_batch(device_id)\n", "issue": "Queue full error - Multi-GPU 1M custom dataset\n```\r\nTraceback (most recent call last):\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py\", line 297, in _bootstrap\r\n self.run()\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py\", line 99, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/workspace/OpenNMT-py/train.py\", line 127, in batch_producer\r\n q.put(b, False)\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/queues.py\", line 83, in put\r\n raise Full\r\nqueue.Full\r\n[2019-06-04 07:32:18,291 INFO] Step 1200/100000; acc: 79.56; ppl: 1.82; xent: 0.60; lr: 1.00000; 33996/13919 tok/s; 401 sec\r\nTraceback (most recent call last):\r\n File \"train.py\", line 196, in <module>\r\n main(opt)\r\n File \"train.py\", line 78, in main\r\n p.join()\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/process.py\", line 140, in join\r\n res = self._popen.wait(timeout)\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/popen_fork.py\", line 48, in wait\r\n return self.poll(os.WNOHANG if timeout == 0.0 else 0)\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/popen_fork.py\", line 28, in poll\r\n pid, sts = os.waitpid(self.pid, flag)\r\n File \"train.py\", line 180, in signal_handler\r\n raise Exception(msg)\r\nException: \r\n\r\n-- Tracebacks above this line can probably\r\n be ignored --\r\n\r\nTraceback (most recent call last):\r\n File \"/workspace/OpenNMT-py/train.py\", line 138, in run\r\n single_main(opt, device_id, batch_queue, semaphore)\r\n File \"/workspace/OpenNMT-py/onmt/train_single.py\", line 139, in main\r\n valid_steps=opt.valid_steps)\r\n File \"/workspace/OpenNMT-py/onmt/trainer.py\", line 224, in train\r\n self._accum_batches(train_iter)):\r\n File \"/workspace/OpenNMT-py/onmt/trainer.py\", line 162, in _accum_batches\r\n for batch in iterator:\r\n File \"/workspace/OpenNMT-py/onmt/train_single.py\", line 116, in _train_iter\r\n batch = batch_queue.get()\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/multiprocessing/queues.py\", line 113, in get\r\n return _ForkingPickler.loads(res)\r\n File \"/opt/conda/envs/learn-dev/lib/python3.7/site-packages/torch/multiprocessing/reductions.py\", line 109, in rebuild_cuda_tensor\r\n event_sync_required)\r\nRuntimeError: CUDA error: unknown error\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Train models.\"\"\"\nimport os\nimport signal\nimport torch\n\nimport onmt.opts as opts\nimport onmt.utils.distributed\n\nfrom onmt.utils.misc import set_random_seed\nfrom onmt.utils.logging import init_logger, logger\nfrom onmt.train_single import main as single_main\nfrom onmt.utils.parse import ArgumentParser\nfrom onmt.inputters.inputter import build_dataset_iter, \\\n load_old_vocab, old_style_vocab, build_dataset_iter_multiple\n\nfrom itertools import cycle\n\n\ndef main(opt):\n ArgumentParser.validate_train_opts(opt)\n ArgumentParser.update_model_opts(opt)\n ArgumentParser.validate_model_opts(opt)\n\n # Load checkpoint if we resume from a previous training.\n if opt.train_from:\n logger.info('Loading checkpoint from %s' % opt.train_from)\n checkpoint = torch.load(opt.train_from,\n map_location=lambda storage, loc: storage)\n 
logger.info('Loading vocab from checkpoint at %s.' % opt.train_from)\n vocab = checkpoint['vocab']\n else:\n vocab = torch.load(opt.data + '.vocab.pt')\n\n # check for code where vocab is saved instead of fields\n # (in the future this will be done in a smarter way)\n if old_style_vocab(vocab):\n fields = load_old_vocab(\n vocab, opt.model_type, dynamic_dict=opt.copy_attn)\n else:\n fields = vocab\n\n if len(opt.data_ids) > 1:\n train_shards = []\n for train_id in opt.data_ids:\n shard_base = \"train_\" + train_id\n train_shards.append(shard_base)\n train_iter = build_dataset_iter_multiple(train_shards, fields, opt)\n else:\n train_iter = build_dataset_iter(\"train\", fields, opt)\n\n nb_gpu = len(opt.gpu_ranks)\n\n if opt.world_size > 1:\n queues = []\n mp = torch.multiprocessing.get_context('spawn')\n semaphore = mp.Semaphore(opt.world_size * opt.queue_size)\n # Create a thread to listen for errors in the child processes.\n error_queue = mp.SimpleQueue()\n error_handler = ErrorHandler(error_queue)\n # Train with multiprocessing.\n procs = []\n for device_id in range(nb_gpu):\n q = mp.Queue(opt.queue_size)\n queues += [q]\n procs.append(mp.Process(target=run, args=(\n opt, device_id, error_queue, q, semaphore), daemon=True))\n procs[device_id].start()\n logger.info(\" Starting process pid: %d \" % procs[device_id].pid)\n error_handler.add_child(procs[device_id].pid)\n producer = mp.Process(target=batch_producer,\n args=(train_iter, queues, semaphore, opt,),\n daemon=True)\n producer.start()\n error_handler.add_child(producer.pid)\n\n for p in procs:\n p.join()\n producer.terminate()\n\n elif nb_gpu == 1: # case 1 GPU only\n single_main(opt, 0)\n else: # case only CPU\n single_main(opt, -1)\n\n\ndef batch_producer(generator_to_serve, queues, semaphore, opt):\n init_logger(opt.log_file)\n set_random_seed(opt.seed, False)\n # generator_to_serve = iter(generator_to_serve)\n\n def pred(x):\n \"\"\"\n Filters batches that belong only\n to gpu_ranks of current node\n \"\"\"\n for rank in opt.gpu_ranks:\n if x[0] % opt.world_size == rank:\n return True\n\n generator_to_serve = filter(\n pred, enumerate(generator_to_serve))\n\n def next_batch(device_id):\n new_batch = next(generator_to_serve)\n semaphore.acquire()\n return new_batch[1]\n\n b = next_batch(0)\n\n for device_id, q in cycle(enumerate(queues)):\n b.dataset = None\n if isinstance(b.src, tuple):\n b.src = tuple([_.to(torch.device(device_id))\n for _ in b.src])\n else:\n b.src = b.src.to(torch.device(device_id))\n b.tgt = b.tgt.to(torch.device(device_id))\n b.indices = b.indices.to(torch.device(device_id))\n b.alignment = b.alignment.to(torch.device(device_id)) \\\n if hasattr(b, 'alignment') else None\n b.src_map = b.src_map.to(torch.device(device_id)) \\\n if hasattr(b, 'src_map') else None\n\n # hack to dodge unpicklable `dict_keys`\n b.fields = list(b.fields)\n q.put(b, False)\n b = next_batch(device_id)\n\n\ndef run(opt, device_id, error_queue, batch_queue, semaphore):\n \"\"\" run process \"\"\"\n try:\n gpu_rank = onmt.utils.distributed.multi_init(opt, device_id)\n if gpu_rank != opt.gpu_ranks[device_id]:\n raise AssertionError(\"An error occurred in \\\n Distributed initialization\")\n single_main(opt, device_id, batch_queue, semaphore)\n except KeyboardInterrupt:\n pass # killed by parent, do nothing\n except Exception:\n # propagate exception to parent process, keeping original traceback\n import traceback\n error_queue.put((opt.gpu_ranks[device_id], traceback.format_exc()))\n\n\nclass ErrorHandler(object):\n \"\"\"A class that 
listens for exceptions in children processes and propagates\n the tracebacks to the parent process.\"\"\"\n\n def __init__(self, error_queue):\n \"\"\" init error handler \"\"\"\n import signal\n import threading\n self.error_queue = error_queue\n self.children_pids = []\n self.error_thread = threading.Thread(\n target=self.error_listener, daemon=True)\n self.error_thread.start()\n signal.signal(signal.SIGUSR1, self.signal_handler)\n\n def add_child(self, pid):\n \"\"\" error handler \"\"\"\n self.children_pids.append(pid)\n\n def error_listener(self):\n \"\"\" error listener \"\"\"\n (rank, original_trace) = self.error_queue.get()\n self.error_queue.put((rank, original_trace))\n os.kill(os.getpid(), signal.SIGUSR1)\n\n def signal_handler(self, signalnum, stackframe):\n \"\"\" signal handler \"\"\"\n for pid in self.children_pids:\n os.kill(pid, signal.SIGINT) # kill children processes\n (rank, original_trace) = self.error_queue.get()\n msg = \"\"\"\\n\\n-- Tracebacks above this line can probably\n be ignored --\\n\\n\"\"\"\n msg += original_trace\n raise Exception(msg)\n\n\ndef _get_parser():\n parser = ArgumentParser(description='train.py')\n\n opts.config_opts(parser)\n opts.model_opts(parser)\n opts.train_opts(parser)\n return parser\n\n\nif __name__ == \"__main__\":\n parser = _get_parser()\n\n opt = parser.parse_args()\n main(opt)\n", "path": "train.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"Train models.\"\"\"\nimport os\nimport signal\nimport torch\n\nimport onmt.opts as opts\nimport onmt.utils.distributed\n\nfrom onmt.utils.misc import set_random_seed\nfrom onmt.utils.logging import init_logger, logger\nfrom onmt.train_single import main as single_main\nfrom onmt.utils.parse import ArgumentParser\nfrom onmt.inputters.inputter import build_dataset_iter, \\\n load_old_vocab, old_style_vocab, build_dataset_iter_multiple\n\nfrom itertools import cycle\n\n\ndef main(opt):\n ArgumentParser.validate_train_opts(opt)\n ArgumentParser.update_model_opts(opt)\n ArgumentParser.validate_model_opts(opt)\n\n # Load checkpoint if we resume from a previous training.\n if opt.train_from:\n logger.info('Loading checkpoint from %s' % opt.train_from)\n checkpoint = torch.load(opt.train_from,\n map_location=lambda storage, loc: storage)\n logger.info('Loading vocab from checkpoint at %s.' 
% opt.train_from)\n vocab = checkpoint['vocab']\n else:\n vocab = torch.load(opt.data + '.vocab.pt')\n\n # check for code where vocab is saved instead of fields\n # (in the future this will be done in a smarter way)\n if old_style_vocab(vocab):\n fields = load_old_vocab(\n vocab, opt.model_type, dynamic_dict=opt.copy_attn)\n else:\n fields = vocab\n\n if len(opt.data_ids) > 1:\n train_shards = []\n for train_id in opt.data_ids:\n shard_base = \"train_\" + train_id\n train_shards.append(shard_base)\n train_iter = build_dataset_iter_multiple(train_shards, fields, opt)\n else:\n train_iter = build_dataset_iter(\"train\", fields, opt)\n\n nb_gpu = len(opt.gpu_ranks)\n\n if opt.world_size > 1:\n queues = []\n mp = torch.multiprocessing.get_context('spawn')\n semaphore = mp.Semaphore(opt.world_size * opt.queue_size)\n # Create a thread to listen for errors in the child processes.\n error_queue = mp.SimpleQueue()\n error_handler = ErrorHandler(error_queue)\n # Train with multiprocessing.\n procs = []\n for device_id in range(nb_gpu):\n q = mp.Queue(opt.queue_size)\n queues += [q]\n procs.append(mp.Process(target=run, args=(\n opt, device_id, error_queue, q, semaphore), daemon=True))\n procs[device_id].start()\n logger.info(\" Starting process pid: %d \" % procs[device_id].pid)\n error_handler.add_child(procs[device_id].pid)\n producer = mp.Process(target=batch_producer,\n args=(train_iter, queues, semaphore, opt,),\n daemon=True)\n producer.start()\n error_handler.add_child(producer.pid)\n\n for p in procs:\n p.join()\n producer.terminate()\n\n elif nb_gpu == 1: # case 1 GPU only\n single_main(opt, 0)\n else: # case only CPU\n single_main(opt, -1)\n\n\ndef batch_producer(generator_to_serve, queues, semaphore, opt):\n init_logger(opt.log_file)\n set_random_seed(opt.seed, False)\n # generator_to_serve = iter(generator_to_serve)\n\n def pred(x):\n \"\"\"\n Filters batches that belong only\n to gpu_ranks of current node\n \"\"\"\n for rank in opt.gpu_ranks:\n if x[0] % opt.world_size == rank:\n return True\n\n generator_to_serve = filter(\n pred, enumerate(generator_to_serve))\n\n def next_batch(device_id):\n new_batch = next(generator_to_serve)\n semaphore.acquire()\n return new_batch[1]\n\n b = next_batch(0)\n\n for device_id, q in cycle(enumerate(queues)):\n b.dataset = None\n if isinstance(b.src, tuple):\n b.src = tuple([_.to(torch.device(device_id))\n for _ in b.src])\n else:\n b.src = b.src.to(torch.device(device_id))\n b.tgt = b.tgt.to(torch.device(device_id))\n b.indices = b.indices.to(torch.device(device_id))\n b.alignment = b.alignment.to(torch.device(device_id)) \\\n if hasattr(b, 'alignment') else None\n b.src_map = b.src_map.to(torch.device(device_id)) \\\n if hasattr(b, 'src_map') else None\n\n # hack to dodge unpicklable `dict_keys`\n b.fields = list(b.fields)\n q.put(b)\n b = next_batch(device_id)\n\n\ndef run(opt, device_id, error_queue, batch_queue, semaphore):\n \"\"\" run process \"\"\"\n try:\n gpu_rank = onmt.utils.distributed.multi_init(opt, device_id)\n if gpu_rank != opt.gpu_ranks[device_id]:\n raise AssertionError(\"An error occurred in \\\n Distributed initialization\")\n single_main(opt, device_id, batch_queue, semaphore)\n except KeyboardInterrupt:\n pass # killed by parent, do nothing\n except Exception:\n # propagate exception to parent process, keeping original traceback\n import traceback\n error_queue.put((opt.gpu_ranks[device_id], traceback.format_exc()))\n\n\nclass ErrorHandler(object):\n \"\"\"A class that listens for exceptions in children processes and 
propagates\n the tracebacks to the parent process.\"\"\"\n\n def __init__(self, error_queue):\n \"\"\" init error handler \"\"\"\n import signal\n import threading\n self.error_queue = error_queue\n self.children_pids = []\n self.error_thread = threading.Thread(\n target=self.error_listener, daemon=True)\n self.error_thread.start()\n signal.signal(signal.SIGUSR1, self.signal_handler)\n\n def add_child(self, pid):\n \"\"\" error handler \"\"\"\n self.children_pids.append(pid)\n\n def error_listener(self):\n \"\"\" error listener \"\"\"\n (rank, original_trace) = self.error_queue.get()\n self.error_queue.put((rank, original_trace))\n os.kill(os.getpid(), signal.SIGUSR1)\n\n def signal_handler(self, signalnum, stackframe):\n \"\"\" signal handler \"\"\"\n for pid in self.children_pids:\n os.kill(pid, signal.SIGINT) # kill children processes\n (rank, original_trace) = self.error_queue.get()\n msg = \"\"\"\\n\\n-- Tracebacks above this line can probably\n be ignored --\\n\\n\"\"\"\n msg += original_trace\n raise Exception(msg)\n\n\ndef _get_parser():\n parser = ArgumentParser(description='train.py')\n\n opts.config_opts(parser)\n opts.model_opts(parser)\n opts.train_opts(parser)\n return parser\n\n\nif __name__ == \"__main__\":\n parser = _get_parser()\n\n opt = parser.parse_args()\n main(opt)\n", "path": "train.py"}]} | 2,965 | 78 |
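
The one-line fix above works because `Queue.put(item, False)` is a non-blocking put: it raises `queue.Full` as soon as the consumers fall behind, which is the first traceback in the issue, whereas a plain blocking `put()` simply waits for a free slot (back-pressure that the surrounding semaphore already bounds). A tiny single-process illustration with the standard-library `queue.Queue`, which shares the same `Full` semantics as the `multiprocessing` queue in the record:

```python
import queue

q = queue.Queue(maxsize=1)
q.put("batch-0")                 # fills the queue

try:
    q.put("batch-1", False)      # non-blocking put on a full queue
except queue.Full:
    print("queue.Full raised, as in the producer traceback")

# The blocking form used by the fix would instead wait here until a
# consumer calls q.get() and frees a slot:
# q.put("batch-1")
```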
gh_patches_debug_27459 | rasdani/github-patches | git_diff | NVIDIA__apex-590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SyncBatchNorm doesn't support 2 dimensions input?
Hi,
I'm facing an issue where the program crashes when the input to SyncBatchNorm is two-dimensional. Here's the code:
```python
import torch
import apex
model = apex.parallel.SyncBatchNorm(4).cuda()
data = torch.rand((8,4)).cuda()
output = model(data)
```
When running the code, an error is raised like this:
```
Traceback (most recent call last):
File "syncbn_test.by", line 7, in <module>
output = model(data)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/apex/parallel/optimized_sync_batchnorm.py", line 81, in forward
return SyncBatchnormFunction.apply(input, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last)
File "/usr/local/lib/python3.5/dist-packages/apex/parallel/optimized_sync_batchnorm_kernel.py", line 27, in forward
mean, var_biased = syncbn.welford_mean_var(input)
RuntimeError: Dimension out of range (expected to be in range of [-2, 1], but got 2) (maybe_wrap_dim at /pytorch/aten/src/ATen/core/WrapDimMinimal.h:18)
```
And everything runs ok when `data` is a 4-dim tensor.
Here is my environment:
```
Ubuntu 16.04
Python 3.5.2
Pytorch 1.0.1, installed with "pip install torch"
apex is installed with command:
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .
cuda 10.0
nvidia driver 410.72
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apex/parallel/optimized_sync_batchnorm.py`
Content:
```
1 import torch
2 from torch.nn.modules.batchnorm import _BatchNorm
3 from torch.nn import functional as F
4
5 import syncbn
6 from .optimized_sync_batchnorm_kernel import SyncBatchnormFunction
7
8
9 class SyncBatchNorm(_BatchNorm):
10 """
11 synchronized batch normalization module extented from `torch.nn.BatchNormNd`
12 with the added stats reduction across multiple processes.
13 :class:`apex.parallel.SyncBatchNorm` is designed to work with
14 `DistributedDataParallel`.
15
16 When running in training mode, the layer reduces stats across all processes
17 to increase the effective batchsize for normalization layer. This is useful
18 in applications where batch size is small on a given process that would
19 diminish converged accuracy of the model. The model uses collective
20 communication package from `torch.distributed`.
21
22 When running in evaluation mode, the layer falls back to
23 `torch.nn.functional.batch_norm`
24
25 Args:
26 num_features: :math:`C` from an expected input of size
27 :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)`
28 eps: a value added to the denominator for numerical stability.
29 Default: 1e-5
30 momentum: the value used for the running_mean and running_var
31 computation. Can be set to ``None`` for cumulative moving average
32 (i.e. simple average). Default: 0.1
33 affine: a boolean value that when set to ``True``, this module has
34 learnable affine parameters. Default: ``True``
35 track_running_stats: a boolean value that when set to ``True``, this
36 module tracks the running mean and variance, and when set to ``False``,
37 this module does not track such statistics and always uses batch
38 statistics in both training and eval modes. Default: ``True``
39 process_group: pass in a process group within which the stats of the
40 mini-batch is being synchronized. ``None`` for using default process
41 group
42 channel_last: a boolean value that when set to ``True``, this module
43 take the last dimension of the input tensor to be the channel
44 dimension. Default: False
45
46 Examples::
47 >>> # channel first tensor
48 >>> sbn = apex.parallel.SyncBatchNorm(100).cuda()
49 >>> inp = torch.randn(10, 100, 14, 14).cuda()
50 >>> out = sbn(inp)
51 >>> inp = torch.randn(3, 100, 20).cuda()
52 >>> out = sbn(inp)
53 >>> # channel last tensor
54 >>> sbn = apex.parallel.SyncBatchNorm(100, channel_last=True).cuda()
55 >>> inp = torch.randn(10, 14, 14, 100).cuda()
56 """
57
58 def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True, process_group=None, channel_last=False, fuse_relu=False):
59 super(SyncBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, track_running_stats=track_running_stats)
60 self.process_group = process_group
61 self.channel_last = channel_last
62 self.fuse_relu = fuse_relu
63
64 def _specify_process_group(self, process_group):
65 self.process_group = process_group
66
67 def _specify_channel_last(self, channel_last):
68 self.channel_last = channel_last
69
70 def forward(self, input, z = None):
71 # if input.dim() == 2, we switch to channel_last for efficient memory accessing
72 channel_last = self.channel_last if input.dim() != 2 else True
73
74 if not self.training and self.track_running_stats and not self.channel_last and not self.fuse_relu and z == None:
75 # fall back to pytorch implementation for inference
76 return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, False, 0.0, self.eps)
77 else:
78 exponential_average_factor = 0.0
79 if self.training and self.track_running_stats:
80 self.num_batches_tracked += 1
81 if self.momentum is None:
82 exponential_average_factor = 1.0 / float(self.num_batches_tracked)
83 else:
84 exponential_average_factor = self.momentum
85 return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last, self.fuse_relu)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apex/parallel/optimized_sync_batchnorm.py b/apex/parallel/optimized_sync_batchnorm.py
--- a/apex/parallel/optimized_sync_batchnorm.py
+++ b/apex/parallel/optimized_sync_batchnorm.py
@@ -71,7 +71,7 @@
# if input.dim() == 2, we switch to channel_last for efficient memory accessing
channel_last = self.channel_last if input.dim() != 2 else True
- if not self.training and self.track_running_stats and not self.channel_last and not self.fuse_relu and z == None:
+ if not self.training and self.track_running_stats and not channel_last and not self.fuse_relu and z == None:
# fall back to pytorch implementation for inference
return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, False, 0.0, self.eps)
else:
@@ -82,4 +82,4 @@
exponential_average_factor = 1.0 / float(self.num_batches_tracked)
else:
exponential_average_factor = self.momentum
- return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last, self.fuse_relu)
+ return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, channel_last, self.fuse_relu)
| {"golden_diff": "diff --git a/apex/parallel/optimized_sync_batchnorm.py b/apex/parallel/optimized_sync_batchnorm.py\n--- a/apex/parallel/optimized_sync_batchnorm.py\n+++ b/apex/parallel/optimized_sync_batchnorm.py\n@@ -71,7 +71,7 @@\n # if input.dim() == 2, we switch to channel_last for efficient memory accessing\n channel_last = self.channel_last if input.dim() != 2 else True\n \n- if not self.training and self.track_running_stats and not self.channel_last and not self.fuse_relu and z == None:\n+ if not self.training and self.track_running_stats and not channel_last and not self.fuse_relu and z == None:\n # fall back to pytorch implementation for inference\n return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, False, 0.0, self.eps)\n else:\n@@ -82,4 +82,4 @@\n exponential_average_factor = 1.0 / float(self.num_batches_tracked)\n else:\n exponential_average_factor = self.momentum\n- return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last, self.fuse_relu)\n+ return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, channel_last, self.fuse_relu)\n", "issue": "SyncBatchNorm doesn't support 2 dimensions input?\nHi,\r\nI'm facing the issue that the program crash when the input for SyncBatchNorm is two dimensions. Here's the code:\r\n```python\r\nimport torch\r\nimport apex\r\n\r\nmodel = apex.parallel.SyncBatchNorm(4).cuda()\r\ndata = torch.rand((8,4)).cuda()\r\noutput = model(data)\r\n```\r\nWhen running the code, error raised like this:\r\n```\r\nTraceback (most recent call last):\r\n File \"syncbn_test.by\", line 7, in <module>\r\n output = model(data)\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/apex/parallel/optimized_sync_batchnorm.py\", line 81, in forward\r\n return SyncBatchnormFunction.apply(input, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last)\r\n File \"/usr/local/lib/python3.5/dist-packages/apex/parallel/optimized_sync_batchnorm_kernel.py\", line 27, in forward\r\n mean, var_biased = syncbn.welford_mean_var(input)\r\nRuntimeError: Dimension out of range (expected to be in range of [-2, 1], but got 2) (maybe_wrap_dim at /pytorch/aten/src/ATen/core/WrapDimMinimal.h:18)\r\n```\r\nAnd everthing runs ok when `data` a 4 dims tensor. 
\r\n\r\nHere is my environment:\r\n```\r\nUbuntu 16.04\r\nPython 3.5.2\r\nPytorch 1.0.1, installed with \"pip install torch\"\r\napex is installed with command:\r\n pip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" .\r\ncuda 10.0\r\nnvidia driver 410.72\r\n```\n", "before_files": [{"content": "import torch\nfrom torch.nn.modules.batchnorm import _BatchNorm\nfrom torch.nn import functional as F\n\nimport syncbn\nfrom .optimized_sync_batchnorm_kernel import SyncBatchnormFunction\n\n\nclass SyncBatchNorm(_BatchNorm):\n \"\"\"\n synchronized batch normalization module extented from `torch.nn.BatchNormNd`\n with the added stats reduction across multiple processes.\n :class:`apex.parallel.SyncBatchNorm` is designed to work with\n `DistributedDataParallel`.\n\n When running in training mode, the layer reduces stats across all processes\n to increase the effective batchsize for normalization layer. This is useful\n in applications where batch size is small on a given process that would\n diminish converged accuracy of the model. The model uses collective\n communication package from `torch.distributed`.\n\n When running in evaluation mode, the layer falls back to\n `torch.nn.functional.batch_norm`\n\n Args:\n num_features: :math:`C` from an expected input of size\n :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)`\n eps: a value added to the denominator for numerical stability.\n Default: 1e-5\n momentum: the value used for the running_mean and running_var\n computation. Can be set to ``None`` for cumulative moving average\n (i.e. simple average). Default: 0.1\n affine: a boolean value that when set to ``True``, this module has\n learnable affine parameters. Default: ``True``\n track_running_stats: a boolean value that when set to ``True``, this\n module tracks the running mean and variance, and when set to ``False``,\n this module does not track such statistics and always uses batch\n statistics in both training and eval modes. Default: ``True``\n process_group: pass in a process group within which the stats of the\n mini-batch is being synchronized. ``None`` for using default process\n group\n channel_last: a boolean value that when set to ``True``, this module\n take the last dimension of the input tensor to be the channel\n dimension. 
Default: False\n\n Examples::\n >>> # channel first tensor\n >>> sbn = apex.parallel.SyncBatchNorm(100).cuda()\n >>> inp = torch.randn(10, 100, 14, 14).cuda()\n >>> out = sbn(inp)\n >>> inp = torch.randn(3, 100, 20).cuda()\n >>> out = sbn(inp)\n >>> # channel last tensor\n >>> sbn = apex.parallel.SyncBatchNorm(100, channel_last=True).cuda()\n >>> inp = torch.randn(10, 14, 14, 100).cuda()\n \"\"\"\n\n def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True, process_group=None, channel_last=False, fuse_relu=False):\n super(SyncBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, track_running_stats=track_running_stats)\n self.process_group = process_group\n self.channel_last = channel_last\n self.fuse_relu = fuse_relu\n\n def _specify_process_group(self, process_group):\n self.process_group = process_group\n\n def _specify_channel_last(self, channel_last):\n self.channel_last = channel_last\n\n def forward(self, input, z = None):\n # if input.dim() == 2, we switch to channel_last for efficient memory accessing\n channel_last = self.channel_last if input.dim() != 2 else True\n\n if not self.training and self.track_running_stats and not self.channel_last and not self.fuse_relu and z == None:\n # fall back to pytorch implementation for inference\n return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, False, 0.0, self.eps)\n else:\n exponential_average_factor = 0.0\n if self.training and self.track_running_stats:\n self.num_batches_tracked += 1\n if self.momentum is None:\n exponential_average_factor = 1.0 / float(self.num_batches_tracked)\n else:\n exponential_average_factor = self.momentum\n return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, self.channel_last, self.fuse_relu)\n", "path": "apex/parallel/optimized_sync_batchnorm.py"}], "after_files": [{"content": "import torch\nfrom torch.nn.modules.batchnorm import _BatchNorm\nfrom torch.nn import functional as F\n\nimport syncbn\nfrom .optimized_sync_batchnorm_kernel import SyncBatchnormFunction\n\n\nclass SyncBatchNorm(_BatchNorm):\n \"\"\"\n synchronized batch normalization module extented from `torch.nn.BatchNormNd`\n with the added stats reduction across multiple processes.\n :class:`apex.parallel.SyncBatchNorm` is designed to work with\n `DistributedDataParallel`.\n\n When running in training mode, the layer reduces stats across all processes\n to increase the effective batchsize for normalization layer. This is useful\n in applications where batch size is small on a given process that would\n diminish converged accuracy of the model. The model uses collective\n communication package from `torch.distributed`.\n\n When running in evaluation mode, the layer falls back to\n `torch.nn.functional.batch_norm`\n\n Args:\n num_features: :math:`C` from an expected input of size\n :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)`\n eps: a value added to the denominator for numerical stability.\n Default: 1e-5\n momentum: the value used for the running_mean and running_var\n computation. Can be set to ``None`` for cumulative moving average\n (i.e. simple average). Default: 0.1\n affine: a boolean value that when set to ``True``, this module has\n learnable affine parameters. 
Default: ``True``\n track_running_stats: a boolean value that when set to ``True``, this\n module tracks the running mean and variance, and when set to ``False``,\n this module does not track such statistics and always uses batch\n statistics in both training and eval modes. Default: ``True``\n process_group: pass in a process group within which the stats of the\n mini-batch is being synchronized. ``None`` for using default process\n group\n channel_last: a boolean value that when set to ``True``, this module\n take the last dimension of the input tensor to be the channel\n dimension. Default: False\n\n Examples::\n >>> # channel first tensor\n >>> sbn = apex.parallel.SyncBatchNorm(100).cuda()\n >>> inp = torch.randn(10, 100, 14, 14).cuda()\n >>> out = sbn(inp)\n >>> inp = torch.randn(3, 100, 20).cuda()\n >>> out = sbn(inp)\n >>> # channel last tensor\n >>> sbn = apex.parallel.SyncBatchNorm(100, channel_last=True).cuda()\n >>> inp = torch.randn(10, 14, 14, 100).cuda()\n \"\"\"\n\n def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True, process_group=None, channel_last=False, fuse_relu=False):\n super(SyncBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine, track_running_stats=track_running_stats)\n self.process_group = process_group\n self.channel_last = channel_last\n self.fuse_relu = fuse_relu\n\n def _specify_process_group(self, process_group):\n self.process_group = process_group\n\n def _specify_channel_last(self, channel_last):\n self.channel_last = channel_last\n\n def forward(self, input, z = None):\n # if input.dim() == 2, we switch to channel_last for efficient memory accessing\n channel_last = self.channel_last if input.dim() != 2 else True\n\n if not self.training and self.track_running_stats and not channel_last and not self.fuse_relu and z == None:\n # fall back to pytorch implementation for inference\n return F.batch_norm(input, self.running_mean, self.running_var, self.weight, self.bias, False, 0.0, self.eps)\n else:\n exponential_average_factor = 0.0\n if self.training and self.track_running_stats:\n self.num_batches_tracked += 1\n if self.momentum is None:\n exponential_average_factor = 1.0 / float(self.num_batches_tracked)\n else:\n exponential_average_factor = self.momentum\n return SyncBatchnormFunction.apply(input, z, self.weight, self.bias, self.running_mean, self.running_var, self.eps, self.training or not self.track_running_stats, exponential_average_factor, self.process_group, channel_last, self.fuse_relu)\n", "path": "apex/parallel/optimized_sync_batchnorm.py"}]} | 1,869 | 349 |
gh_patches_debug_23483 | rasdani/github-patches | git_diff | getredash__redash-3088 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Access current user id/details from Python data source
First of all, thanks to the authors and contributors for a very interesting and promising tool.
### Question:
Is it possible to access current user id/details from Python data source?
I would like to be able to control access to the viewed data at row level without the need to create and maintain multiple (in my case about 80) data sources and dashboards. My desired scenario:
1. Store A logs into their account and accesses dashboard with KPI/Metrics.
2. Dashboard calls Python data source(s).
3. Python code loads data from a db table with all stores, identifies current user and filters out all records where store != Store A.
4. Dashboard is dynamically loaded with entries for Store A only.
Also if there are any other ways to achieve the above, please advise.
### Technical details:
* Redash Version: 0.12.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/python.py`
Content:
```
1 import datetime
2 import importlib
3 import logging
4 import sys
5
6 from redash.query_runner import *
7 from redash.utils import json_dumps, json_loads
8 from redash import models
9 from RestrictedPython import compile_restricted
10 from RestrictedPython.Guards import safe_builtins
11
12
13 logger = logging.getLogger(__name__)
14
15
16 class CustomPrint(object):
17 """CustomPrint redirect "print" calls to be sent as "log" on the result object."""
18 def __init__(self):
19 self.enabled = True
20 self.lines = []
21
22 def write(self, text):
23 if self.enabled:
24 if text and text.strip():
25 log_line = "[{0}] {1}".format(datetime.datetime.utcnow().isoformat(), text)
26 self.lines.append(log_line)
27
28 def enable(self):
29 self.enabled = True
30
31 def disable(self):
32 self.enabled = False
33
34 def __call__(self):
35 return self
36
37
38 class Python(BaseQueryRunner):
39 safe_builtins = (
40 'sorted', 'reversed', 'map', 'reduce', 'any', 'all',
41 'slice', 'filter', 'len', 'next', 'enumerate',
42 'sum', 'abs', 'min', 'max', 'round', 'cmp', 'divmod',
43 'str', 'unicode', 'int', 'float', 'complex',
44 'tuple', 'set', 'list', 'dict', 'bool',
45 )
46
47 @classmethod
48 def configuration_schema(cls):
49 return {
50 'type': 'object',
51 'properties': {
52 'allowedImportModules': {
53 'type': 'string',
54 'title': 'Modules to import prior to running the script'
55 },
56 'additionalModulesPaths': {
57 'type': 'string'
58 }
59 },
60 }
61
62 @classmethod
63 def enabled(cls):
64 return True
65
66 @classmethod
67 def annotate_query(cls):
68 return False
69
70 def __init__(self, configuration):
71 super(Python, self).__init__(configuration)
72
73 self.syntax = "python"
74
75 self._allowed_modules = {}
76 self._script_locals = {"result": {"rows": [], "columns": [], "log": []}}
77 self._enable_print_log = True
78 self._custom_print = CustomPrint()
79
80 if self.configuration.get("allowedImportModules", None):
81 for item in self.configuration["allowedImportModules"].split(","):
82 self._allowed_modules[item] = None
83
84 if self.configuration.get("additionalModulesPaths", None):
85 for p in self.configuration["additionalModulesPaths"].split(","):
86 if p not in sys.path:
87 sys.path.append(p)
88
89 def custom_import(self, name, globals=None, locals=None, fromlist=(), level=0):
90 if name in self._allowed_modules:
91 m = None
92 if self._allowed_modules[name] is None:
93 m = importlib.import_module(name)
94 self._allowed_modules[name] = m
95 else:
96 m = self._allowed_modules[name]
97
98 return m
99
100 raise Exception("'{0}' is not configured as a supported import module".format(name))
101
102 @staticmethod
103 def custom_write(obj):
104 """
105 Custom hooks which controls the way objects/lists/tuples/dicts behave in
106 RestrictedPython
107 """
108 return obj
109
110 @staticmethod
111 def custom_get_item(obj, key):
112 return obj[key]
113
114 @staticmethod
115 def custom_get_iter(obj):
116 return iter(obj)
117
118 @staticmethod
119 def add_result_column(result, column_name, friendly_name, column_type):
120 """Helper function to add columns inside a Python script running in Redash in an easier way
121
122 Parameters:
123 :result dict: The result dict
124 :column_name string: Name of the column, which should be consisted of lowercase latin letters or underscore.
125 :friendly_name string: Name of the column for display
126 :column_type string: Type of the column. Check supported data types for details.
127 """
128 if column_type not in SUPPORTED_COLUMN_TYPES:
129 raise Exception("'{0}' is not a supported column type".format(column_type))
130
131 if "columns" not in result:
132 result["columns"] = []
133
134 result["columns"].append({
135 "name": column_name,
136 "friendly_name": friendly_name,
137 "type": column_type
138 })
139
140 @staticmethod
141 def add_result_row(result, values):
142 """Helper function to add one row to results set.
143
144 Parameters:
145 :result dict: The result dict
146 :values dict: One row of result in dict. The key should be one of the column names. The value is the value of the column in this row.
147 """
148 if "rows" not in result:
149 result["rows"] = []
150
151 result["rows"].append(values)
152
153 @staticmethod
154 def execute_query(data_source_name_or_id, query):
155 """Run query from specific data source.
156
157 Parameters:
158 :data_source_name_or_id string|integer: Name or ID of the data source
159 :query string: Query to run
160 """
161 try:
162 if type(data_source_name_or_id) == int:
163 data_source = models.DataSource.get_by_id(data_source_name_or_id)
164 else:
165 data_source = models.DataSource.get_by_name(data_source_name_or_id)
166 except models.NoResultFound:
167 raise Exception("Wrong data source name/id: %s." % data_source_name_or_id)
168
169 # TODO: pass the user here...
170 data, error = data_source.query_runner.run_query(query, None)
171 if error is not None:
172 raise Exception(error)
173
174 # TODO: allow avoiding the JSON dumps/loads in same process
175 return json_loads(data)
176
177 @staticmethod
178 def get_source_schema(data_source_name_or_id):
179 """Get schema from specific data source.
180
181 :param data_source_name_or_id: string|integer: Name or ID of the data source
182 :return:
183 """
184 try:
185 if type(data_source_name_or_id) == int:
186 data_source = models.DataSource.get_by_id(data_source_name_or_id)
187 else:
188 data_source = models.DataSource.get_by_name(data_source_name_or_id)
189 except models.NoResultFound:
190 raise Exception("Wrong data source name/id: %s." % data_source_name_or_id)
191 schema = data_source.query_runner.get_schema()
192 return schema
193
194 @staticmethod
195 def get_query_result(query_id):
196 """Get result of an existing query.
197
198 Parameters:
199 :query_id integer: ID of existing query
200 """
201 try:
202 query = models.Query.get_by_id(query_id)
203 except models.NoResultFound:
204 raise Exception("Query id %s does not exist." % query_id)
205
206 if query.latest_query_data is None:
207 raise Exception("Query does not have results yet.")
208
209 if query.latest_query_data.data is None:
210 raise Exception("Query does not have results yet.")
211
212 return json_loads(query.latest_query_data.data)
213
214 def test_connection(self):
215 pass
216
217 def run_query(self, query, user):
218 try:
219 error = None
220
221 code = compile_restricted(query, '<string>', 'exec')
222
223 builtins = safe_builtins.copy()
224 builtins["_write_"] = self.custom_write
225 builtins["__import__"] = self.custom_import
226 builtins["_getattr_"] = getattr
227 builtins["getattr"] = getattr
228 builtins["_setattr_"] = setattr
229 builtins["setattr"] = setattr
230 builtins["_getitem_"] = self.custom_get_item
231 builtins["_getiter_"] = self.custom_get_iter
232 builtins["_print_"] = self._custom_print
233
234 # Layer in our own additional set of builtins that we have
235 # considered safe.
236 for key in self.safe_builtins:
237 builtins[key] = __builtins__[key]
238
239 restricted_globals = dict(__builtins__=builtins)
240 restricted_globals["get_query_result"] = self.get_query_result
241 restricted_globals["get_source_schema"] = self.get_source_schema
242 restricted_globals["execute_query"] = self.execute_query
243 restricted_globals["add_result_column"] = self.add_result_column
244 restricted_globals["add_result_row"] = self.add_result_row
245 restricted_globals["disable_print_log"] = self._custom_print.disable
246 restricted_globals["enable_print_log"] = self._custom_print.enable
247
248 # Supported data types
249 restricted_globals["TYPE_DATETIME"] = TYPE_DATETIME
250 restricted_globals["TYPE_BOOLEAN"] = TYPE_BOOLEAN
251 restricted_globals["TYPE_INTEGER"] = TYPE_INTEGER
252 restricted_globals["TYPE_STRING"] = TYPE_STRING
253 restricted_globals["TYPE_DATE"] = TYPE_DATE
254 restricted_globals["TYPE_FLOAT"] = TYPE_FLOAT
255
256
257 # TODO: Figure out the best way to have a timeout on a script
258 # One option is to use ETA with Celery + timeouts on workers
259 # And replacement of worker process every X requests handled.
260
261 exec((code), restricted_globals, self._script_locals)
262
263 result = self._script_locals['result']
264 result['log'] = self._custom_print.lines
265 json_data = json_dumps(result)
266 except KeyboardInterrupt:
267 error = "Query cancelled by user."
268 json_data = None
269 except Exception as e:
270 error = str(type(e)) + " " + str(e)
271 json_data = None
272
273 return json_data, error
274
275
276 register(Python)
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/query_runner/python.py b/redash/query_runner/python.py
--- a/redash/query_runner/python.py
+++ b/redash/query_runner/python.py
@@ -211,10 +211,15 @@
return json_loads(query.latest_query_data.data)
+ def get_current_user(self):
+ return self._current_user.to_dict()
+
def test_connection(self):
pass
def run_query(self, query, user):
+ self._current_user = user
+
try:
error = None
@@ -239,6 +244,7 @@
restricted_globals = dict(__builtins__=builtins)
restricted_globals["get_query_result"] = self.get_query_result
restricted_globals["get_source_schema"] = self.get_source_schema
+ restricted_globals["get_current_user"] = self.get_current_user
restricted_globals["execute_query"] = self.execute_query
restricted_globals["add_result_column"] = self.add_result_column
restricted_globals["add_result_row"] = self.add_result_row
| {"golden_diff": "diff --git a/redash/query_runner/python.py b/redash/query_runner/python.py\n--- a/redash/query_runner/python.py\n+++ b/redash/query_runner/python.py\n@@ -211,10 +211,15 @@\n \n return json_loads(query.latest_query_data.data)\n \n+ def get_current_user(self):\n+ return self._current_user.to_dict()\n+\n def test_connection(self):\n pass\n \n def run_query(self, query, user):\n+ self._current_user = user\n+\n try:\n error = None\n \n@@ -239,6 +244,7 @@\n restricted_globals = dict(__builtins__=builtins)\n restricted_globals[\"get_query_result\"] = self.get_query_result\n restricted_globals[\"get_source_schema\"] = self.get_source_schema\n+ restricted_globals[\"get_current_user\"] = self.get_current_user\n restricted_globals[\"execute_query\"] = self.execute_query\n restricted_globals[\"add_result_column\"] = self.add_result_column\n restricted_globals[\"add_result_row\"] = self.add_result_row\n", "issue": "Access current user id/details from Python data source \nFirst of all, thanks to the authors and contributors for a very interesting and promising tool. \r\n### Question:\r\nIs it possible to access current user id/details from Python data source?\r\nI would like to be able to control access to the viewed data at row level without a need of creating and maintaining multiple (in my case about 80) data sources and dashboards. My desired scenario:\r\n 1. Store A logs into their account and accesses dashboard with KPI/Metrics.\r\n 2. Dashboard calls Python data source(s).\r\n 3. Python code loads data from a db table with all stores, identifies current user and filters out all records where store != Store A.\r\n 4. Dashboard is dynamically loaded with entries for Store A only.\r\n\r\nAlso if there are any other ways to achieve the above, please advise. 
\r\n\r\n### Technical details:\r\n* Redash Version: 0.12.0\n", "before_files": [{"content": "import datetime\nimport importlib\nimport logging\nimport sys\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\nfrom redash import models\nfrom RestrictedPython import compile_restricted\nfrom RestrictedPython.Guards import safe_builtins\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CustomPrint(object):\n \"\"\"CustomPrint redirect \"print\" calls to be sent as \"log\" on the result object.\"\"\"\n def __init__(self):\n self.enabled = True\n self.lines = []\n\n def write(self, text):\n if self.enabled:\n if text and text.strip():\n log_line = \"[{0}] {1}\".format(datetime.datetime.utcnow().isoformat(), text)\n self.lines.append(log_line)\n\n def enable(self):\n self.enabled = True\n\n def disable(self):\n self.enabled = False\n\n def __call__(self):\n return self\n\n\nclass Python(BaseQueryRunner):\n safe_builtins = (\n 'sorted', 'reversed', 'map', 'reduce', 'any', 'all',\n 'slice', 'filter', 'len', 'next', 'enumerate',\n 'sum', 'abs', 'min', 'max', 'round', 'cmp', 'divmod',\n 'str', 'unicode', 'int', 'float', 'complex',\n 'tuple', 'set', 'list', 'dict', 'bool',\n )\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'allowedImportModules': {\n 'type': 'string',\n 'title': 'Modules to import prior to running the script'\n },\n 'additionalModulesPaths': {\n 'type': 'string'\n }\n },\n }\n\n @classmethod\n def enabled(cls):\n return True\n\n @classmethod\n def annotate_query(cls):\n return False\n\n def __init__(self, configuration):\n super(Python, self).__init__(configuration)\n\n self.syntax = \"python\"\n\n self._allowed_modules = {}\n self._script_locals = {\"result\": {\"rows\": [], \"columns\": [], \"log\": []}}\n self._enable_print_log = True\n self._custom_print = CustomPrint()\n\n if self.configuration.get(\"allowedImportModules\", None):\n for item in self.configuration[\"allowedImportModules\"].split(\",\"):\n self._allowed_modules[item] = None\n\n if self.configuration.get(\"additionalModulesPaths\", None):\n for p in self.configuration[\"additionalModulesPaths\"].split(\",\"):\n if p not in sys.path:\n sys.path.append(p)\n\n def custom_import(self, name, globals=None, locals=None, fromlist=(), level=0):\n if name in self._allowed_modules:\n m = None\n if self._allowed_modules[name] is None:\n m = importlib.import_module(name)\n self._allowed_modules[name] = m\n else:\n m = self._allowed_modules[name]\n\n return m\n\n raise Exception(\"'{0}' is not configured as a supported import module\".format(name))\n\n @staticmethod\n def custom_write(obj):\n \"\"\"\n Custom hooks which controls the way objects/lists/tuples/dicts behave in\n RestrictedPython\n \"\"\"\n return obj\n\n @staticmethod\n def custom_get_item(obj, key):\n return obj[key]\n\n @staticmethod\n def custom_get_iter(obj):\n return iter(obj)\n\n @staticmethod\n def add_result_column(result, column_name, friendly_name, column_type):\n \"\"\"Helper function to add columns inside a Python script running in Redash in an easier way\n\n Parameters:\n :result dict: The result dict\n :column_name string: Name of the column, which should be consisted of lowercase latin letters or underscore.\n :friendly_name string: Name of the column for display\n :column_type string: Type of the column. 
Check supported data types for details.\n \"\"\"\n if column_type not in SUPPORTED_COLUMN_TYPES:\n raise Exception(\"'{0}' is not a supported column type\".format(column_type))\n\n if \"columns\" not in result:\n result[\"columns\"] = []\n\n result[\"columns\"].append({\n \"name\": column_name,\n \"friendly_name\": friendly_name,\n \"type\": column_type\n })\n\n @staticmethod\n def add_result_row(result, values):\n \"\"\"Helper function to add one row to results set.\n\n Parameters:\n :result dict: The result dict\n :values dict: One row of result in dict. The key should be one of the column names. The value is the value of the column in this row.\n \"\"\"\n if \"rows\" not in result:\n result[\"rows\"] = []\n\n result[\"rows\"].append(values)\n\n @staticmethod\n def execute_query(data_source_name_or_id, query):\n \"\"\"Run query from specific data source.\n\n Parameters:\n :data_source_name_or_id string|integer: Name or ID of the data source\n :query string: Query to run\n \"\"\"\n try:\n if type(data_source_name_or_id) == int:\n data_source = models.DataSource.get_by_id(data_source_name_or_id)\n else:\n data_source = models.DataSource.get_by_name(data_source_name_or_id)\n except models.NoResultFound:\n raise Exception(\"Wrong data source name/id: %s.\" % data_source_name_or_id)\n\n # TODO: pass the user here...\n data, error = data_source.query_runner.run_query(query, None)\n if error is not None:\n raise Exception(error)\n\n # TODO: allow avoiding the JSON dumps/loads in same process\n return json_loads(data)\n\n @staticmethod\n def get_source_schema(data_source_name_or_id):\n \"\"\"Get schema from specific data source.\n\n :param data_source_name_or_id: string|integer: Name or ID of the data source\n :return:\n \"\"\"\n try:\n if type(data_source_name_or_id) == int:\n data_source = models.DataSource.get_by_id(data_source_name_or_id)\n else:\n data_source = models.DataSource.get_by_name(data_source_name_or_id)\n except models.NoResultFound:\n raise Exception(\"Wrong data source name/id: %s.\" % data_source_name_or_id)\n schema = data_source.query_runner.get_schema()\n return schema\n\n @staticmethod\n def get_query_result(query_id):\n \"\"\"Get result of an existing query.\n\n Parameters:\n :query_id integer: ID of existing query\n \"\"\"\n try:\n query = models.Query.get_by_id(query_id)\n except models.NoResultFound:\n raise Exception(\"Query id %s does not exist.\" % query_id)\n\n if query.latest_query_data is None:\n raise Exception(\"Query does not have results yet.\")\n\n if query.latest_query_data.data is None:\n raise Exception(\"Query does not have results yet.\")\n\n return json_loads(query.latest_query_data.data)\n\n def test_connection(self):\n pass\n\n def run_query(self, query, user):\n try:\n error = None\n\n code = compile_restricted(query, '<string>', 'exec')\n\n builtins = safe_builtins.copy()\n builtins[\"_write_\"] = self.custom_write\n builtins[\"__import__\"] = self.custom_import\n builtins[\"_getattr_\"] = getattr\n builtins[\"getattr\"] = getattr\n builtins[\"_setattr_\"] = setattr\n builtins[\"setattr\"] = setattr\n builtins[\"_getitem_\"] = self.custom_get_item\n builtins[\"_getiter_\"] = self.custom_get_iter\n builtins[\"_print_\"] = self._custom_print\n\n # Layer in our own additional set of builtins that we have\n # considered safe.\n for key in self.safe_builtins:\n builtins[key] = __builtins__[key]\n\n restricted_globals = dict(__builtins__=builtins)\n restricted_globals[\"get_query_result\"] = self.get_query_result\n 
restricted_globals[\"get_source_schema\"] = self.get_source_schema\n restricted_globals[\"execute_query\"] = self.execute_query\n restricted_globals[\"add_result_column\"] = self.add_result_column\n restricted_globals[\"add_result_row\"] = self.add_result_row\n restricted_globals[\"disable_print_log\"] = self._custom_print.disable\n restricted_globals[\"enable_print_log\"] = self._custom_print.enable\n\n # Supported data types\n restricted_globals[\"TYPE_DATETIME\"] = TYPE_DATETIME\n restricted_globals[\"TYPE_BOOLEAN\"] = TYPE_BOOLEAN\n restricted_globals[\"TYPE_INTEGER\"] = TYPE_INTEGER\n restricted_globals[\"TYPE_STRING\"] = TYPE_STRING\n restricted_globals[\"TYPE_DATE\"] = TYPE_DATE\n restricted_globals[\"TYPE_FLOAT\"] = TYPE_FLOAT\n\n\n # TODO: Figure out the best way to have a timeout on a script\n # One option is to use ETA with Celery + timeouts on workers\n # And replacement of worker process every X requests handled.\n\n exec((code), restricted_globals, self._script_locals)\n\n result = self._script_locals['result']\n result['log'] = self._custom_print.lines\n json_data = json_dumps(result)\n except KeyboardInterrupt:\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as e:\n error = str(type(e)) + \" \" + str(e)\n json_data = None\n\n return json_data, error\n\n\nregister(Python)\n", "path": "redash/query_runner/python.py"}], "after_files": [{"content": "import datetime\nimport importlib\nimport logging\nimport sys\n\nfrom redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\nfrom redash import models\nfrom RestrictedPython import compile_restricted\nfrom RestrictedPython.Guards import safe_builtins\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CustomPrint(object):\n \"\"\"CustomPrint redirect \"print\" calls to be sent as \"log\" on the result object.\"\"\"\n def __init__(self):\n self.enabled = True\n self.lines = []\n\n def write(self, text):\n if self.enabled:\n if text and text.strip():\n log_line = \"[{0}] {1}\".format(datetime.datetime.utcnow().isoformat(), text)\n self.lines.append(log_line)\n\n def enable(self):\n self.enabled = True\n\n def disable(self):\n self.enabled = False\n\n def __call__(self):\n return self\n\n\nclass Python(BaseQueryRunner):\n safe_builtins = (\n 'sorted', 'reversed', 'map', 'reduce', 'any', 'all',\n 'slice', 'filter', 'len', 'next', 'enumerate',\n 'sum', 'abs', 'min', 'max', 'round', 'cmp', 'divmod',\n 'str', 'unicode', 'int', 'float', 'complex',\n 'tuple', 'set', 'list', 'dict', 'bool',\n )\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'allowedImportModules': {\n 'type': 'string',\n 'title': 'Modules to import prior to running the script'\n },\n 'additionalModulesPaths': {\n 'type': 'string'\n }\n },\n }\n\n @classmethod\n def enabled(cls):\n return True\n\n @classmethod\n def annotate_query(cls):\n return False\n\n def __init__(self, configuration):\n super(Python, self).__init__(configuration)\n\n self.syntax = \"python\"\n\n self._allowed_modules = {}\n self._script_locals = {\"result\": {\"rows\": [], \"columns\": [], \"log\": []}}\n self._enable_print_log = True\n self._custom_print = CustomPrint()\n\n if self.configuration.get(\"allowedImportModules\", None):\n for item in self.configuration[\"allowedImportModules\"].split(\",\"):\n self._allowed_modules[item] = None\n\n if self.configuration.get(\"additionalModulesPaths\", None):\n for p in self.configuration[\"additionalModulesPaths\"].split(\",\"):\n if p not in 
sys.path:\n sys.path.append(p)\n\n def custom_import(self, name, globals=None, locals=None, fromlist=(), level=0):\n if name in self._allowed_modules:\n m = None\n if self._allowed_modules[name] is None:\n m = importlib.import_module(name)\n self._allowed_modules[name] = m\n else:\n m = self._allowed_modules[name]\n\n return m\n\n raise Exception(\"'{0}' is not configured as a supported import module\".format(name))\n\n @staticmethod\n def custom_write(obj):\n \"\"\"\n Custom hooks which controls the way objects/lists/tuples/dicts behave in\n RestrictedPython\n \"\"\"\n return obj\n\n @staticmethod\n def custom_get_item(obj, key):\n return obj[key]\n\n @staticmethod\n def custom_get_iter(obj):\n return iter(obj)\n\n @staticmethod\n def add_result_column(result, column_name, friendly_name, column_type):\n \"\"\"Helper function to add columns inside a Python script running in Redash in an easier way\n\n Parameters:\n :result dict: The result dict\n :column_name string: Name of the column, which should be consisted of lowercase latin letters or underscore.\n :friendly_name string: Name of the column for display\n :column_type string: Type of the column. Check supported data types for details.\n \"\"\"\n if column_type not in SUPPORTED_COLUMN_TYPES:\n raise Exception(\"'{0}' is not a supported column type\".format(column_type))\n\n if \"columns\" not in result:\n result[\"columns\"] = []\n\n result[\"columns\"].append({\n \"name\": column_name,\n \"friendly_name\": friendly_name,\n \"type\": column_type\n })\n\n @staticmethod\n def add_result_row(result, values):\n \"\"\"Helper function to add one row to results set.\n\n Parameters:\n :result dict: The result dict\n :values dict: One row of result in dict. The key should be one of the column names. The value is the value of the column in this row.\n \"\"\"\n if \"rows\" not in result:\n result[\"rows\"] = []\n\n result[\"rows\"].append(values)\n\n @staticmethod\n def execute_query(data_source_name_or_id, query):\n \"\"\"Run query from specific data source.\n\n Parameters:\n :data_source_name_or_id string|integer: Name or ID of the data source\n :query string: Query to run\n \"\"\"\n try:\n if type(data_source_name_or_id) == int:\n data_source = models.DataSource.get_by_id(data_source_name_or_id)\n else:\n data_source = models.DataSource.get_by_name(data_source_name_or_id)\n except models.NoResultFound:\n raise Exception(\"Wrong data source name/id: %s.\" % data_source_name_or_id)\n\n # TODO: pass the user here...\n data, error = data_source.query_runner.run_query(query, None)\n if error is not None:\n raise Exception(error)\n\n # TODO: allow avoiding the JSON dumps/loads in same process\n return json_loads(data)\n\n @staticmethod\n def get_source_schema(data_source_name_or_id):\n \"\"\"Get schema from specific data source.\n\n :param data_source_name_or_id: string|integer: Name or ID of the data source\n :return:\n \"\"\"\n try:\n if type(data_source_name_or_id) == int:\n data_source = models.DataSource.get_by_id(data_source_name_or_id)\n else:\n data_source = models.DataSource.get_by_name(data_source_name_or_id)\n except models.NoResultFound:\n raise Exception(\"Wrong data source name/id: %s.\" % data_source_name_or_id)\n schema = data_source.query_runner.get_schema()\n return schema\n\n @staticmethod\n def get_query_result(query_id):\n \"\"\"Get result of an existing query.\n\n Parameters:\n :query_id integer: ID of existing query\n \"\"\"\n try:\n query = models.Query.get_by_id(query_id)\n except models.NoResultFound:\n raise 
Exception(\"Query id %s does not exist.\" % query_id)\n\n if query.latest_query_data is None:\n raise Exception(\"Query does not have results yet.\")\n\n if query.latest_query_data.data is None:\n raise Exception(\"Query does not have results yet.\")\n\n return json_loads(query.latest_query_data.data)\n\n def get_current_user(self):\n return self._current_user.to_dict()\n\n def test_connection(self):\n pass\n\n def run_query(self, query, user):\n self._current_user = user\n\n try:\n error = None\n\n code = compile_restricted(query, '<string>', 'exec')\n\n builtins = safe_builtins.copy()\n builtins[\"_write_\"] = self.custom_write\n builtins[\"__import__\"] = self.custom_import\n builtins[\"_getattr_\"] = getattr\n builtins[\"getattr\"] = getattr\n builtins[\"_setattr_\"] = setattr\n builtins[\"setattr\"] = setattr\n builtins[\"_getitem_\"] = self.custom_get_item\n builtins[\"_getiter_\"] = self.custom_get_iter\n builtins[\"_print_\"] = self._custom_print\n\n # Layer in our own additional set of builtins that we have\n # considered safe.\n for key in self.safe_builtins:\n builtins[key] = __builtins__[key]\n\n restricted_globals = dict(__builtins__=builtins)\n restricted_globals[\"get_query_result\"] = self.get_query_result\n restricted_globals[\"get_source_schema\"] = self.get_source_schema\n restricted_globals[\"get_current_user\"] = self.get_current_user\n restricted_globals[\"execute_query\"] = self.execute_query\n restricted_globals[\"add_result_column\"] = self.add_result_column\n restricted_globals[\"add_result_row\"] = self.add_result_row\n restricted_globals[\"disable_print_log\"] = self._custom_print.disable\n restricted_globals[\"enable_print_log\"] = self._custom_print.enable\n\n # Supported data types\n restricted_globals[\"TYPE_DATETIME\"] = TYPE_DATETIME\n restricted_globals[\"TYPE_BOOLEAN\"] = TYPE_BOOLEAN\n restricted_globals[\"TYPE_INTEGER\"] = TYPE_INTEGER\n restricted_globals[\"TYPE_STRING\"] = TYPE_STRING\n restricted_globals[\"TYPE_DATE\"] = TYPE_DATE\n restricted_globals[\"TYPE_FLOAT\"] = TYPE_FLOAT\n\n\n # TODO: Figure out the best way to have a timeout on a script\n # One option is to use ETA with Celery + timeouts on workers\n # And replacement of worker process every X requests handled.\n\n exec((code), restricted_globals, self._script_locals)\n\n result = self._script_locals['result']\n result['log'] = self._custom_print.lines\n json_data = json_dumps(result)\n except KeyboardInterrupt:\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as e:\n error = str(type(e)) + \" \" + str(e)\n json_data = None\n\n return json_data, error\n\n\nregister(Python)\n", "path": "redash/query_runner/python.py"}]} | 3,223 | 231 |
gh_patches_debug_25801 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Results framework loads very slowly for projects with a lot of indicator dimensions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/views/indicator_dimension.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from akvo.rsr.models import IndicatorDimension
9
10 from ..serializers import IndicatorDimensionSerializer
11 from ..viewsets import PublicProjectViewSet
12
13
14 class IndicatorDimensionViewSet(PublicProjectViewSet):
15 """
16 """
17 queryset = IndicatorDimension.objects.all()
18 serializer_class = IndicatorDimensionSerializer
19 project_relation = 'indicator__result__project__'
20
```
Path: `akvo/rest/pagination.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from rest_framework import pagination
8 from rest_framework.response import Response
9
10
11 class LimitSizePageNumberPagination(pagination.PageNumberPagination):
12 page_size = 30
13 page_size_query_param = 'limit'
14 max_page_size = 100
15
16
17 class TastypieOffsetPagination(pagination.LimitOffsetPagination):
18
19 def get_paginated_response(self, data):
20 """ Emulate the old style Tastypie format if the URL contains /api/v1/
21 """
22 return Response({
23 'meta': {
24 'next': self.get_next_link(),
25 'previous': self.get_previous_link(),
26 'total_count': self.count,
27 'limit': self.limit,
28 'offset': self.offset,
29 },
30 'objects': data
31 })
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/pagination.py b/akvo/rest/pagination.py
--- a/akvo/rest/pagination.py
+++ b/akvo/rest/pagination.py
@@ -8,12 +8,18 @@
from rest_framework.response import Response
-class LimitSizePageNumberPagination(pagination.PageNumberPagination):
+class StandardSizePageNumberPagination(pagination.PageNumberPagination):
page_size = 30
page_size_query_param = 'limit'
max_page_size = 100
+class LargeSizePageNumberPagination(pagination.PageNumberPagination):
+ page_size = 100
+ page_size_query_param = 'limit'
+ max_page_size = 1000
+
+
class TastypieOffsetPagination(pagination.LimitOffsetPagination):
def get_paginated_response(self, data):
diff --git a/akvo/rest/views/indicator_dimension.py b/akvo/rest/views/indicator_dimension.py
--- a/akvo/rest/views/indicator_dimension.py
+++ b/akvo/rest/views/indicator_dimension.py
@@ -6,6 +6,7 @@
from akvo.rsr.models import IndicatorDimension
+from akvo.rest.pagination import LargeSizePageNumberPagination
from ..serializers import IndicatorDimensionSerializer
from ..viewsets import PublicProjectViewSet
@@ -17,3 +18,4 @@
queryset = IndicatorDimension.objects.all()
serializer_class = IndicatorDimensionSerializer
project_relation = 'indicator__result__project__'
+ pagination_class = LargeSizePageNumberPagination
| {"golden_diff": "diff --git a/akvo/rest/pagination.py b/akvo/rest/pagination.py\n--- a/akvo/rest/pagination.py\n+++ b/akvo/rest/pagination.py\n@@ -8,12 +8,18 @@\n from rest_framework.response import Response\n \n \n-class LimitSizePageNumberPagination(pagination.PageNumberPagination):\n+class StandardSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n \n \n+class LargeSizePageNumberPagination(pagination.PageNumberPagination):\n+ page_size = 100\n+ page_size_query_param = 'limit'\n+ max_page_size = 1000\n+\n+\n class TastypieOffsetPagination(pagination.LimitOffsetPagination):\n \n def get_paginated_response(self, data):\ndiff --git a/akvo/rest/views/indicator_dimension.py b/akvo/rest/views/indicator_dimension.py\n--- a/akvo/rest/views/indicator_dimension.py\n+++ b/akvo/rest/views/indicator_dimension.py\n@@ -6,6 +6,7 @@\n \n \n from akvo.rsr.models import IndicatorDimension\n+from akvo.rest.pagination import LargeSizePageNumberPagination\n \n from ..serializers import IndicatorDimensionSerializer\n from ..viewsets import PublicProjectViewSet\n@@ -17,3 +18,4 @@\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n+ pagination_class = LargeSizePageNumberPagination\n", "issue": "Results framework loads very slowly for projects with lot of indicator dimensions\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import IndicatorDimension\n\nfrom ..serializers import IndicatorDimensionSerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass IndicatorDimensionViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n", "path": "akvo/rest/views/indicator_dimension.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass LimitSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n\n\nclass TastypieOffsetPagination(pagination.LimitOffsetPagination):\n\n def get_paginated_response(self, data):\n \"\"\" Emulate the old style Tastypie format if the URL contains /api/v1/\n \"\"\"\n return Response({\n 'meta': {\n 'next': self.get_next_link(),\n 'previous': self.get_previous_link(),\n 'total_count': self.count,\n 'limit': self.limit,\n 'offset': self.offset,\n },\n 'objects': data\n })\n", "path": "akvo/rest/pagination.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import 
IndicatorDimension\nfrom akvo.rest.pagination import LargeSizePageNumberPagination\n\nfrom ..serializers import IndicatorDimensionSerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass IndicatorDimensionViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = IndicatorDimension.objects.all()\n serializer_class = IndicatorDimensionSerializer\n project_relation = 'indicator__result__project__'\n pagination_class = LargeSizePageNumberPagination\n", "path": "akvo/rest/views/indicator_dimension.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom rest_framework import pagination\nfrom rest_framework.response import Response\n\n\nclass StandardSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 30\n page_size_query_param = 'limit'\n max_page_size = 100\n\n\nclass LargeSizePageNumberPagination(pagination.PageNumberPagination):\n page_size = 100\n page_size_query_param = 'limit'\n max_page_size = 1000\n\n\nclass TastypieOffsetPagination(pagination.LimitOffsetPagination):\n\n def get_paginated_response(self, data):\n \"\"\" Emulate the old style Tastypie format if the URL contains /api/v1/\n \"\"\"\n return Response({\n 'meta': {\n 'next': self.get_next_link(),\n 'previous': self.get_previous_link(),\n 'total_count': self.count,\n 'limit': self.limit,\n 'offset': self.offset,\n },\n 'objects': data\n })\n", "path": "akvo/rest/pagination.py"}]} | 752 | 336 |
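As a usage note for the pagination change above: `LargeSizePageNumberPagination` keeps the `limit` query parameter, so clients fetching indicator dimensions can request larger pages (up to `max_page_size = 1000`). In the sketch below the host name is hypothetical and the response keys are DRF's defaults for page-number pagination.

```python
# Hypothetical client call against the indicator_dimension endpoint (sketch).
import requests

resp = requests.get(
    "https://rsr.example.org/rest/v1/indicator_dimension/",
    params={"format": "json", "limit": 500},  # honoured up to max_page_size=1000
)
payload = resp.json()
print(payload["count"], len(payload["results"]))
```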
gh_patches_debug_317 | rasdani/github-patches | git_diff | jazzband__pip-tools-1871 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Convert the README from rst to md
<!--- Describe the changes here. --->
This PR converts the documentation from README.rst to README.md
Related: https://github.com/jazzband/pip-tools/issues/1856
##### Contributor checklist
- [ ] Provided the tests for the changes.
- [x] Assure PR title is short, clear, and good to be included in the user-oriented changelog
##### Maintainer checklist
- [ ] Assure one of these labels is present: `backwards incompatible`, `feature`, `enhancement`, `deprecation`, `bug`, `dependency`, `docs` or `skip-changelog` as they determine changelog listing.
- [ ] Assign the PR to an existing or new milestone for the target version (following [Semantic Versioning](https://blog.versioneye.com/2014/01/16/semantic-versioning/)).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # https://www.sphinx-doc.org/en/master/usage/configuration.html
2 """Configuration file for the Sphinx documentation builder."""
3
4 from __future__ import annotations
5
6 from functools import partial
7 from pathlib import Path
8
9 from setuptools_scm import get_version
10
11 # -- Path setup --------------------------------------------------------------
12
13 PROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()
14 get_scm_version = partial(get_version, root=PROJECT_ROOT_DIR)
15
16
17 # -- Project information -----------------------------------------------------
18
19 project = "pip-tools"
20 author = f"{project} Contributors"
21 copyright = f"The {author}"
22
23 # The short X.Y version
24 version = ".".join(
25 get_scm_version(
26 local_scheme="no-local-version",
27 ).split(
28 "."
29 )[:3],
30 )
31
32 # The full version, including alpha/beta/rc tags
33 release = get_scm_version()
34
35
36 # -- General configuration ---------------------------------------------------
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = ["myst_parser"]
42
43
44 # -- Options for HTML output -------------------------------------------------
45
46 # The theme to use for HTML and HTML Help pages. See the documentation for
47 # a list of builtin themes.
48 #
49 html_theme = "furo"
50
51
52 # -------------------------------------------------------------------------
53 default_role = "any"
54 nitpicky = True
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -52,3 +52,4 @@
# -------------------------------------------------------------------------
default_role = "any"
nitpicky = True
+suppress_warnings = ["myst.xref_missing"]
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -52,3 +52,4 @@\n # -------------------------------------------------------------------------\n default_role = \"any\"\n nitpicky = True\n+suppress_warnings = [\"myst.xref_missing\"]\n", "issue": "Convert the README from rst to md\n<!--- Describe the changes here. --->\r\nThis PR converts the documentation from README.rst to README.md\r\nRelated: https://github.com/jazzband/pip-tools/issues/1856\r\n##### Contributor checklist\r\n\r\n- [ ] Provided the tests for the changes.\r\n- [x] Assure PR title is short, clear, and good to be included in the user-oriented changelog\r\n\r\n##### Maintainer checklist\r\n\r\n- [ ] Assure one of these labels is present: `backwards incompatible`, `feature`, `enhancement`, `deprecation`, `bug`, `dependency`, `docs` or `skip-changelog` as they determine changelog listing.\r\n- [ ] Assign the PR to an existing or new milestone for the target version (following [Semantic Versioning](https://blog.versioneye.com/2014/01/16/semantic-versioning/)).\r\n\n", "before_files": [{"content": "# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nfrom __future__ import annotations\n\nfrom functools import partial\nfrom pathlib import Path\n\nfrom setuptools_scm import get_version\n\n# -- Path setup --------------------------------------------------------------\n\nPROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()\nget_scm_version = partial(get_version, root=PROJECT_ROOT_DIR)\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pip-tools\"\nauthor = f\"{project} Contributors\"\ncopyright = f\"The {author}\"\n\n# The short X.Y version\nversion = \".\".join(\n get_scm_version(\n local_scheme=\"no-local-version\",\n ).split(\n \".\"\n )[:3],\n)\n\n# The full version, including alpha/beta/rc tags\nrelease = get_scm_version()\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\"myst_parser\"]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\n\n\n# -------------------------------------------------------------------------\ndefault_role = \"any\"\nnitpicky = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\"\"\"Configuration file for the Sphinx documentation builder.\"\"\"\n\nfrom __future__ import annotations\n\nfrom functools import partial\nfrom pathlib import Path\n\nfrom setuptools_scm import get_version\n\n# -- Path setup --------------------------------------------------------------\n\nPROJECT_ROOT_DIR = Path(__file__).parents[1].resolve()\nget_scm_version = partial(get_version, root=PROJECT_ROOT_DIR)\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"pip-tools\"\nauthor = f\"{project} Contributors\"\ncopyright = f\"The {author}\"\n\n# The short X.Y version\nversion = \".\".join(\n get_scm_version(\n local_scheme=\"no-local-version\",\n ).split(\n \".\"\n )[:3],\n)\n\n# The full version, including alpha/beta/rc tags\nrelease = get_scm_version()\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\"myst_parser\"]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\n\n\n# -------------------------------------------------------------------------\ndefault_role = \"any\"\nnitpicky = True\nsuppress_warnings = [\"myst.xref_missing\"]\n", "path": "docs/conf.py"}]} | 841 | 61 |
gh_patches_debug_40399 | rasdani/github-patches | git_diff | SeldonIO__MLServer-233 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support MLflow current protocol
As a follow-up to #167, it would be interesting to explore adding a custom endpoint to the `mlserver-mlflow` runtime which supports [MLflow's existing API](https://www.mlflow.org/docs/latest/models.html#deploy-mlflow-models). This would help reduce friction on user adoption of MLServer, as well as serve as a temporary stopgap for users while they adopt the V2 protocol.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtimes/mlflow/mlserver_mlflow/runtime.py`
Content:
```
1 import mlflow
2
3 from mlserver.types import InferenceRequest, InferenceResponse
4 from mlserver.model import MLModel
5 from mlserver.utils import get_model_uri
6 from mlserver.codecs import get_decoded_or_raw
7
8 from .encoding import to_outputs
9
10
11 class MLflowRuntime(MLModel):
12 """
13 Implementation of the MLModel interface to load and serve `scikit-learn`
14 models persisted with `joblib`.
15 """
16
17 async def load(self) -> bool:
18 # TODO: Log info message
19 model_uri = await get_model_uri(self._settings)
20 self._model = mlflow.pyfunc.load_model(model_uri)
21
22 self.ready = True
23 return self.ready
24
25 async def predict(self, payload: InferenceRequest) -> InferenceResponse:
26 decoded_payload = get_decoded_or_raw(payload)
27
28 # TODO: Can `output` be a dictionary of tensors?
29 model_output = self._model.predict(decoded_payload)
30
31 return InferenceResponse(
32 model_name=self.name,
33 model_version=self.version,
34 outputs=to_outputs(model_output),
35 )
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/runtimes/mlflow/mlserver_mlflow/runtime.py b/runtimes/mlflow/mlserver_mlflow/runtime.py
--- a/runtimes/mlflow/mlserver_mlflow/runtime.py
+++ b/runtimes/mlflow/mlserver_mlflow/runtime.py
@@ -1,9 +1,29 @@
import mlflow
+from io import StringIO
+from fastapi import Request, Response
+
+from mlflow.exceptions import MlflowException
+from mlflow.pyfunc.scoring_server import (
+ CONTENT_TYPES,
+ CONTENT_TYPE_CSV,
+ CONTENT_TYPE_JSON,
+ CONTENT_TYPE_JSON_SPLIT_ORIENTED,
+ CONTENT_TYPE_JSON_RECORDS_ORIENTED,
+ CONTENT_TYPE_JSON_SPLIT_NUMPY,
+ parse_csv_input,
+ infer_and_parse_json_input,
+ parse_json_input,
+ parse_split_oriented_json_input_to_numpy,
+ predictions_to_json,
+)
+
from mlserver.types import InferenceRequest, InferenceResponse
from mlserver.model import MLModel
from mlserver.utils import get_model_uri
from mlserver.codecs import get_decoded_or_raw
+from mlserver.handlers import custom_handler
+from mlserver.errors import InferenceError
from .encoding import to_outputs
@@ -14,10 +34,68 @@
models persisted with `joblib`.
"""
+ # TODO: Decouple from REST
+ @custom_handler(rest_path="/invocations")
+ async def invocations(self, request: Request) -> Response:
+ """
+ This custom handler is meant to mimic the behaviour of the existing
+ scoring server in MLflow.
+ For details about its implementation, please consult the original
+ implementation in the MLflow repository:
+
+ https://github.com/mlflow/mlflow/blob/master/mlflow/pyfunc/scoring_server/__init__.py
+ """
+ content_type = request.headers.get("content-type", None)
+ raw_data = await request.body()
+ as_str = raw_data.decode("utf-8")
+
+ if content_type == CONTENT_TYPE_CSV:
+ csv_input = StringIO(as_str)
+ data = parse_csv_input(csv_input=csv_input)
+ elif content_type == CONTENT_TYPE_JSON:
+ data = infer_and_parse_json_input(as_str, self._input_schema)
+ elif content_type == CONTENT_TYPE_JSON_SPLIT_ORIENTED:
+ data = parse_json_input(
+ json_input=StringIO(as_str),
+ orient="split",
+ schema=self._input_schema,
+ )
+ elif content_type == CONTENT_TYPE_JSON_RECORDS_ORIENTED:
+ data = parse_json_input(
+ json_input=StringIO(as_str),
+ orient="records",
+ schema=self._input_schema,
+ )
+ elif content_type == CONTENT_TYPE_JSON_SPLIT_NUMPY:
+ data = parse_split_oriented_json_input_to_numpy(as_str)
+ else:
+ content_type_error_message = (
+ "This predictor only supports the following content types, "
+ f"{CONTENT_TYPES}. Got '{content_type}'."
+ )
+ raise InferenceError(content_type_error_message)
+
+ try:
+ raw_predictions = self._model.predict(data)
+ except MlflowException as e:
+ raise InferenceError(e.message)
+ except Exception:
+ error_message = (
+ "Encountered an unexpected error while evaluating the model. Verify"
+ " that the serialized input Dataframe is compatible with the model for"
+ " inference."
+ )
+ raise InferenceError(error_message)
+
+ result = StringIO()
+ predictions_to_json(raw_predictions, result)
+ return Response(content=result.getvalue(), media_type="application/json")
+
async def load(self) -> bool:
# TODO: Log info message
model_uri = await get_model_uri(self._settings)
self._model = mlflow.pyfunc.load_model(model_uri)
+ self._input_schema = self._model.metadata.get_input_schema()
self.ready = True
return self.ready
| {"golden_diff": "diff --git a/runtimes/mlflow/mlserver_mlflow/runtime.py b/runtimes/mlflow/mlserver_mlflow/runtime.py\n--- a/runtimes/mlflow/mlserver_mlflow/runtime.py\n+++ b/runtimes/mlflow/mlserver_mlflow/runtime.py\n@@ -1,9 +1,29 @@\n import mlflow\n \n+from io import StringIO\n+from fastapi import Request, Response\n+\n+from mlflow.exceptions import MlflowException\n+from mlflow.pyfunc.scoring_server import (\n+ CONTENT_TYPES,\n+ CONTENT_TYPE_CSV,\n+ CONTENT_TYPE_JSON,\n+ CONTENT_TYPE_JSON_SPLIT_ORIENTED,\n+ CONTENT_TYPE_JSON_RECORDS_ORIENTED,\n+ CONTENT_TYPE_JSON_SPLIT_NUMPY,\n+ parse_csv_input,\n+ infer_and_parse_json_input,\n+ parse_json_input,\n+ parse_split_oriented_json_input_to_numpy,\n+ predictions_to_json,\n+)\n+\n from mlserver.types import InferenceRequest, InferenceResponse\n from mlserver.model import MLModel\n from mlserver.utils import get_model_uri\n from mlserver.codecs import get_decoded_or_raw\n+from mlserver.handlers import custom_handler\n+from mlserver.errors import InferenceError\n \n from .encoding import to_outputs\n \n@@ -14,10 +34,68 @@\n models persisted with `joblib`.\n \"\"\"\n \n+ # TODO: Decouple from REST\n+ @custom_handler(rest_path=\"/invocations\")\n+ async def invocations(self, request: Request) -> Response:\n+ \"\"\"\n+ This custom handler is meant to mimic the behaviour of the existing\n+ scoring server in MLflow.\n+ For details about its implementation, please consult the original\n+ implementation in the MLflow repository:\n+\n+ https://github.com/mlflow/mlflow/blob/master/mlflow/pyfunc/scoring_server/__init__.py\n+ \"\"\"\n+ content_type = request.headers.get(\"content-type\", None)\n+ raw_data = await request.body()\n+ as_str = raw_data.decode(\"utf-8\")\n+\n+ if content_type == CONTENT_TYPE_CSV:\n+ csv_input = StringIO(as_str)\n+ data = parse_csv_input(csv_input=csv_input)\n+ elif content_type == CONTENT_TYPE_JSON:\n+ data = infer_and_parse_json_input(as_str, self._input_schema)\n+ elif content_type == CONTENT_TYPE_JSON_SPLIT_ORIENTED:\n+ data = parse_json_input(\n+ json_input=StringIO(as_str),\n+ orient=\"split\",\n+ schema=self._input_schema,\n+ )\n+ elif content_type == CONTENT_TYPE_JSON_RECORDS_ORIENTED:\n+ data = parse_json_input(\n+ json_input=StringIO(as_str),\n+ orient=\"records\",\n+ schema=self._input_schema,\n+ )\n+ elif content_type == CONTENT_TYPE_JSON_SPLIT_NUMPY:\n+ data = parse_split_oriented_json_input_to_numpy(as_str)\n+ else:\n+ content_type_error_message = (\n+ \"This predictor only supports the following content types, \"\n+ f\"{CONTENT_TYPES}. Got '{content_type}'.\"\n+ )\n+ raise InferenceError(content_type_error_message)\n+\n+ try:\n+ raw_predictions = self._model.predict(data)\n+ except MlflowException as e:\n+ raise InferenceError(e.message)\n+ except Exception:\n+ error_message = (\n+ \"Encountered an unexpected error while evaluating the model. 
Verify\"\n+ \" that the serialized input Dataframe is compatible with the model for\"\n+ \" inference.\"\n+ )\n+ raise InferenceError(error_message)\n+\n+ result = StringIO()\n+ predictions_to_json(raw_predictions, result)\n+ return Response(content=result.getvalue(), media_type=\"application/json\")\n+\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(self._settings)\n self._model = mlflow.pyfunc.load_model(model_uri)\n+ self._input_schema = self._model.metadata.get_input_schema()\n \n self.ready = True\n return self.ready\n", "issue": "Support MLflow current protocol\nAs a follow-up to #167, it would be interesting to explore adding a custom endpoint to the `mlserver-mlflow` runtime which supports [MLflow's existing API](https://www.mlflow.org/docs/latest/models.html#deploy-mlflow-models). This would help reduce friction on user adoption of MLSever, as well as a temporary stopgap for users while they adopt the V2 protocol.\n", "before_files": [{"content": "import mlflow\n\nfrom mlserver.types import InferenceRequest, InferenceResponse\nfrom mlserver.model import MLModel\nfrom mlserver.utils import get_model_uri\nfrom mlserver.codecs import get_decoded_or_raw\n\nfrom .encoding import to_outputs\n\n\nclass MLflowRuntime(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(self._settings)\n self._model = mlflow.pyfunc.load_model(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n decoded_payload = get_decoded_or_raw(payload)\n\n # TODO: Can `output` be a dictionary of tensors?\n model_output = self._model.predict(decoded_payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=to_outputs(model_output),\n )\n", "path": "runtimes/mlflow/mlserver_mlflow/runtime.py"}], "after_files": [{"content": "import mlflow\n\nfrom io import StringIO\nfrom fastapi import Request, Response\n\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.pyfunc.scoring_server import (\n CONTENT_TYPES,\n CONTENT_TYPE_CSV,\n CONTENT_TYPE_JSON,\n CONTENT_TYPE_JSON_SPLIT_ORIENTED,\n CONTENT_TYPE_JSON_RECORDS_ORIENTED,\n CONTENT_TYPE_JSON_SPLIT_NUMPY,\n parse_csv_input,\n infer_and_parse_json_input,\n parse_json_input,\n parse_split_oriented_json_input_to_numpy,\n predictions_to_json,\n)\n\nfrom mlserver.types import InferenceRequest, InferenceResponse\nfrom mlserver.model import MLModel\nfrom mlserver.utils import get_model_uri\nfrom mlserver.codecs import get_decoded_or_raw\nfrom mlserver.handlers import custom_handler\nfrom mlserver.errors import InferenceError\n\nfrom .encoding import to_outputs\n\n\nclass MLflowRuntime(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n # TODO: Decouple from REST\n @custom_handler(rest_path=\"/invocations\")\n async def invocations(self, request: Request) -> Response:\n \"\"\"\n This custom handler is meant to mimic the behaviour of the existing\n scoring server in MLflow.\n For details about its implementation, please consult the original\n implementation in the MLflow repository:\n\n https://github.com/mlflow/mlflow/blob/master/mlflow/pyfunc/scoring_server/__init__.py\n \"\"\"\n content_type = request.headers.get(\"content-type\", None)\n raw_data 
= await request.body()\n as_str = raw_data.decode(\"utf-8\")\n\n if content_type == CONTENT_TYPE_CSV:\n csv_input = StringIO(as_str)\n data = parse_csv_input(csv_input=csv_input)\n elif content_type == CONTENT_TYPE_JSON:\n data = infer_and_parse_json_input(as_str, self._input_schema)\n elif content_type == CONTENT_TYPE_JSON_SPLIT_ORIENTED:\n data = parse_json_input(\n json_input=StringIO(as_str),\n orient=\"split\",\n schema=self._input_schema,\n )\n elif content_type == CONTENT_TYPE_JSON_RECORDS_ORIENTED:\n data = parse_json_input(\n json_input=StringIO(as_str),\n orient=\"records\",\n schema=self._input_schema,\n )\n elif content_type == CONTENT_TYPE_JSON_SPLIT_NUMPY:\n data = parse_split_oriented_json_input_to_numpy(as_str)\n else:\n content_type_error_message = (\n \"This predictor only supports the following content types, \"\n f\"{CONTENT_TYPES}. Got '{content_type}'.\"\n )\n raise InferenceError(content_type_error_message)\n\n try:\n raw_predictions = self._model.predict(data)\n except MlflowException as e:\n raise InferenceError(e.message)\n except Exception:\n error_message = (\n \"Encountered an unexpected error while evaluating the model. Verify\"\n \" that the serialized input Dataframe is compatible with the model for\"\n \" inference.\"\n )\n raise InferenceError(error_message)\n\n result = StringIO()\n predictions_to_json(raw_predictions, result)\n return Response(content=result.getvalue(), media_type=\"application/json\")\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(self._settings)\n self._model = mlflow.pyfunc.load_model(model_uri)\n self._input_schema = self._model.metadata.get_input_schema()\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: InferenceRequest) -> InferenceResponse:\n decoded_payload = get_decoded_or_raw(payload)\n\n # TODO: Can `output` be a dictionary of tensors?\n model_output = self._model.predict(decoded_payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=to_outputs(model_output),\n )\n", "path": "runtimes/mlflow/mlserver_mlflow/runtime.py"}]} | 655 | 873 |
gh_patches_debug_38605 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
add 2.0 payments to history pages
@ehmatthes at https://github.com/gratipay/gratipay.com/issues/3446#issuecomment-103163666:
> I made an account for introtopython.org last week, and it was one of the teams that should have been included in last week's payday. But my balance is low enough that I didn't get a payout.
>
> How will I know if my accounts are set up correctly, ie if my team was included in last week's payday? I don't know how to view my team account, or if it's even possible to view a profile page for that team yet. introtopython is not included in my teams page. My individual history doesn't show anything after payday 152.
#### Notify:
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2233
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2358
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2405
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2514
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2468
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2354
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2540
- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2450
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/utils/history.py`
Content:
```
1 from datetime import datetime
2 from decimal import Decimal
3
4 from aspen import Response
5 from psycopg2 import IntegrityError
6
7
8 def get_end_of_year_balance(db, participant, year, current_year):
9 if year == current_year:
10 return participant.balance
11 start = participant.claimed_time or participant.ctime
12 if year < start.year:
13 return Decimal('0.00')
14
15 balance = db.one("""
16 SELECT balance
17 FROM balances_at
18 WHERE participant = %s
19 AND "at" = %s
20 """, (participant.id, datetime(year+1, 1, 1)))
21 if balance is not None:
22 return balance
23
24 username = participant.username
25 start_balance = get_end_of_year_balance(db, participant, year-1, current_year)
26 delta = db.one("""
27 SELECT (
28 SELECT COALESCE(sum(amount), 0) AS a
29 FROM exchanges
30 WHERE participant = %(username)s
31 AND extract(year from timestamp) = %(year)s
32 AND amount > 0
33 AND (status is null OR status = 'succeeded')
34 ) + (
35 SELECT COALESCE(sum(amount-fee), 0) AS a
36 FROM exchanges
37 WHERE participant = %(username)s
38 AND extract(year from timestamp) = %(year)s
39 AND amount < 0
40 AND (status is null OR status <> 'failed')
41 ) + (
42 SELECT COALESCE(sum(-amount), 0) AS a
43 FROM transfers
44 WHERE tipper = %(username)s
45 AND extract(year from timestamp) = %(year)s
46 ) + (
47 SELECT COALESCE(sum(amount), 0) AS a
48 FROM transfers
49 WHERE tippee = %(username)s
50 AND extract(year from timestamp) = %(year)s
51 ) AS delta
52 """, locals())
53 balance = start_balance + delta
54 try:
55 db.run("""
56 INSERT INTO balances_at
57 (participant, at, balance)
58 VALUES (%s, %s, %s)
59 """, (participant.id, datetime(year+1, 1, 1), balance))
60 except IntegrityError:
61 pass
62 return balance
63
64
65 def iter_payday_events(db, participant, year=None):
66 """Yields payday events for the given participant.
67 """
68 current_year = datetime.utcnow().year
69 year = year or current_year
70
71 username = participant.username
72 exchanges = db.all("""
73 SELECT *
74 FROM exchanges
75 WHERE participant=%(username)s
76 AND extract(year from timestamp) = %(year)s
77 """, locals(), back_as=dict)
78 transfers = db.all("""
79 SELECT *
80 FROM transfers
81 WHERE (tipper=%(username)s OR tippee=%(username)s)
82 AND extract(year from timestamp) = %(year)s
83 """, locals(), back_as=dict)
84
85 if not (exchanges or transfers):
86 return
87
88 if transfers:
89 yield dict(
90 kind='totals',
91 given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),
92 received=sum(t['amount'] for t in transfers if t['tippee'] == username),
93 )
94
95 payday_dates = db.all("""
96 SELECT ts_start::date
97 FROM paydays
98 ORDER BY ts_start ASC
99 """)
100
101 balance = get_end_of_year_balance(db, participant, year, current_year)
102 prev_date = None
103 get_timestamp = lambda e: e['timestamp']
104 events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
105 for event in events:
106
107 event['balance'] = balance
108
109 event_date = event['timestamp'].date()
110 if event_date != prev_date:
111 if prev_date:
112 yield dict(kind='day-close', balance=balance)
113 day_open = dict(kind='day-open', date=event_date, balance=balance)
114 if payday_dates:
115 while payday_dates and payday_dates[-1] > event_date:
116 payday_dates.pop()
117 payday_date = payday_dates[-1] if payday_dates else None
118 if event_date == payday_date:
119 day_open['payday_number'] = len(payday_dates) - 1
120 yield day_open
121 prev_date = event_date
122
123 if 'fee' in event:
124 if event['amount'] > 0:
125 kind = 'charge'
126 if event['status'] in (None, 'succeeded'):
127 balance -= event['amount']
128 else:
129 kind = 'credit'
130 if event['status'] != 'failed':
131 balance -= event['amount'] - event['fee']
132 else:
133 kind = 'transfer'
134 if event['tippee'] == username:
135 balance -= event['amount']
136 else:
137 balance += event['amount']
138 event['kind'] = kind
139
140 yield event
141
142 yield dict(kind='day-close', balance=balance)
143
144
145 def export_history(participant, year, mode, key, back_as='namedtuple', require_key=False):
146 db = participant.db
147 params = dict(username=participant.username, year=year)
148 out = {}
149 if mode == 'aggregate':
150 out['given'] = lambda: db.all("""
151 SELECT tippee, sum(amount) AS amount
152 FROM transfers
153 WHERE tipper = %(username)s
154 AND extract(year from timestamp) = %(year)s
155 GROUP BY tippee
156 """, params, back_as=back_as)
157 out['taken'] = lambda: db.all("""
158 SELECT tipper AS team, sum(amount) AS amount
159 FROM transfers
160 WHERE tippee = %(username)s
161 AND context = 'take'
162 AND extract(year from timestamp) = %(year)s
163 GROUP BY tipper
164 """, params, back_as=back_as)
165 else:
166 out['exchanges'] = lambda: db.all("""
167 SELECT timestamp, amount, fee, status, note
168 FROM exchanges
169 WHERE participant = %(username)s
170 AND extract(year from timestamp) = %(year)s
171 ORDER BY timestamp ASC
172 """, params, back_as=back_as)
173 out['given'] = lambda: db.all("""
174 SELECT timestamp, tippee, amount, context
175 FROM transfers
176 WHERE tipper = %(username)s
177 AND extract(year from timestamp) = %(year)s
178 ORDER BY timestamp ASC
179 """, params, back_as=back_as)
180 out['taken'] = lambda: db.all("""
181 SELECT timestamp, tipper AS team, amount
182 FROM transfers
183 WHERE tippee = %(username)s
184 AND context = 'take'
185 AND extract(year from timestamp) = %(year)s
186 ORDER BY timestamp ASC
187 """, params, back_as=back_as)
188 out['received'] = lambda: db.all("""
189 SELECT timestamp, amount, context
190 FROM transfers
191 WHERE tippee = %(username)s
192 AND context NOT IN ('take', 'take-over')
193 AND extract(year from timestamp) = %(year)s
194 ORDER BY timestamp ASC
195 """, params, back_as=back_as)
196
197 if key:
198 try:
199 return out[key]()
200 except KeyError:
201 raise Response(400, "bad key `%s`" % key)
202 elif require_key:
203 raise Response(400, "missing `key` parameter")
204 else:
205 return {k: v() for k, v in out.items()}
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py
--- a/gratipay/utils/history.py
+++ b/gratipay/utils/history.py
@@ -75,6 +75,12 @@
WHERE participant=%(username)s
AND extract(year from timestamp) = %(year)s
""", locals(), back_as=dict)
+ payments = db.all("""
+ SELECT *
+ FROM payments
+ WHERE participant=%(username)s
+ AND extract(year from timestamp) = %(year)s
+ """, locals(), back_as=dict)
transfers = db.all("""
SELECT *
FROM transfers
@@ -82,15 +88,20 @@
AND extract(year from timestamp) = %(year)s
""", locals(), back_as=dict)
- if not (exchanges or transfers):
+ if not (exchanges or payments or transfers):
return
- if transfers:
- yield dict(
- kind='totals',
- given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),
- received=sum(t['amount'] for t in transfers if t['tippee'] == username),
- )
+ if payments or transfers:
+ payments_given = sum([p['amount'] for p in payments if p['direction'] == 'to-team'])
+ payments_received = sum([p['amount'] for p in payments \
+ if p['direction'] == 'to-participant'])
+ transfers_given = sum(t['amount'] for t in transfers \
+ if t['tipper'] == username and t['context'] != 'take')
+ transfers_received = sum(t['amount'] for t in transfers if t['tippee'] == username)
+ yield dict( kind='totals'
+ , given=payments_given + transfers_given
+ , received=payments_received + transfers_received
+ )
payday_dates = db.all("""
SELECT ts_start::date
@@ -101,7 +112,7 @@
balance = get_end_of_year_balance(db, participant, year, current_year)
prev_date = None
get_timestamp = lambda e: e['timestamp']
- events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)
+ events = sorted(exchanges+payments+transfers, key=get_timestamp, reverse=True)
for event in events:
event['balance'] = balance
@@ -129,6 +140,13 @@
kind = 'credit'
if event['status'] != 'failed':
balance -= event['amount'] - event['fee']
+ elif 'direction' in event:
+ kind = 'payment'
+ if event['direction'] == 'to-participant':
+ balance -= event['amount']
+ else:
+ assert event['direction'] == 'to-team'
+ balance += event['amount']
else:
kind = 'transfer'
if event['tippee'] == username:
| {"golden_diff": "diff --git a/gratipay/utils/history.py b/gratipay/utils/history.py\n--- a/gratipay/utils/history.py\n+++ b/gratipay/utils/history.py\n@@ -75,6 +75,12 @@\n WHERE participant=%(username)s\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n+ payments = db.all(\"\"\"\n+ SELECT *\n+ FROM payments\n+ WHERE participant=%(username)s\n+ AND extract(year from timestamp) = %(year)s\n+ \"\"\", locals(), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n@@ -82,15 +88,20 @@\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n \n- if not (exchanges or transfers):\n+ if not (exchanges or payments or transfers):\n return\n \n- if transfers:\n- yield dict(\n- kind='totals',\n- given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),\n- received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n- )\n+ if payments or transfers:\n+ payments_given = sum([p['amount'] for p in payments if p['direction'] == 'to-team'])\n+ payments_received = sum([p['amount'] for p in payments \\\n+ if p['direction'] == 'to-participant'])\n+ transfers_given = sum(t['amount'] for t in transfers \\\n+ if t['tipper'] == username and t['context'] != 'take')\n+ transfers_received = sum(t['amount'] for t in transfers if t['tippee'] == username)\n+ yield dict( kind='totals'\n+ , given=payments_given + transfers_given\n+ , received=payments_received + transfers_received\n+ )\n \n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n@@ -101,7 +112,7 @@\n balance = get_end_of_year_balance(db, participant, year, current_year)\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n- events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n+ events = sorted(exchanges+payments+transfers, key=get_timestamp, reverse=True)\n for event in events:\n \n event['balance'] = balance\n@@ -129,6 +140,13 @@\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n+ elif 'direction' in event:\n+ kind = 'payment'\n+ if event['direction'] == 'to-participant':\n+ balance -= event['amount']\n+ else:\n+ assert event['direction'] == 'to-team'\n+ balance += event['amount']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n", "issue": "add 2.0 payments to history pages\n@ehmatthes at https://github.com/gratipay/gratipay.com/issues/3446#issuecomment-103163666:\n\n> I made an account for introtopython.org last week, and it was one of the teams that should have been included in last week's payday. But my balance is low enough that I didn't get a payout.\n> \n> How will I know if my accounts are set up correctly, ie if my team was included in last week's payday? I don't know how to view my team account, or if it's even possible to view a profile page for that team yet. introtopython is not included in my teams page. 
My individual history doesn't show anything after payday 152.\n#### Notify:\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2233\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2358\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2405\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2514\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2468\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2354\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2540\n- [x] https://gratipay.freshdesk.com/helpdesk/tickets/2450\n\n", "before_files": [{"content": "from datetime import datetime\nfrom decimal import Decimal\n\nfrom aspen import Response\nfrom psycopg2 import IntegrityError\n\n\ndef get_end_of_year_balance(db, participant, year, current_year):\n if year == current_year:\n return participant.balance\n start = participant.claimed_time or participant.ctime\n if year < start.year:\n return Decimal('0.00')\n\n balance = db.one(\"\"\"\n SELECT balance\n FROM balances_at\n WHERE participant = %s\n AND \"at\" = %s\n \"\"\", (participant.id, datetime(year+1, 1, 1)))\n if balance is not None:\n return balance\n\n username = participant.username\n start_balance = get_end_of_year_balance(db, participant, year-1, current_year)\n delta = db.one(\"\"\"\n SELECT (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount > 0\n AND (status is null OR status = 'succeeded')\n ) + (\n SELECT COALESCE(sum(amount-fee), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount < 0\n AND (status is null OR status <> 'failed')\n ) + (\n SELECT COALESCE(sum(-amount), 0) AS a\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) + (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM transfers\n WHERE tippee = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) AS delta\n \"\"\", locals())\n balance = start_balance + delta\n try:\n db.run(\"\"\"\n INSERT INTO balances_at\n (participant, at, balance)\n VALUES (%s, %s, %s)\n \"\"\", (participant.id, datetime(year+1, 1, 1), balance))\n except IntegrityError:\n pass\n return balance\n\n\ndef iter_payday_events(db, participant, year=None):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n current_year = datetime.utcnow().year\n year = year or current_year\n\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%(username)s\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE (tipper=%(username)s OR tippee=%(username)s)\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or transfers):\n return\n\n if transfers:\n yield dict(\n kind='totals',\n given=sum(t['amount'] for t in transfers if t['tipper'] == username and t['context'] != 'take'),\n received=sum(t['amount'] for t in transfers if t['tippee'] == username),\n )\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = get_end_of_year_balance(db, participant, year, current_year)\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if 
event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance=balance)\n\n\ndef export_history(participant, year, mode, key, back_as='namedtuple', require_key=False):\n db = participant.db\n params = dict(username=participant.username, year=year)\n out = {}\n if mode == 'aggregate':\n out['given'] = lambda: db.all(\"\"\"\n SELECT tippee, sum(amount) AS amount\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n GROUP BY tippee\n \"\"\", params, back_as=back_as)\n out['taken'] = lambda: db.all(\"\"\"\n SELECT tipper AS team, sum(amount) AS amount\n FROM transfers\n WHERE tippee = %(username)s\n AND context = 'take'\n AND extract(year from timestamp) = %(year)s\n GROUP BY tipper\n \"\"\", params, back_as=back_as)\n else:\n out['exchanges'] = lambda: db.all(\"\"\"\n SELECT timestamp, amount, fee, status, note\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['given'] = lambda: db.all(\"\"\"\n SELECT timestamp, tippee, amount, context\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['taken'] = lambda: db.all(\"\"\"\n SELECT timestamp, tipper AS team, amount\n FROM transfers\n WHERE tippee = %(username)s\n AND context = 'take'\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['received'] = lambda: db.all(\"\"\"\n SELECT timestamp, amount, context\n FROM transfers\n WHERE tippee = %(username)s\n AND context NOT IN ('take', 'take-over')\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n\n if key:\n try:\n return out[key]()\n except KeyError:\n raise Response(400, \"bad key `%s`\" % key)\n elif require_key:\n raise Response(400, \"missing `key` parameter\")\n else:\n return {k: v() for k, v in out.items()}\n", "path": "gratipay/utils/history.py"}], "after_files": [{"content": "from datetime import datetime\nfrom decimal import Decimal\n\nfrom aspen import Response\nfrom psycopg2 import IntegrityError\n\n\ndef get_end_of_year_balance(db, participant, year, current_year):\n if year == current_year:\n return participant.balance\n start = participant.claimed_time or participant.ctime\n if year < start.year:\n return Decimal('0.00')\n\n balance = db.one(\"\"\"\n SELECT balance\n FROM balances_at\n WHERE participant = %s\n AND \"at\" = %s\n \"\"\", (participant.id, datetime(year+1, 1, 1)))\n if balance is not None:\n return balance\n\n username = participant.username\n start_balance = get_end_of_year_balance(db, 
participant, year-1, current_year)\n delta = db.one(\"\"\"\n SELECT (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount > 0\n AND (status is null OR status = 'succeeded')\n ) + (\n SELECT COALESCE(sum(amount-fee), 0) AS a\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n AND amount < 0\n AND (status is null OR status <> 'failed')\n ) + (\n SELECT COALESCE(sum(-amount), 0) AS a\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) + (\n SELECT COALESCE(sum(amount), 0) AS a\n FROM transfers\n WHERE tippee = %(username)s\n AND extract(year from timestamp) = %(year)s\n ) AS delta\n \"\"\", locals())\n balance = start_balance + delta\n try:\n db.run(\"\"\"\n INSERT INTO balances_at\n (participant, at, balance)\n VALUES (%s, %s, %s)\n \"\"\", (participant.id, datetime(year+1, 1, 1), balance))\n except IntegrityError:\n pass\n return balance\n\n\ndef iter_payday_events(db, participant, year=None):\n \"\"\"Yields payday events for the given participant.\n \"\"\"\n current_year = datetime.utcnow().year\n year = year or current_year\n\n username = participant.username\n exchanges = db.all(\"\"\"\n SELECT *\n FROM exchanges\n WHERE participant=%(username)s\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n payments = db.all(\"\"\"\n SELECT *\n FROM payments\n WHERE participant=%(username)s\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n transfers = db.all(\"\"\"\n SELECT *\n FROM transfers\n WHERE (tipper=%(username)s OR tippee=%(username)s)\n AND extract(year from timestamp) = %(year)s\n \"\"\", locals(), back_as=dict)\n\n if not (exchanges or payments or transfers):\n return\n\n if payments or transfers:\n payments_given = sum([p['amount'] for p in payments if p['direction'] == 'to-team'])\n payments_received = sum([p['amount'] for p in payments \\\n if p['direction'] == 'to-participant'])\n transfers_given = sum(t['amount'] for t in transfers \\\n if t['tipper'] == username and t['context'] != 'take')\n transfers_received = sum(t['amount'] for t in transfers if t['tippee'] == username)\n yield dict( kind='totals'\n , given=payments_given + transfers_given\n , received=payments_received + transfers_received\n )\n\n payday_dates = db.all(\"\"\"\n SELECT ts_start::date\n FROM paydays\n ORDER BY ts_start ASC\n \"\"\")\n\n balance = get_end_of_year_balance(db, participant, year, current_year)\n prev_date = None\n get_timestamp = lambda e: e['timestamp']\n events = sorted(exchanges+payments+transfers, key=get_timestamp, reverse=True)\n for event in events:\n\n event['balance'] = balance\n\n event_date = event['timestamp'].date()\n if event_date != prev_date:\n if prev_date:\n yield dict(kind='day-close', balance=balance)\n day_open = dict(kind='day-open', date=event_date, balance=balance)\n if payday_dates:\n while payday_dates and payday_dates[-1] > event_date:\n payday_dates.pop()\n payday_date = payday_dates[-1] if payday_dates else None\n if event_date == payday_date:\n day_open['payday_number'] = len(payday_dates) - 1\n yield day_open\n prev_date = event_date\n\n if 'fee' in event:\n if event['amount'] > 0:\n kind = 'charge'\n if event['status'] in (None, 'succeeded'):\n balance -= event['amount']\n else:\n kind = 'credit'\n if event['status'] != 'failed':\n balance -= event['amount'] - event['fee']\n elif 'direction' in event:\n kind = 
'payment'\n if event['direction'] == 'to-participant':\n balance -= event['amount']\n else:\n assert event['direction'] == 'to-team'\n balance += event['amount']\n else:\n kind = 'transfer'\n if event['tippee'] == username:\n balance -= event['amount']\n else:\n balance += event['amount']\n event['kind'] = kind\n\n yield event\n\n yield dict(kind='day-close', balance=balance)\n\n\ndef export_history(participant, year, mode, key, back_as='namedtuple', require_key=False):\n db = participant.db\n params = dict(username=participant.username, year=year)\n out = {}\n if mode == 'aggregate':\n out['given'] = lambda: db.all(\"\"\"\n SELECT tippee, sum(amount) AS amount\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n GROUP BY tippee\n \"\"\", params, back_as=back_as)\n out['taken'] = lambda: db.all(\"\"\"\n SELECT tipper AS team, sum(amount) AS amount\n FROM transfers\n WHERE tippee = %(username)s\n AND context = 'take'\n AND extract(year from timestamp) = %(year)s\n GROUP BY tipper\n \"\"\", params, back_as=back_as)\n else:\n out['exchanges'] = lambda: db.all(\"\"\"\n SELECT timestamp, amount, fee, status, note\n FROM exchanges\n WHERE participant = %(username)s\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['given'] = lambda: db.all(\"\"\"\n SELECT timestamp, tippee, amount, context\n FROM transfers\n WHERE tipper = %(username)s\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['taken'] = lambda: db.all(\"\"\"\n SELECT timestamp, tipper AS team, amount\n FROM transfers\n WHERE tippee = %(username)s\n AND context = 'take'\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n out['received'] = lambda: db.all(\"\"\"\n SELECT timestamp, amount, context\n FROM transfers\n WHERE tippee = %(username)s\n AND context NOT IN ('take', 'take-over')\n AND extract(year from timestamp) = %(year)s\n ORDER BY timestamp ASC\n \"\"\", params, back_as=back_as)\n\n if key:\n try:\n return out[key]()\n except KeyError:\n raise Response(400, \"bad key `%s`\" % key)\n elif require_key:\n raise Response(400, \"missing `key` parameter\")\n else:\n return {k: v() for k, v in out.items()}\n", "path": "gratipay/utils/history.py"}]} | 2,722 | 660 |
gh_patches_debug_20942 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-3873 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enhance `ContrastiveLoss` to avoid warning
Calling `ContrastiveLoss` will show a warning message:
```
To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
```
Simple code to reproduce this issue:
```
from monai.losses import ContrastiveLoss
import torch
inp = torch.randn([2, 10])
target = torch.randn([2, 10])
loss = ContrastiveLoss(batch_size=2)
loss(inp, target)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/losses/contrastive.py`
Content:
```
1 # Copyright (c) MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import torch
13 from torch.nn import functional as F
14 from torch.nn.modules.loss import _Loss
15
16 from monai.utils import deprecated_arg
17
18
19 class ContrastiveLoss(_Loss):
20
21 """
22 Compute the Contrastive loss defined in:
23
24 Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International
25 conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)
26
27 Adapted from:
28 https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5
29
30 """
31
32 @deprecated_arg(name="reduction", since="0.8", msg_suffix="`reduction` is no longer supported.")
33 def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction="sum") -> None:
34 """
35 Args:
36 temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.
37 batch_size: The number of samples.
38
39 Raises:
40 ValueError: When an input of dimension length > 2 is passed
41 ValueError: When input and target are of different shapes
42
43 .. deprecated:: 0.8.0
44
45 `reduction` is no longer supported.
46
47 """
48 super().__init__()
49
50 self.batch_size = batch_size
51 self.temperature = temperature
52
53 def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
54 """
55 Args:
56 input: the shape should be B[F].
57 target: the shape should be B[F].
58 """
59 if len(target.shape) > 2 or len(input.shape) > 2:
60 raise ValueError(
61 f"Either target or input has dimensions greater than 2 where target "
62 f"shape is ({target.shape}) and input shape is ({input.shape})"
63 )
64
65 if target.shape != input.shape:
66 raise ValueError(f"ground truth has differing shape ({target.shape}) from input ({input.shape})")
67
68 temperature_tensor = torch.tensor(self.temperature).to(input.device)
69
70 norm_i = F.normalize(input, dim=1)
71 norm_j = F.normalize(target, dim=1)
72
73 negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)
74 negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)
75 negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)
76
77 repr = torch.cat([norm_i, norm_j], dim=0)
78 sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)
79 sim_ij = torch.diag(sim_matrix, self.batch_size)
80 sim_ji = torch.diag(sim_matrix, -self.batch_size)
81
82 positives = torch.cat([sim_ij, sim_ji], dim=0)
83 nominator = torch.exp(positives / temperature_tensor)
84 denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)
85
86 loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))
87
88 return torch.sum(loss_partial) / (2 * self.batch_size)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/monai/losses/contrastive.py b/monai/losses/contrastive.py
--- a/monai/losses/contrastive.py
+++ b/monai/losses/contrastive.py
@@ -65,14 +65,13 @@
if target.shape != input.shape:
raise ValueError(f"ground truth has differing shape ({target.shape}) from input ({input.shape})")
- temperature_tensor = torch.tensor(self.temperature).to(input.device)
+ temperature_tensor = torch.as_tensor(self.temperature).to(input.device)
norm_i = F.normalize(input, dim=1)
norm_j = F.normalize(target, dim=1)
negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)
- negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)
- negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)
+ negatives_mask = torch.clone(negatives_mask.type(torch.float)).to(input.device)
repr = torch.cat([norm_i, norm_j], dim=0)
sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)
| {"golden_diff": "diff --git a/monai/losses/contrastive.py b/monai/losses/contrastive.py\n--- a/monai/losses/contrastive.py\n+++ b/monai/losses/contrastive.py\n@@ -65,14 +65,13 @@\n if target.shape != input.shape:\n raise ValueError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n \n- temperature_tensor = torch.tensor(self.temperature).to(input.device)\n+ temperature_tensor = torch.as_tensor(self.temperature).to(input.device)\n \n norm_i = F.normalize(input, dim=1)\n norm_j = F.normalize(target, dim=1)\n \n negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)\n- negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)\n- negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)\n+ negatives_mask = torch.clone(negatives_mask.type(torch.float)).to(input.device)\n \n repr = torch.cat([norm_i, norm_j], dim=0)\n sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)\n", "issue": "Enhance `ContrastiveLoss` to avoid warning\nCall ContrastiveLoss will see a warning message:\r\n```\r\nTo copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n```\r\n\r\nSimple code to reproduce this issue:\r\n```\r\nfrom monai.losses import ContrastiveLoss\r\nimport torch\r\n\r\ninp = torch.randn([2, 10])\r\ntarget = torch.randn([2, 10])\r\nloss = ContrastiveLoss(batch_size=2)\r\nloss(inp, target)\r\n```\n", "before_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport torch\nfrom torch.nn import functional as F\nfrom torch.nn.modules.loss import _Loss\n\nfrom monai.utils import deprecated_arg\n\n\nclass ContrastiveLoss(_Loss):\n\n \"\"\"\n Compute the Contrastive loss defined in:\n\n Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International\n conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)\n\n Adapted from:\n https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5\n\n \"\"\"\n\n @deprecated_arg(name=\"reduction\", since=\"0.8\", msg_suffix=\"`reduction` is no longer supported.\")\n def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction=\"sum\") -> None:\n \"\"\"\n Args:\n temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.\n batch_size: The number of samples.\n\n Raises:\n ValueError: When an input of dimension length > 2 is passed\n ValueError: When input and target are of different shapes\n\n .. 
deprecated:: 0.8.0\n\n `reduction` is no longer supported.\n\n \"\"\"\n super().__init__()\n\n self.batch_size = batch_size\n self.temperature = temperature\n\n def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Args:\n input: the shape should be B[F].\n target: the shape should be B[F].\n \"\"\"\n if len(target.shape) > 2 or len(input.shape) > 2:\n raise ValueError(\n f\"Either target or input has dimensions greater than 2 where target \"\n f\"shape is ({target.shape}) and input shape is ({input.shape})\"\n )\n\n if target.shape != input.shape:\n raise ValueError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n\n temperature_tensor = torch.tensor(self.temperature).to(input.device)\n\n norm_i = F.normalize(input, dim=1)\n norm_j = F.normalize(target, dim=1)\n\n negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)\n negatives_mask = torch.tensor(negatives_mask, dtype=torch.float)\n negatives_mask = torch.clone(torch.as_tensor(negatives_mask)).to(input.device)\n\n repr = torch.cat([norm_i, norm_j], dim=0)\n sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)\n sim_ij = torch.diag(sim_matrix, self.batch_size)\n sim_ji = torch.diag(sim_matrix, -self.batch_size)\n\n positives = torch.cat([sim_ij, sim_ji], dim=0)\n nominator = torch.exp(positives / temperature_tensor)\n denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)\n\n loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))\n\n return torch.sum(loss_partial) / (2 * self.batch_size)\n", "path": "monai/losses/contrastive.py"}], "after_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport torch\nfrom torch.nn import functional as F\nfrom torch.nn.modules.loss import _Loss\n\nfrom monai.utils import deprecated_arg\n\n\nclass ContrastiveLoss(_Loss):\n\n \"\"\"\n Compute the Contrastive loss defined in:\n\n Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International\n conference on machine learning. PMLR, 2020. (http://proceedings.mlr.press/v119/chen20j.html)\n\n Adapted from:\n https://github.com/Sara-Ahmed/SiT/blob/1aacd6adcd39b71efc903d16b4e9095b97dda76f/losses.py#L5\n\n \"\"\"\n\n @deprecated_arg(name=\"reduction\", since=\"0.8\", msg_suffix=\"`reduction` is no longer supported.\")\n def __init__(self, temperature: float = 0.5, batch_size: int = 1, reduction=\"sum\") -> None:\n \"\"\"\n Args:\n temperature: Can be scaled between 0 and 1 for learning from negative samples, ideally set to 0.5.\n batch_size: The number of samples.\n\n Raises:\n ValueError: When an input of dimension length > 2 is passed\n ValueError: When input and target are of different shapes\n\n .. 
deprecated:: 0.8.0\n\n `reduction` is no longer supported.\n\n \"\"\"\n super().__init__()\n\n self.batch_size = batch_size\n self.temperature = temperature\n\n def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Args:\n input: the shape should be B[F].\n target: the shape should be B[F].\n \"\"\"\n if len(target.shape) > 2 or len(input.shape) > 2:\n raise ValueError(\n f\"Either target or input has dimensions greater than 2 where target \"\n f\"shape is ({target.shape}) and input shape is ({input.shape})\"\n )\n\n if target.shape != input.shape:\n raise ValueError(f\"ground truth has differing shape ({target.shape}) from input ({input.shape})\")\n\n temperature_tensor = torch.as_tensor(self.temperature).to(input.device)\n\n norm_i = F.normalize(input, dim=1)\n norm_j = F.normalize(target, dim=1)\n\n negatives_mask = ~torch.eye(self.batch_size * 2, self.batch_size * 2, dtype=torch.bool)\n negatives_mask = torch.clone(negatives_mask.type(torch.float)).to(input.device)\n\n repr = torch.cat([norm_i, norm_j], dim=0)\n sim_matrix = F.cosine_similarity(repr.unsqueeze(1), repr.unsqueeze(0), dim=2)\n sim_ij = torch.diag(sim_matrix, self.batch_size)\n sim_ji = torch.diag(sim_matrix, -self.batch_size)\n\n positives = torch.cat([sim_ij, sim_ji], dim=0)\n nominator = torch.exp(positives / temperature_tensor)\n denominator = negatives_mask * torch.exp(sim_matrix / temperature_tensor)\n\n loss_partial = -torch.log(nominator / torch.sum(denominator, dim=1))\n\n return torch.sum(loss_partial) / (2 * self.batch_size)\n", "path": "monai/losses/contrastive.py"}]} | 1,414 | 270 |
gh_patches_debug_41944 | rasdani/github-patches | git_diff | wagtail__wagtail-11666 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent use of model verbose_name in permissions UI
### Issue Summary
The object permissions UI doesn't show a model's [`verbose_name`](https://docs.djangoproject.com/en/4.2/ref/models/options/#verbose-name) if it differs from the model's name when it was created.
### Steps to Reproduce
Start a new project with `wagtail start myproject` and create a simple model like the following:
```py
from django.db import models
from wagtail.snippets.models import register_snippet
@register_snippet
class MyModel(models.Model):
text = models.TextField()
```
In the group edit view, permissions render like this; note "My model" and "Can view":

Now, give the model a different `verbose_name`:
```py
@register_snippet
class MyModel(models.Model):
text = models.TextField()
class Meta:
verbose_name = "Something else"
```
Now, the permissions render in a surprising way; note "Something else" and "Can view my model":

As a user, I would expect to see just "Can view" here, the same way it is shown for every other object.
This behavior seems to come from this line that defines the permission "name":
https://github.com/wagtail/wagtail/blob/a8bc03dd8aad9d15fd4d8e84ace1bc89cbe96053/wagtail/users/templatetags/wagtailusers_tags.py#L101-L103
This code assumes that a model's content type's name (`perm.content_type.name`) will exactly match the name on the permission object (`perm.name`). This is true as long as the model's verbose name hasn't been changed, but, [per the docs](https://docs.djangoproject.com/en/4.2/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType.name), `content_type.name` "is taken from the `verbose_name` attribute of the model". So if a model's verbose name is changed, that change will show up when you look at its content type.
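To see the mismatch directly, a quick Django shell session (hypothetical app label, reusing the `MyModel` example above) might look like this:
```py
# Hypothetical shell session; the "home" app label is an assumption based on the
# default `wagtail start` project layout.
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType

ct = ContentType.objects.get(app_label="home", model="mymodel")
print(ct.name)   # "Something else" -- tracks the model's *current* verbose_name

perm = Permission.objects.get(content_type=ct, codename="view_mymodel")
print(perm.name) # "Can view my model" -- frozen when the permission row was created
```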
The issue is that when a model is first created, its automatically-generated permissions are given names that contain the model's name at that time. For example, the above `MyModel` class generated permissions like this:
```
110|28|add_mymodel|Can add my model
111|28|change_mymodel|Can change my model
112|28|delete_mymodel|Can delete my model
113|28|view_mymodel|Can view my model
```
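This is a consequence of how Django creates the default permissions: each label is built from the verbose name at the moment `migrate` first creates the permission row, and existing rows are only matched by codename afterwards, never renamed. Roughly (a simplified sketch of the naming logic, not Django's exact source):
```py
# Simplified sketch of how django.contrib.auth.management derives the default
# permission labels; the real logic lives in _get_builtin_permissions().
def builtin_permissions(opts):
    return [
        (f"{action}_{opts.model_name}", f"Can {action} {opts.verbose_name_raw}")
        for action in opts.default_permissions  # ("add", "change", "delete", "view")
    ]
```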
In the above example, the code assumes that since the content type reports its name as `Something else`, the permission's name will be `Can view something else`, but this isn't a valid assumption. The permission's name is still what it was when it was created, `Can view my model`.
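In code terms, the suffix-stripping at the lines linked above becomes a no-op; a minimal illustration with the two values hard-coded:
```py
import re

content_type_name = "Something else"   # ContentType.name follows the current verbose_name
permission_name = "Can view my model"  # Permission.name keeps its original label

# The suffix never matches, so nothing is stripped and the stale model name leaks through.
print(re.sub(f"{content_type_name}$", "", permission_name, flags=re.I).strip())
# -> "Can view my model"
```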
To fix this, the code could look up the model objects corresponding to the permission content types and use those to do the substitution, maybe something like this:
```diff
diff --git a/wagtail/users/templatetags/wagtailusers_tags.py b/wagtail/users/templatetags/wagtailusers_tags.py
index c188425ad0..3da81fe0dd 100644
--- a/wagtail/users/templatetags/wagtailusers_tags.py
+++ b/wagtail/users/templatetags/wagtailusers_tags.py
@@ -95,11 +95,15 @@ def format_permissions(permission_bound_field):
}
else:
extra_perms_exist["custom"] = True
+ perm_model_class = perm.content_type.model_class()
custom_perms.append(
{
"perm": perm,
"name": re.sub(
- f"{perm.content_type.name}$", "", perm.name, flags=re.I
+ f"{perm_model_class._meta.model_name}$", "", perm.name, flags=re.I
).strip(),
"selected": checkbox.data["selected"],
}
```
### Technical details
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
- Python version: 3.11.1
- Django version: 4.0.10
- Wagtail version: 5.2a0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/users/templatetags/wagtailusers_tags.py`
Content:
```
1 import itertools
2 from collections import defaultdict
3
4 from django import template
5
6 from wagtail import hooks
7 from wagtail.users.permission_order import CONTENT_TYPE_ORDER
8
9 register = template.Library()
10
11
12 @register.inclusion_tag("wagtailusers/groups/includes/formatted_permissions.html")
13 def format_permissions(permission_bound_field):
14 """
15 Given a bound field with a queryset of Permission objects - which must be using
16 the CheckboxSelectMultiple widget - construct a list of dictionaries for 'objects':
17
18 'objects': [
19 {
20 'object': name_of_some_content_object,
21 'add': checkbox,
22 'change': checkbox,
23 'delete': checkbox,
24 'publish': checkbox, # only if the model extends DraftStateMixin
25 'custom': list_of_checkboxes_for_custom_permissions
26 },
27 ]
28
29 and a list of other permissions:
30
31 'others': [
32 (any_non_add_change_delete_permission, checkbox),
33 ]
34
35 (where 'checkbox' is an object with a tag() method that renders the checkbox as HTML;
36 this is a BoundWidget on Django >=1.11)
37
38 - and returns a table template formatted with this list.
39
40 """
41 permissions = permission_bound_field.field._queryset
42 # get a distinct and ordered list of the content types that these permissions relate to.
43 # relies on Permission model default ordering, dict.fromkeys() retaining that order
44 # from the queryset, and the stability of sorted().
45 content_type_ids = sorted(
46 dict.fromkeys(permissions.values_list("content_type_id", flat=True)),
47 key=lambda ct: CONTENT_TYPE_ORDER.get(ct, float("inf")),
48 )
49
50 # iterate over permission_bound_field to build a lookup of individual renderable
51 # checkbox objects
52 # checkbox.data['value'] gives a ModelChoiceIteratorValue
53 checkboxes_by_id = {
54 int(checkbox.data["value"].value): checkbox
55 for checkbox in permission_bound_field
56 }
57
58 object_perms = []
59 other_perms = []
60
61 # Permissions that are known by Wagtail, to be shown under their own columns.
62 # Other permissions will be shown under the "custom permissions" column.
63 main_permission_names = ["add", "change", "delete", "publish", "lock", "unlock"]
64
65 # Only show the columns for these permissions if any of the model has them.
66 extra_perms_exist = {
67 "publish": False,
68 "lock": False,
69 "unlock": False,
70 "custom": False,
71 }
72 # Batch the permission query for all content types, then group by content type
73 # (instead of querying permissions for each content type separately)
74 content_perms_by_ct_id = defaultdict(list)
75 permissions = permissions.filter(content_type_id__in=content_type_ids)
76 for permission in permissions:
77 content_perms_by_ct_id[permission.content_type_id].append(permission)
78
79 # Iterate using the sorted content_type_ids
80 for ct_id in content_type_ids:
81 content_perms = content_perms_by_ct_id[ct_id]
82 content_perms_dict = {}
83 custom_perms = []
84
85 if content_perms[0].content_type.name == "admin":
86 perm = content_perms[0]
87 other_perms.append((perm, checkboxes_by_id[perm.id]))
88 continue
89
90 for perm in content_perms:
91 content_perms_dict["object"] = perm.content_type.name
92 checkbox = checkboxes_by_id[perm.id]
93 # identify the main categories of permission, and assign to
94 # the relevant dict key, else bung in the 'custom_perms' list
95 permission_action = perm.codename.split("_", maxsplit=1)
96 permission_action = permission_action[permission_action[0].lower() == "can"]
97 permission_action = permission_action.rsplit(maxsplit=1)[0]
98
99 if permission_action in main_permission_names:
100 if permission_action in extra_perms_exist:
101 extra_perms_exist[permission_action] = True
102 content_perms_dict[permission_action] = {
103 "perm": perm,
104 "checkbox": checkbox,
105 }
106 else:
107 extra_perms_exist["custom"] = True
108 custom_perms.append(
109 {
110 "perm": perm,
111 "name": f"Can {permission_action}",
112 "selected": checkbox.data["selected"],
113 }
114 )
115
116 content_perms_dict["custom"] = custom_perms
117 object_perms.append(content_perms_dict)
118 return {
119 "object_perms": object_perms,
120 "other_perms": other_perms,
121 "extra_perms_exist": extra_perms_exist,
122 }
123
124
125 @register.inclusion_tag("wagtailadmin/shared/buttons.html", takes_context=True)
126 def user_listing_buttons(context, user):
127 button_hooks = hooks.get_hooks("register_user_listing_buttons")
128 buttons = sorted(
129 itertools.chain.from_iterable(hook(context, user) for hook in button_hooks)
130 )
131 return {"user": user, "buttons": buttons}
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/users/templatetags/wagtailusers_tags.py b/wagtail/users/templatetags/wagtailusers_tags.py
--- a/wagtail/users/templatetags/wagtailusers_tags.py
+++ b/wagtail/users/templatetags/wagtailusers_tags.py
@@ -2,6 +2,9 @@
from collections import defaultdict
from django import template
+from django.contrib.auth import get_permission_codename
+from django.contrib.auth.models import Permission
+from django.utils.text import camel_case_to_spaces
from wagtail import hooks
from wagtail.users.permission_order import CONTENT_TYPE_ORDER
@@ -9,6 +12,45 @@
register = template.Library()
+def normalize_permission_label(permission: Permission):
+ """
+ Strip model name from the end of the label, e.g. "Can deliver pizza" for a
+ Pizza model becomes "Can deliver". For permissions in the model's
+ Meta.default_permissions with default labels, also replace underscores
+ with spaces.
+
+ This is used to display custom model permissions in the admin.
+
+ See https://github.com/wagtail/wagtail/issues/10982.
+ """
+ label = permission.name
+ content_type = permission.content_type
+ model = content_type.model_class()
+ verbose_name = default_verbose_name = content_type.name
+
+ if model:
+ default_verbose_name = camel_case_to_spaces(model._meta.object_name)
+
+ # If it's in default_permissions and the label matches Django's default
+ # label, remove the model name from the end of the label. Also replace
+ # underscores with spaces, as Django uses the action internal name as-is
+ # for the permission label, which means it tends to be in snake_case.
+ for action in model._meta.default_permissions:
+ default_codename = get_permission_codename(action, model._meta)
+ is_default = permission.codename == default_codename
+ if is_default and permission.name.startswith(f"Can {action}"):
+ return f"Can {action.replace('_', ' ')}"
+
+ # For all other cases (including custom permissions), try to remove the
+ # verbose name from the end of the label. This only works if the label
+ # matches the current verbose name or Django's default verbose name.
+ for name in (default_verbose_name, verbose_name):
+ if label.lower().endswith(name.lower()):
+ return label[: -len(name)].strip()
+
+ return label
+
+
@register.inclusion_tag("wagtailusers/groups/includes/formatted_permissions.html")
def format_permissions(permission_bound_field):
"""
@@ -92,10 +134,7 @@
checkbox = checkboxes_by_id[perm.id]
# identify the main categories of permission, and assign to
# the relevant dict key, else bung in the 'custom_perms' list
- permission_action = perm.codename.split("_", maxsplit=1)
- permission_action = permission_action[permission_action[0].lower() == "can"]
- permission_action = permission_action.rsplit(maxsplit=1)[0]
-
+ permission_action = perm.codename.split("_")[0]
if permission_action in main_permission_names:
if permission_action in extra_perms_exist:
extra_perms_exist[permission_action] = True
@@ -105,10 +144,12 @@
}
else:
extra_perms_exist["custom"] = True
+ perm_name = normalize_permission_label(perm)
+
custom_perms.append(
{
"perm": perm,
- "name": f"Can {permission_action}",
+ "name": perm_name,
"selected": checkbox.data["selected"],
}
)
| {"golden_diff": "diff --git a/wagtail/users/templatetags/wagtailusers_tags.py b/wagtail/users/templatetags/wagtailusers_tags.py\n--- a/wagtail/users/templatetags/wagtailusers_tags.py\n+++ b/wagtail/users/templatetags/wagtailusers_tags.py\n@@ -2,6 +2,9 @@\n from collections import defaultdict\n \n from django import template\n+from django.contrib.auth import get_permission_codename\n+from django.contrib.auth.models import Permission\n+from django.utils.text import camel_case_to_spaces\n \n from wagtail import hooks\n from wagtail.users.permission_order import CONTENT_TYPE_ORDER\n@@ -9,6 +12,45 @@\n register = template.Library()\n \n \n+def normalize_permission_label(permission: Permission):\n+ \"\"\"\n+ Strip model name from the end of the label, e.g. \"Can deliver pizza\" for a\n+ Pizza model becomes \"Can deliver\". For permissions in the model's\n+ Meta.default_permissions with default labels, also replace underscores\n+ with spaces.\n+\n+ This is used to display custom model permissions in the admin.\n+\n+ See https://github.com/wagtail/wagtail/issues/10982.\n+ \"\"\"\n+ label = permission.name\n+ content_type = permission.content_type\n+ model = content_type.model_class()\n+ verbose_name = default_verbose_name = content_type.name\n+\n+ if model:\n+ default_verbose_name = camel_case_to_spaces(model._meta.object_name)\n+\n+ # If it's in default_permissions and the label matches Django's default\n+ # label, remove the model name from the end of the label. Also replace\n+ # underscores with spaces, as Django uses the action internal name as-is\n+ # for the permission label, which means it tends to be in snake_case.\n+ for action in model._meta.default_permissions:\n+ default_codename = get_permission_codename(action, model._meta)\n+ is_default = permission.codename == default_codename\n+ if is_default and permission.name.startswith(f\"Can {action}\"):\n+ return f\"Can {action.replace('_', ' ')}\"\n+\n+ # For all other cases (including custom permissions), try to remove the\n+ # verbose name from the end of the label. 
This only works if the label\n+ # matches the current verbose name or Django's default verbose name.\n+ for name in (default_verbose_name, verbose_name):\n+ if label.lower().endswith(name.lower()):\n+ return label[: -len(name)].strip()\n+\n+ return label\n+\n+\n @register.inclusion_tag(\"wagtailusers/groups/includes/formatted_permissions.html\")\n def format_permissions(permission_bound_field):\n \"\"\"\n@@ -92,10 +134,7 @@\n checkbox = checkboxes_by_id[perm.id]\n # identify the main categories of permission, and assign to\n # the relevant dict key, else bung in the 'custom_perms' list\n- permission_action = perm.codename.split(\"_\", maxsplit=1)\n- permission_action = permission_action[permission_action[0].lower() == \"can\"]\n- permission_action = permission_action.rsplit(maxsplit=1)[0]\n-\n+ permission_action = perm.codename.split(\"_\")[0]\n if permission_action in main_permission_names:\n if permission_action in extra_perms_exist:\n extra_perms_exist[permission_action] = True\n@@ -105,10 +144,12 @@\n }\n else:\n extra_perms_exist[\"custom\"] = True\n+ perm_name = normalize_permission_label(perm)\n+\n custom_perms.append(\n {\n \"perm\": perm,\n- \"name\": f\"Can {permission_action}\",\n+ \"name\": perm_name,\n \"selected\": checkbox.data[\"selected\"],\n }\n )\n", "issue": "Inconsistent use of model verbose_name in permissions UI\n### Issue Summary\r\n\r\nThe object permissions UI doesn't show a model's [`verbose_name`](https://docs.djangoproject.com/en/4.2/ref/models/options/#verbose-name) if it differs from the model's name when it was created.\r\n\r\n### Steps to Reproduce\r\n\r\nStart a new project with `wagtail start myproject` and create a simple model like the following:\r\n\r\n```py\r\nfrom django.db import models\r\n\r\nfrom wagtail.snippets.models import register_snippet\r\n\r\n@register_snippet\r\nclass MyModel(models.Model):\r\n text = models.TextField()\r\n```\r\n\r\nIn the group edit view, permissions render like this; note \"My model\" and \"Can view\":\r\n\r\n\r\n\r\nNow, give the model a different `verbose_name`:\r\n\r\n```py\r\n@register_snippet\r\nclass MyModel(models.Model):\r\n text = models.TextField()\r\n\r\n class Meta:\r\n verbose_name = \"Something else\"\r\n```\r\n\r\nNow, the permissions render in a surprising way; note \"Something else\" and \"Can view my model\":\r\n\r\n\r\n\r\nAs a user, I would expect to see \"Can view\" the way all other objects are typically listed.\r\n\r\nThis behavior seems to come from this line that defines the permission \"name\":\r\n\r\nhttps://github.com/wagtail/wagtail/blob/a8bc03dd8aad9d15fd4d8e84ace1bc89cbe96053/wagtail/users/templatetags/wagtailusers_tags.py#L101-L103\r\n\r\nThis code assumes that a model's content type's name (`perm.content_type.name`) will exactly match the name on the permission object (`perm.name`). This is true as long as the model's verbose name hasn't been changed, but, [per the docs](https://docs.djangoproject.com/en/4.2/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType.name), `content_type.name` \"is taken from the `verbose_name` attribute of the model\". So if a model's verbose name is changed, that change will show up when you look at its content type.\r\n\r\nThe issue is that when a model is first created, its automatically-generated permissions are given names that contain the model's name at that time. 
For example, the above `MyModel` class generated permissions like this:\r\n\r\n```\r\n110|28|add_mymodel|Can add my model\r\n111|28|change_mymodel|Can change my model\r\n112|28|delete_mymodel|Can delete my model\r\n113|28|view_mymodel|Can view my model\r\n```\r\n\r\nIn the above example, the code assumes that since the content type reports its name as `Something else`, the permission's name will be `Can view something else`, but this isn't a valid assumption. The permission's name is still what it was when it was created, `Can view my model`. \r\n\r\nTo fix this, the code could lookup the model objects corresponding to the permission content types and use those to do the substitution, maybe something like this:\r\n\r\n```diff\r\ndiff --git a/wagtail/users/templatetags/wagtailusers_tags.py b/wagtail/users/templatetags/wagtailusers_tags.py\r\nindex c188425ad0..3da81fe0dd 100644\r\n--- a/wagtail/users/templatetags/wagtailusers_tags.py\r\n+++ b/wagtail/users/templatetags/wagtailusers_tags.py\r\n@@ -95,11 +95,15 @@ def format_permissions(permission_bound_field):\r\n }\r\n else:\r\n extra_perms_exist[\"custom\"] = True\r\n+ perm_model_class = perm.content_type.model_class()\r\n custom_perms.append(\r\n {\r\n \"perm\": perm,\r\n \"name\": re.sub(\r\n- f\"{perm.content_type.name}$\", \"\", perm.name, flags=re.I\r\n+ f\"{perm_model_class._meta.model_name}$\", \"\", perm.name, flags=re.I\r\n ).strip(),\r\n \"selected\": checkbox.data[\"selected\"],\r\n }\r\n```\r\n\r\n### Technical details\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n- Python version: 3.11.1\r\n- Django version: 4.0.10\r\n- Wagtail version: 5.2a0\n", "before_files": [{"content": "import itertools\nfrom collections import defaultdict\n\nfrom django import template\n\nfrom wagtail import hooks\nfrom wagtail.users.permission_order import CONTENT_TYPE_ORDER\n\nregister = template.Library()\n\n\[email protected]_tag(\"wagtailusers/groups/includes/formatted_permissions.html\")\ndef format_permissions(permission_bound_field):\n \"\"\"\n Given a bound field with a queryset of Permission objects - which must be using\n the CheckboxSelectMultiple widget - construct a list of dictionaries for 'objects':\n\n 'objects': [\n {\n 'object': name_of_some_content_object,\n 'add': checkbox,\n 'change': checkbox,\n 'delete': checkbox,\n 'publish': checkbox, # only if the model extends DraftStateMixin\n 'custom': list_of_checkboxes_for_custom_permissions\n },\n ]\n\n and a list of other permissions:\n\n 'others': [\n (any_non_add_change_delete_permission, checkbox),\n ]\n\n (where 'checkbox' is an object with a tag() method that renders the checkbox as HTML;\n this is a BoundWidget on Django >=1.11)\n\n - and returns a table template formatted with this list.\n\n \"\"\"\n permissions = permission_bound_field.field._queryset\n # get a distinct and ordered list of the content types that these permissions relate to.\n # relies on Permission model default ordering, dict.fromkeys() retaining that order\n # from the queryset, and the stability of sorted().\n content_type_ids = sorted(\n dict.fromkeys(permissions.values_list(\"content_type_id\", flat=True)),\n key=lambda ct: CONTENT_TYPE_ORDER.get(ct, float(\"inf\")),\n )\n\n # iterate over permission_bound_field to build a lookup of individual renderable\n # checkbox objects\n # checkbox.data['value'] gives a ModelChoiceIteratorValue\n checkboxes_by_id = {\n int(checkbox.data[\"value\"].value): checkbox\n for checkbox in permission_bound_field\n 
}\n\n object_perms = []\n other_perms = []\n\n # Permissions that are known by Wagtail, to be shown under their own columns.\n # Other permissions will be shown under the \"custom permissions\" column.\n main_permission_names = [\"add\", \"change\", \"delete\", \"publish\", \"lock\", \"unlock\"]\n\n # Only show the columns for these permissions if any of the model has them.\n extra_perms_exist = {\n \"publish\": False,\n \"lock\": False,\n \"unlock\": False,\n \"custom\": False,\n }\n # Batch the permission query for all content types, then group by content type\n # (instead of querying permissions for each content type separately)\n content_perms_by_ct_id = defaultdict(list)\n permissions = permissions.filter(content_type_id__in=content_type_ids)\n for permission in permissions:\n content_perms_by_ct_id[permission.content_type_id].append(permission)\n\n # Iterate using the sorted content_type_ids\n for ct_id in content_type_ids:\n content_perms = content_perms_by_ct_id[ct_id]\n content_perms_dict = {}\n custom_perms = []\n\n if content_perms[0].content_type.name == \"admin\":\n perm = content_perms[0]\n other_perms.append((perm, checkboxes_by_id[perm.id]))\n continue\n\n for perm in content_perms:\n content_perms_dict[\"object\"] = perm.content_type.name\n checkbox = checkboxes_by_id[perm.id]\n # identify the main categories of permission, and assign to\n # the relevant dict key, else bung in the 'custom_perms' list\n permission_action = perm.codename.split(\"_\", maxsplit=1)\n permission_action = permission_action[permission_action[0].lower() == \"can\"]\n permission_action = permission_action.rsplit(maxsplit=1)[0]\n\n if permission_action in main_permission_names:\n if permission_action in extra_perms_exist:\n extra_perms_exist[permission_action] = True\n content_perms_dict[permission_action] = {\n \"perm\": perm,\n \"checkbox\": checkbox,\n }\n else:\n extra_perms_exist[\"custom\"] = True\n custom_perms.append(\n {\n \"perm\": perm,\n \"name\": f\"Can {permission_action}\",\n \"selected\": checkbox.data[\"selected\"],\n }\n )\n\n content_perms_dict[\"custom\"] = custom_perms\n object_perms.append(content_perms_dict)\n return {\n \"object_perms\": object_perms,\n \"other_perms\": other_perms,\n \"extra_perms_exist\": extra_perms_exist,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/buttons.html\", takes_context=True)\ndef user_listing_buttons(context, user):\n button_hooks = hooks.get_hooks(\"register_user_listing_buttons\")\n buttons = sorted(\n itertools.chain.from_iterable(hook(context, user) for hook in button_hooks)\n )\n return {\"user\": user, \"buttons\": buttons}\n", "path": "wagtail/users/templatetags/wagtailusers_tags.py"}], "after_files": [{"content": "import itertools\nfrom collections import defaultdict\n\nfrom django import template\nfrom django.contrib.auth import get_permission_codename\nfrom django.contrib.auth.models import Permission\nfrom django.utils.text import camel_case_to_spaces\n\nfrom wagtail import hooks\nfrom wagtail.users.permission_order import CONTENT_TYPE_ORDER\n\nregister = template.Library()\n\n\ndef normalize_permission_label(permission: Permission):\n \"\"\"\n Strip model name from the end of the label, e.g. \"Can deliver pizza\" for a\n Pizza model becomes \"Can deliver\". 
For permissions in the model's\n Meta.default_permissions with default labels, also replace underscores\n with spaces.\n\n This is used to display custom model permissions in the admin.\n\n See https://github.com/wagtail/wagtail/issues/10982.\n \"\"\"\n label = permission.name\n content_type = permission.content_type\n model = content_type.model_class()\n verbose_name = default_verbose_name = content_type.name\n\n if model:\n default_verbose_name = camel_case_to_spaces(model._meta.object_name)\n\n # If it's in default_permissions and the label matches Django's default\n # label, remove the model name from the end of the label. Also replace\n # underscores with spaces, as Django uses the action internal name as-is\n # for the permission label, which means it tends to be in snake_case.\n for action in model._meta.default_permissions:\n default_codename = get_permission_codename(action, model._meta)\n is_default = permission.codename == default_codename\n if is_default and permission.name.startswith(f\"Can {action}\"):\n return f\"Can {action.replace('_', ' ')}\"\n\n # For all other cases (including custom permissions), try to remove the\n # verbose name from the end of the label. This only works if the label\n # matches the current verbose name or Django's default verbose name.\n for name in (default_verbose_name, verbose_name):\n if label.lower().endswith(name.lower()):\n return label[: -len(name)].strip()\n\n return label\n\n\[email protected]_tag(\"wagtailusers/groups/includes/formatted_permissions.html\")\ndef format_permissions(permission_bound_field):\n \"\"\"\n Given a bound field with a queryset of Permission objects - which must be using\n the CheckboxSelectMultiple widget - construct a list of dictionaries for 'objects':\n\n 'objects': [\n {\n 'object': name_of_some_content_object,\n 'add': checkbox,\n 'change': checkbox,\n 'delete': checkbox,\n 'publish': checkbox, # only if the model extends DraftStateMixin\n 'custom': list_of_checkboxes_for_custom_permissions\n },\n ]\n\n and a list of other permissions:\n\n 'others': [\n (any_non_add_change_delete_permission, checkbox),\n ]\n\n (where 'checkbox' is an object with a tag() method that renders the checkbox as HTML;\n this is a BoundWidget on Django >=1.11)\n\n - and returns a table template formatted with this list.\n\n \"\"\"\n permissions = permission_bound_field.field._queryset\n # get a distinct and ordered list of the content types that these permissions relate to.\n # relies on Permission model default ordering, dict.fromkeys() retaining that order\n # from the queryset, and the stability of sorted().\n content_type_ids = sorted(\n dict.fromkeys(permissions.values_list(\"content_type_id\", flat=True)),\n key=lambda ct: CONTENT_TYPE_ORDER.get(ct, float(\"inf\")),\n )\n\n # iterate over permission_bound_field to build a lookup of individual renderable\n # checkbox objects\n # checkbox.data['value'] gives a ModelChoiceIteratorValue\n checkboxes_by_id = {\n int(checkbox.data[\"value\"].value): checkbox\n for checkbox in permission_bound_field\n }\n\n object_perms = []\n other_perms = []\n\n # Permissions that are known by Wagtail, to be shown under their own columns.\n # Other permissions will be shown under the \"custom permissions\" column.\n main_permission_names = [\"add\", \"change\", \"delete\", \"publish\", \"lock\", \"unlock\"]\n\n # Only show the columns for these permissions if any of the model has them.\n extra_perms_exist = {\n \"publish\": False,\n \"lock\": False,\n \"unlock\": False,\n \"custom\": False,\n }\n # 
Batch the permission query for all content types, then group by content type\n # (instead of querying permissions for each content type separately)\n content_perms_by_ct_id = defaultdict(list)\n permissions = permissions.filter(content_type_id__in=content_type_ids)\n for permission in permissions:\n content_perms_by_ct_id[permission.content_type_id].append(permission)\n\n # Iterate using the sorted content_type_ids\n for ct_id in content_type_ids:\n content_perms = content_perms_by_ct_id[ct_id]\n content_perms_dict = {}\n custom_perms = []\n\n if content_perms[0].content_type.name == \"admin\":\n perm = content_perms[0]\n other_perms.append((perm, checkboxes_by_id[perm.id]))\n continue\n\n for perm in content_perms:\n content_perms_dict[\"object\"] = perm.content_type.name\n checkbox = checkboxes_by_id[perm.id]\n # identify the main categories of permission, and assign to\n # the relevant dict key, else bung in the 'custom_perms' list\n permission_action = perm.codename.split(\"_\")[0]\n if permission_action in main_permission_names:\n if permission_action in extra_perms_exist:\n extra_perms_exist[permission_action] = True\n content_perms_dict[permission_action] = {\n \"perm\": perm,\n \"checkbox\": checkbox,\n }\n else:\n extra_perms_exist[\"custom\"] = True\n perm_name = normalize_permission_label(perm)\n\n custom_perms.append(\n {\n \"perm\": perm,\n \"name\": perm_name,\n \"selected\": checkbox.data[\"selected\"],\n }\n )\n\n content_perms_dict[\"custom\"] = custom_perms\n object_perms.append(content_perms_dict)\n return {\n \"object_perms\": object_perms,\n \"other_perms\": other_perms,\n \"extra_perms_exist\": extra_perms_exist,\n }\n\n\[email protected]_tag(\"wagtailadmin/shared/buttons.html\", takes_context=True)\ndef user_listing_buttons(context, user):\n button_hooks = hooks.get_hooks(\"register_user_listing_buttons\")\n buttons = sorted(\n itertools.chain.from_iterable(hook(context, user) for hook in button_hooks)\n )\n return {\"user\": user, \"buttons\": buttons}\n", "path": "wagtail/users/templatetags/wagtailusers_tags.py"}]} | 2,632 | 827 |