problem_id (string, 18-22 chars) | source (string, 1 value) | task_type (string, 1 value) | in_source_id (string, 13-58 chars) | prompt (string, 1.71k-18.9k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 465-23.6k chars) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_30535 | rasdani/github-patches | git_diff | pantsbuild__pants-13940 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`log_level_by_target` doesn't seem to work well for Python targets
**Describe the bug**
Using the following:
```
level = "warn"
log_levels_by_target = {"pants.goal.stats_aggregator" = "info"}
```
when running `./pants --stats-log ...` I don't see the stats being logged.
**Pants version**
`main`
**OS**
Ubuntu
**Additional info**
I suspect this is because `log_levels_by_target` isn't used to configure the Python logger, and therefore the python logger for any Python module is configured to use the level set by `level`.
This can be seen by inspecting the logger in `src/python/pants/goal/stats_aggregator.py` which is set to the level `WARN`. Therefore I assume the log call never gets forwarded to the Rust-implemented handler, and therefore `log_levels_by_target` isn't considered.
</issue>
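The root-cause hypothesis above can be illustrated with a small, self-contained sketch (not taken from the Pants codebase; only the `pants.goal.stats_aggregator` name and the option values come from the issue). Unless each target listed in `log_levels_by_target` has its level applied to the matching Python logger, that logger inherits the root `warn` level and filters the record out before any handler, Rust-backed or otherwise, ever sees it:

```python
import logging

logging.basicConfig(level=logging.WARNING)  # global `level = "warn"`

# Stand-in for the option value shown in the issue.
log_levels_by_target = {"pants.goal.stats_aggregator": logging.INFO}

# Without this loop the named logger stays at the inherited WARNING level,
# so the INFO record below would be dropped by the logger itself.
for target, level in log_levels_by_target.items():
    logging.getLogger(target).setLevel(level)

logging.getLogger("pants.goal.stats_aggregator").info("stats line gets through now")
```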
<code>
[start of src/python/pants/init/logging.py]
1 # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import http.client
7 import locale
8 import logging
9 import sys
10 from contextlib import contextmanager
11 from io import BufferedReader, TextIOWrapper
12 from logging import Formatter, Handler, LogRecord
13 from pathlib import PurePath
14 from typing import Iterator
15
16 import pants.util.logging as pants_logging
17 from pants.engine.internals import native_engine
18 from pants.option.option_value_container import OptionValueContainer
19 from pants.util.dirutil import safe_mkdir_for
20 from pants.util.docutil import doc_url
21 from pants.util.logging import LogLevel
22 from pants.util.strutil import strip_prefix
23
24 # Although logging supports the WARN level, its not documented and could conceivably be yanked.
25 # Since pants has supported 'warn' since inception, leave the 'warn' choice as-is but explicitly
26 # setup a 'WARN' logging level name that maps to 'WARNING'.
27 logging.addLevelName(logging.WARNING, "WARN")
28 logging.addLevelName(pants_logging.TRACE, "TRACE")
29
30
31 class _NativeHandler(Handler):
32 """This class is installed as a Python logging module handler (using the logging.addHandler
33 method) and proxies logs to the Rust logging infrastructure."""
34
35 def emit(self, record: LogRecord) -> None:
36 native_engine.write_log(self.format(record), record.levelno, record.name)
37
38 def flush(self) -> None:
39 native_engine.flush_log()
40
41
42 class _ExceptionFormatter(Formatter):
43 """Possibly render the stacktrace and possibly give debug hints, based on global options."""
44
45 def __init__(self, level: LogLevel, *, print_stacktrace: bool) -> None:
46 super().__init__(None)
47 self.level = level
48 self.print_stacktrace = print_stacktrace
49
50 def formatException(self, exc_info):
51 stacktrace = super().formatException(exc_info) if self.print_stacktrace else ""
52
53 debug_instructions = []
54 if not self.print_stacktrace:
55 debug_instructions.append("--print-stacktrace for more error details")
56 if self.level not in {LogLevel.DEBUG, LogLevel.TRACE}:
57 debug_instructions.append("-ldebug for more logs")
58 debug_instructions = (
59 f"Use {' and/or '.join(debug_instructions)}. " if debug_instructions else ""
60 )
61
62 return (
63 f"{stacktrace}\n\n{debug_instructions}\nSee {doc_url('troubleshooting')} for common "
64 f"issues.\nConsider reaching out for help: {doc_url('getting-help')}\n"
65 )
66
67
68 @contextmanager
69 def stdio_destination(stdin_fileno: int, stdout_fileno: int, stderr_fileno: int) -> Iterator[None]:
70 """Sets a destination for both logging and stdio: must be called after `initialize_stdio`.
71
72 After `initialize_stdio` and outside of this contextmanager, the default stdio destination is
73 the pants.log. But inside of this block, all engine "tasks"/@rules that are spawned will have
74 thread/task-local state that directs their IO to the given destination. When the contextmanager
75 exits all tasks will be restored to the default destination (regardless of whether they have
76 completed).
77 """
78 if not logging.getLogger(None).handlers:
79 raise AssertionError("stdio_destination should only be called after initialize_stdio.")
80
81 native_engine.stdio_thread_console_set(stdin_fileno, stdout_fileno, stderr_fileno)
82 try:
83 yield
84 finally:
85 native_engine.stdio_thread_console_clear()
86
87
88 def stdio_destination_use_color(use_color: bool) -> None:
89 """Sets a color mode for the current thread's destination.
90
91 True or false force color to be used or not used: None causes TTY detection to decide whether
92 color will be used.
93
94 NB: This method is independent from either `stdio_destination` or `initialize_stdio` because
95 we cannot decide whether to use color for a particular destination until it is open AND we have
96 parsed options for the relevant connection.
97 """
98 native_engine.stdio_thread_console_color_mode_set(use_color)
99
100
101 @contextmanager
102 def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:
103 """Installs a root Python logger that routes all logging through a Rust logger."""
104
105 def trace_fn(self, message, *args, **kwargs):
106 if self.isEnabledFor(LogLevel.TRACE.level):
107 self._log(LogLevel.TRACE.level, message, *args, **kwargs)
108
109 logging.Logger.trace = trace_fn # type: ignore[attr-defined]
110 logger = logging.getLogger(None)
111
112 def clear_logging_handlers():
113 handlers = tuple(logger.handlers)
114 for handler in handlers:
115 logger.removeHandler(handler)
116 return handlers
117
118 def set_logging_handlers(handlers):
119 for handler in handlers:
120 logger.addHandler(handler)
121
122 # Remove existing handlers, and restore them afterward.
123 handlers = clear_logging_handlers()
124 try:
125 # This routes warnings through our loggers instead of straight to raw stderr.
126 logging.captureWarnings(True)
127 handler = _NativeHandler()
128 exc_formatter = _ExceptionFormatter(level, print_stacktrace=print_stacktrace)
129 handler.setFormatter(exc_formatter)
130 logger.addHandler(handler)
131 level.set_level_for(logger)
132
133 if logger.isEnabledFor(LogLevel.TRACE.level):
134 http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]
135 requests_logger = logging.getLogger("requests.packages.urllib3")
136 LogLevel.TRACE.set_level_for(requests_logger)
137 requests_logger.propagate = True
138
139 yield
140 finally:
141 clear_logging_handlers()
142 set_logging_handlers(handlers)
143
144
145 @contextmanager
146 def initialize_stdio(global_bootstrap_options: OptionValueContainer) -> Iterator[None]:
147 """Mutates sys.std* and logging to route stdio for a Pants process to thread local destinations.
148
149 In this context, `sys.std*` and logging handlers will route through Rust code that uses
150 thread-local information to decide whether to write to a file, or to stdio file handles.
151
152 To control the stdio destination set by this method, use the `stdio_destination` context manager.
153
154 This is called in two different processes:
155 * PantsRunner, after it has determined that LocalPantsRunner will be running in process, and
156 immediately before setting a `stdio_destination` for the remainder of the run.
157 * PantsDaemon, immediately on startup. The process will then default to sending stdio to the log
158 until client connections arrive, at which point `stdio_destination` is used per-connection.
159 """
160 with initialize_stdio_raw(
161 global_bootstrap_options.level,
162 global_bootstrap_options.log_show_rust_3rdparty,
163 global_bootstrap_options.show_log_target,
164 _get_log_levels_by_target(global_bootstrap_options),
165 global_bootstrap_options.print_stacktrace,
166 global_bootstrap_options.ignore_warnings,
167 global_bootstrap_options.pants_workdir,
168 ):
169 yield
170
171
172 @contextmanager
173 def initialize_stdio_raw(
174 global_level: LogLevel,
175 log_show_rust_3rdparty: bool,
176 show_target: bool,
177 log_levels_by_target: dict[str, LogLevel],
178 print_stacktrace: bool,
179 ignore_warnings: list[str],
180 pants_workdir: str,
181 ) -> Iterator[None]:
182 literal_filters = []
183 regex_filters = []
184 for filt in ignore_warnings:
185 if filt.startswith("$regex$"):
186 regex_filters.append(strip_prefix(filt, "$regex$"))
187 else:
188 literal_filters.append(filt)
189
190 # Set the pants log destination.
191 log_path = str(pants_log_path(PurePath(pants_workdir)))
192 safe_mkdir_for(log_path)
193
194 # Initialize thread-local stdio, and replace sys.std* with proxies.
195 original_stdin, original_stdout, original_stderr = sys.stdin, sys.stdout, sys.stderr
196 try:
197 raw_stdin, sys.stdout, sys.stderr = native_engine.stdio_initialize(
198 global_level.level,
199 log_show_rust_3rdparty,
200 show_target,
201 {k: v.level for k, v in log_levels_by_target.items()},
202 tuple(literal_filters),
203 tuple(regex_filters),
204 log_path,
205 )
206 sys.stdin = TextIOWrapper(
207 BufferedReader(raw_stdin),
208 # NB: We set the default encoding explicitly to bypass logic in the TextIOWrapper
209 # constructor that would poke the underlying file (which is not valid until a
210 # `stdio_destination` is set).
211 encoding=locale.getpreferredencoding(False),
212 )
213
214 sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr
215 # Install a Python logger that will route through the Rust logger.
216 with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):
217 yield
218 finally:
219 sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr
220 sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr
221
222
223 def pants_log_path(workdir: PurePath) -> PurePath:
224 """Given the path of the workdir, returns the `pants.log` path."""
225 return workdir / "pants.log"
226
227
228 def _get_log_levels_by_target(
229 global_bootstrap_options: OptionValueContainer,
230 ) -> dict[str, LogLevel]:
231 raw_levels = global_bootstrap_options.log_levels_by_target
232 levels: dict[str, LogLevel] = {}
233 for key, value in raw_levels.items():
234 if not isinstance(key, str):
235 raise ValueError(
236 "Keys for log_domain_levels must be strings, but was given the key: {key} with type {type(key)}."
237 )
238 if not isinstance(value, str):
239 raise ValueError(
240 "Values for log_domain_levels must be strings, but was given the value: {value} with type {type(value)}."
241 )
242 log_level = LogLevel[value.upper()]
243 levels[key] = log_level
244 return levels
245
[end of src/python/pants/init/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/python/pants/init/logging.py b/src/python/pants/init/logging.py
--- a/src/python/pants/init/logging.py
+++ b/src/python/pants/init/logging.py
@@ -99,7 +99,9 @@
@contextmanager
-def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:
+def _python_logging_setup(
+ level: LogLevel, log_levels_by_target: dict[str, LogLevel], *, print_stacktrace: bool
+) -> Iterator[None]:
"""Installs a root Python logger that routes all logging through a Rust logger."""
def trace_fn(self, message, *args, **kwargs):
@@ -130,6 +132,9 @@
logger.addHandler(handler)
level.set_level_for(logger)
+ for key, level in log_levels_by_target.items():
+ level.set_level_for(logging.getLogger(key))
+
if logger.isEnabledFor(LogLevel.TRACE.level):
http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]
requests_logger = logging.getLogger("requests.packages.urllib3")
@@ -213,7 +218,9 @@
sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr
# Install a Python logger that will route through the Rust logger.
- with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):
+ with _python_logging_setup(
+ global_level, log_levels_by_target, print_stacktrace=print_stacktrace
+ ):
yield
finally:
sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr
| {"golden_diff": "diff --git a/src/python/pants/init/logging.py b/src/python/pants/init/logging.py\n--- a/src/python/pants/init/logging.py\n+++ b/src/python/pants/init/logging.py\n@@ -99,7 +99,9 @@\n \n \n @contextmanager\n-def _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:\n+def _python_logging_setup(\n+ level: LogLevel, log_levels_by_target: dict[str, LogLevel], *, print_stacktrace: bool\n+) -> Iterator[None]:\n \"\"\"Installs a root Python logger that routes all logging through a Rust logger.\"\"\"\n \n def trace_fn(self, message, *args, **kwargs):\n@@ -130,6 +132,9 @@\n logger.addHandler(handler)\n level.set_level_for(logger)\n \n+ for key, level in log_levels_by_target.items():\n+ level.set_level_for(logging.getLogger(key))\n+\n if logger.isEnabledFor(LogLevel.TRACE.level):\n http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]\n requests_logger = logging.getLogger(\"requests.packages.urllib3\")\n@@ -213,7 +218,9 @@\n \n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n # Install a Python logger that will route through the Rust logger.\n- with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):\n+ with _python_logging_setup(\n+ global_level, log_levels_by_target, print_stacktrace=print_stacktrace\n+ ):\n yield\n finally:\n sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr\n", "issue": "`log_level_by_target` doesn't seem to work well for Python targets\n**Describe the bug**\r\nUsing the following:\r\n```\r\nlevel = \"warn\"\r\nlog_levels_by_target = {\"pants.goal.stats_aggregator\" = \"info\"}\r\n```\r\nwhen running `./pants --stats-log ...` I don't see the stats being logged.\r\n\r\n**Pants version**\r\n`main`\r\n\r\n**OS**\r\nUbuntu\r\n\r\n**Additional info**\r\nI suspect this is because `log_levels_by_target` isn't used to configure the Python logger, and therefore the python logger for any Python module is configured to use the level set by `level`.\r\n\r\nThis can be seen by inspecting the logger in `src/python/pants/goal/stats_aggregator.py` which is set to the level `WARN`. 
Therefore I assume the log call never gets forwarded to the Rust-implemented handler, and therefore `log_levels_by_target` isn't considered.\r\n\n", "before_files": [{"content": "# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport http.client\nimport locale\nimport logging\nimport sys\nfrom contextlib import contextmanager\nfrom io import BufferedReader, TextIOWrapper\nfrom logging import Formatter, Handler, LogRecord\nfrom pathlib import PurePath\nfrom typing import Iterator\n\nimport pants.util.logging as pants_logging\nfrom pants.engine.internals import native_engine\nfrom pants.option.option_value_container import OptionValueContainer\nfrom pants.util.dirutil import safe_mkdir_for\nfrom pants.util.docutil import doc_url\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import strip_prefix\n\n# Although logging supports the WARN level, its not documented and could conceivably be yanked.\n# Since pants has supported 'warn' since inception, leave the 'warn' choice as-is but explicitly\n# setup a 'WARN' logging level name that maps to 'WARNING'.\nlogging.addLevelName(logging.WARNING, \"WARN\")\nlogging.addLevelName(pants_logging.TRACE, \"TRACE\")\n\n\nclass _NativeHandler(Handler):\n \"\"\"This class is installed as a Python logging module handler (using the logging.addHandler\n method) and proxies logs to the Rust logging infrastructure.\"\"\"\n\n def emit(self, record: LogRecord) -> None:\n native_engine.write_log(self.format(record), record.levelno, record.name)\n\n def flush(self) -> None:\n native_engine.flush_log()\n\n\nclass _ExceptionFormatter(Formatter):\n \"\"\"Possibly render the stacktrace and possibly give debug hints, based on global options.\"\"\"\n\n def __init__(self, level: LogLevel, *, print_stacktrace: bool) -> None:\n super().__init__(None)\n self.level = level\n self.print_stacktrace = print_stacktrace\n\n def formatException(self, exc_info):\n stacktrace = super().formatException(exc_info) if self.print_stacktrace else \"\"\n\n debug_instructions = []\n if not self.print_stacktrace:\n debug_instructions.append(\"--print-stacktrace for more error details\")\n if self.level not in {LogLevel.DEBUG, LogLevel.TRACE}:\n debug_instructions.append(\"-ldebug for more logs\")\n debug_instructions = (\n f\"Use {' and/or '.join(debug_instructions)}. \" if debug_instructions else \"\"\n )\n\n return (\n f\"{stacktrace}\\n\\n{debug_instructions}\\nSee {doc_url('troubleshooting')} for common \"\n f\"issues.\\nConsider reaching out for help: {doc_url('getting-help')}\\n\"\n )\n\n\n@contextmanager\ndef stdio_destination(stdin_fileno: int, stdout_fileno: int, stderr_fileno: int) -> Iterator[None]:\n \"\"\"Sets a destination for both logging and stdio: must be called after `initialize_stdio`.\n\n After `initialize_stdio` and outside of this contextmanager, the default stdio destination is\n the pants.log. But inside of this block, all engine \"tasks\"/@rules that are spawned will have\n thread/task-local state that directs their IO to the given destination. 
When the contextmanager\n exits all tasks will be restored to the default destination (regardless of whether they have\n completed).\n \"\"\"\n if not logging.getLogger(None).handlers:\n raise AssertionError(\"stdio_destination should only be called after initialize_stdio.\")\n\n native_engine.stdio_thread_console_set(stdin_fileno, stdout_fileno, stderr_fileno)\n try:\n yield\n finally:\n native_engine.stdio_thread_console_clear()\n\n\ndef stdio_destination_use_color(use_color: bool) -> None:\n \"\"\"Sets a color mode for the current thread's destination.\n\n True or false force color to be used or not used: None causes TTY detection to decide whether\n color will be used.\n\n NB: This method is independent from either `stdio_destination` or `initialize_stdio` because\n we cannot decide whether to use color for a particular destination until it is open AND we have\n parsed options for the relevant connection.\n \"\"\"\n native_engine.stdio_thread_console_color_mode_set(use_color)\n\n\n@contextmanager\ndef _python_logging_setup(level: LogLevel, *, print_stacktrace: bool) -> Iterator[None]:\n \"\"\"Installs a root Python logger that routes all logging through a Rust logger.\"\"\"\n\n def trace_fn(self, message, *args, **kwargs):\n if self.isEnabledFor(LogLevel.TRACE.level):\n self._log(LogLevel.TRACE.level, message, *args, **kwargs)\n\n logging.Logger.trace = trace_fn # type: ignore[attr-defined]\n logger = logging.getLogger(None)\n\n def clear_logging_handlers():\n handlers = tuple(logger.handlers)\n for handler in handlers:\n logger.removeHandler(handler)\n return handlers\n\n def set_logging_handlers(handlers):\n for handler in handlers:\n logger.addHandler(handler)\n\n # Remove existing handlers, and restore them afterward.\n handlers = clear_logging_handlers()\n try:\n # This routes warnings through our loggers instead of straight to raw stderr.\n logging.captureWarnings(True)\n handler = _NativeHandler()\n exc_formatter = _ExceptionFormatter(level, print_stacktrace=print_stacktrace)\n handler.setFormatter(exc_formatter)\n logger.addHandler(handler)\n level.set_level_for(logger)\n\n if logger.isEnabledFor(LogLevel.TRACE.level):\n http.client.HTTPConnection.debuglevel = 1 # type: ignore[attr-defined]\n requests_logger = logging.getLogger(\"requests.packages.urllib3\")\n LogLevel.TRACE.set_level_for(requests_logger)\n requests_logger.propagate = True\n\n yield\n finally:\n clear_logging_handlers()\n set_logging_handlers(handlers)\n\n\n@contextmanager\ndef initialize_stdio(global_bootstrap_options: OptionValueContainer) -> Iterator[None]:\n \"\"\"Mutates sys.std* and logging to route stdio for a Pants process to thread local destinations.\n\n In this context, `sys.std*` and logging handlers will route through Rust code that uses\n thread-local information to decide whether to write to a file, or to stdio file handles.\n\n To control the stdio destination set by this method, use the `stdio_destination` context manager.\n\n This is called in two different processes:\n * PantsRunner, after it has determined that LocalPantsRunner will be running in process, and\n immediately before setting a `stdio_destination` for the remainder of the run.\n * PantsDaemon, immediately on startup. 
The process will then default to sending stdio to the log\n until client connections arrive, at which point `stdio_destination` is used per-connection.\n \"\"\"\n with initialize_stdio_raw(\n global_bootstrap_options.level,\n global_bootstrap_options.log_show_rust_3rdparty,\n global_bootstrap_options.show_log_target,\n _get_log_levels_by_target(global_bootstrap_options),\n global_bootstrap_options.print_stacktrace,\n global_bootstrap_options.ignore_warnings,\n global_bootstrap_options.pants_workdir,\n ):\n yield\n\n\n@contextmanager\ndef initialize_stdio_raw(\n global_level: LogLevel,\n log_show_rust_3rdparty: bool,\n show_target: bool,\n log_levels_by_target: dict[str, LogLevel],\n print_stacktrace: bool,\n ignore_warnings: list[str],\n pants_workdir: str,\n) -> Iterator[None]:\n literal_filters = []\n regex_filters = []\n for filt in ignore_warnings:\n if filt.startswith(\"$regex$\"):\n regex_filters.append(strip_prefix(filt, \"$regex$\"))\n else:\n literal_filters.append(filt)\n\n # Set the pants log destination.\n log_path = str(pants_log_path(PurePath(pants_workdir)))\n safe_mkdir_for(log_path)\n\n # Initialize thread-local stdio, and replace sys.std* with proxies.\n original_stdin, original_stdout, original_stderr = sys.stdin, sys.stdout, sys.stderr\n try:\n raw_stdin, sys.stdout, sys.stderr = native_engine.stdio_initialize(\n global_level.level,\n log_show_rust_3rdparty,\n show_target,\n {k: v.level for k, v in log_levels_by_target.items()},\n tuple(literal_filters),\n tuple(regex_filters),\n log_path,\n )\n sys.stdin = TextIOWrapper(\n BufferedReader(raw_stdin),\n # NB: We set the default encoding explicitly to bypass logic in the TextIOWrapper\n # constructor that would poke the underlying file (which is not valid until a\n # `stdio_destination` is set).\n encoding=locale.getpreferredencoding(False),\n )\n\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n # Install a Python logger that will route through the Rust logger.\n with _python_logging_setup(global_level, print_stacktrace=print_stacktrace):\n yield\n finally:\n sys.stdin, sys.stdout, sys.stderr = original_stdin, original_stdout, original_stderr\n sys.__stdin__, sys.__stdout__, sys.__stderr__ = sys.stdin, sys.stdout, sys.stderr\n\n\ndef pants_log_path(workdir: PurePath) -> PurePath:\n \"\"\"Given the path of the workdir, returns the `pants.log` path.\"\"\"\n return workdir / \"pants.log\"\n\n\ndef _get_log_levels_by_target(\n global_bootstrap_options: OptionValueContainer,\n) -> dict[str, LogLevel]:\n raw_levels = global_bootstrap_options.log_levels_by_target\n levels: dict[str, LogLevel] = {}\n for key, value in raw_levels.items():\n if not isinstance(key, str):\n raise ValueError(\n \"Keys for log_domain_levels must be strings, but was given the key: {key} with type {type(key)}.\"\n )\n if not isinstance(value, str):\n raise ValueError(\n \"Values for log_domain_levels must be strings, but was given the value: {value} with type {type(value)}.\"\n )\n log_level = LogLevel[value.upper()]\n levels[key] = log_level\n return levels\n", "path": "src/python/pants/init/logging.py"}]} | 3,477 | 364 |
gh_patches_debug_38850 | rasdani/github-patches | git_diff | sanic-org__sanic-2170 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecate CompositionView ?
Currently sanic offers a class called `CompositionView`
I really am struggling to find any utility in this class, since
```python
from sanic.views import CompositionView
def get_handler(request):
return text("I am a get method")
view = CompositionView()
view.add(["GET"], get_handler)
view.add(["POST", "PUT"], lambda request: text("I am a post/put method"))
# Use the new view to handle requests to the base URL
app.add_route(view, "/")
```
Seems much more confusing to me than
```python
def get_handler(request):
return text("I am a get method")
app.route("/", methods=["GET"])(get_handler)
app.route("/", methods=["POST", "PUT"])(lambda request: text("I am a post/put method"))
```
Can anyone offer a compelling use case for CompositionView?
If not, I would suggest to deprecate it
https://github.com/sanic-org/sanic/blob/master/sanic/views.py
</issue>
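For comparison, the two handlers from the example above map directly onto `HTTPMethodView`; the sketch below is illustrative only (the app name and view class name are made up, and the handler bodies just echo the issue's example):

```python
from sanic import Sanic
from sanic.response import text
from sanic.views import HTTPMethodView

app = Sanic("composition_view_demo")


class RootView(HTTPMethodView):
    def get(self, request):
        return text("I am a get method")

    def post(self, request):
        return text("I am a post/put method")

    def put(self, request):
        return text("I am a post/put method")


# One registration covers every method the class implements.
app.add_route(RootView.as_view(), "/")
```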
<code>
[start of sanic/views.py]
1 from typing import Any, Callable, List
2
3 from sanic.constants import HTTP_METHODS
4 from sanic.exceptions import InvalidUsage
5
6
7 class HTTPMethodView:
8 """Simple class based implementation of view for the sanic.
9 You should implement methods (get, post, put, patch, delete) for the class
10 to every HTTP method you want to support.
11
12 For example:
13
14 .. code-block:: python
15
16 class DummyView(HTTPMethodView):
17 def get(self, request, *args, **kwargs):
18 return text('I am get method')
19 def put(self, request, *args, **kwargs):
20 return text('I am put method')
21
22 If someone tries to use a non-implemented method, there will be a
23 405 response.
24
25 If you need any url params just mention them in method definition:
26
27 .. code-block:: python
28
29 class DummyView(HTTPMethodView):
30 def get(self, request, my_param_here, *args, **kwargs):
31 return text('I am get method with %s' % my_param_here)
32
33 To add the view into the routing you could use
34
35 1) ``app.add_route(DummyView.as_view(), '/')``, OR
36 2) ``app.route('/')(DummyView.as_view())``
37
38 To add any decorator you could set it into decorators variable
39 """
40
41 decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []
42
43 def dispatch_request(self, request, *args, **kwargs):
44 handler = getattr(self, request.method.lower(), None)
45 return handler(request, *args, **kwargs)
46
47 @classmethod
48 def as_view(cls, *class_args, **class_kwargs):
49 """Return view function for use with the routing system, that
50 dispatches request to appropriate handler method.
51 """
52
53 def view(*args, **kwargs):
54 self = view.view_class(*class_args, **class_kwargs)
55 return self.dispatch_request(*args, **kwargs)
56
57 if cls.decorators:
58 view.__module__ = cls.__module__
59 for decorator in cls.decorators:
60 view = decorator(view)
61
62 view.view_class = cls
63 view.__doc__ = cls.__doc__
64 view.__module__ = cls.__module__
65 view.__name__ = cls.__name__
66 return view
67
68
69 def stream(func):
70 func.is_stream = True
71 return func
72
73
74 class CompositionView:
75 """Simple method-function mapped view for the sanic.
76 You can add handler functions to methods (get, post, put, patch, delete)
77 for every HTTP method you want to support.
78
79 For example:
80
81 .. code-block:: python
82
83 view = CompositionView()
84 view.add(['GET'], lambda request: text('I am get method'))
85 view.add(['POST', 'PUT'], lambda request: text('I am post/put method'))
86
87 If someone tries to use a non-implemented method, there will be a
88 405 response.
89 """
90
91 def __init__(self):
92 self.handlers = {}
93 self.name = self.__class__.__name__
94
95 def __name__(self):
96 return self.name
97
98 def add(self, methods, handler, stream=False):
99 if stream:
100 handler.is_stream = stream
101 for method in methods:
102 if method not in HTTP_METHODS:
103 raise InvalidUsage(f"{method} is not a valid HTTP method.")
104
105 if method in self.handlers:
106 raise InvalidUsage(f"Method {method} is already registered.")
107 self.handlers[method] = handler
108
109 def __call__(self, request, *args, **kwargs):
110 handler = self.handlers[request.method.upper()]
111 return handler(request, *args, **kwargs)
112
[end of sanic/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/views.py b/sanic/views.py
--- a/sanic/views.py
+++ b/sanic/views.py
@@ -1,9 +1,25 @@
-from typing import Any, Callable, List
+from __future__ import annotations
+
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Callable,
+ Iterable,
+ List,
+ Optional,
+ Union,
+)
+from warnings import warn
from sanic.constants import HTTP_METHODS
from sanic.exceptions import InvalidUsage
+if TYPE_CHECKING:
+ from sanic import Sanic
+ from sanic.blueprints import Blueprint
+
+
class HTTPMethodView:
"""Simple class based implementation of view for the sanic.
You should implement methods (get, post, put, patch, delete) for the class
@@ -40,6 +56,31 @@
decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []
+ def __init_subclass__(
+ cls,
+ attach: Optional[Union[Sanic, Blueprint]] = None,
+ uri: str = "",
+ methods: Iterable[str] = frozenset({"GET"}),
+ host: Optional[str] = None,
+ strict_slashes: Optional[bool] = None,
+ version: Optional[int] = None,
+ name: Optional[str] = None,
+ stream: bool = False,
+ version_prefix: str = "/v",
+ ) -> None:
+ if attach:
+ cls.attach(
+ attach,
+ uri=uri,
+ methods=methods,
+ host=host,
+ strict_slashes=strict_slashes,
+ version=version,
+ name=name,
+ stream=stream,
+ version_prefix=version_prefix,
+ )
+
def dispatch_request(self, request, *args, **kwargs):
handler = getattr(self, request.method.lower(), None)
return handler(request, *args, **kwargs)
@@ -65,6 +106,31 @@
view.__name__ = cls.__name__
return view
+ @classmethod
+ def attach(
+ cls,
+ to: Union[Sanic, Blueprint],
+ uri: str,
+ methods: Iterable[str] = frozenset({"GET"}),
+ host: Optional[str] = None,
+ strict_slashes: Optional[bool] = None,
+ version: Optional[int] = None,
+ name: Optional[str] = None,
+ stream: bool = False,
+ version_prefix: str = "/v",
+ ) -> None:
+ to.add_route(
+ cls.as_view(),
+ uri=uri,
+ methods=methods,
+ host=host,
+ strict_slashes=strict_slashes,
+ version=version,
+ name=name,
+ stream=stream,
+ version_prefix=version_prefix,
+ )
+
def stream(func):
func.is_stream = True
@@ -91,6 +157,11 @@
def __init__(self):
self.handlers = {}
self.name = self.__class__.__name__
+ warn(
+ "CompositionView has been deprecated and will be removed in "
+ "v21.12. Please update your view to HTTPMethodView.",
+ DeprecationWarning,
+ )
def __name__(self):
return self.name
| {"golden_diff": "diff --git a/sanic/views.py b/sanic/views.py\n--- a/sanic/views.py\n+++ b/sanic/views.py\n@@ -1,9 +1,25 @@\n-from typing import Any, Callable, List\n+from __future__ import annotations\n+\n+from typing import (\n+ TYPE_CHECKING,\n+ Any,\n+ Callable,\n+ Iterable,\n+ List,\n+ Optional,\n+ Union,\n+)\n+from warnings import warn\n \n from sanic.constants import HTTP_METHODS\n from sanic.exceptions import InvalidUsage\n \n \n+if TYPE_CHECKING:\n+ from sanic import Sanic\n+ from sanic.blueprints import Blueprint\n+\n+\n class HTTPMethodView:\n \"\"\"Simple class based implementation of view for the sanic.\n You should implement methods (get, post, put, patch, delete) for the class\n@@ -40,6 +56,31 @@\n \n decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []\n \n+ def __init_subclass__(\n+ cls,\n+ attach: Optional[Union[Sanic, Blueprint]] = None,\n+ uri: str = \"\",\n+ methods: Iterable[str] = frozenset({\"GET\"}),\n+ host: Optional[str] = None,\n+ strict_slashes: Optional[bool] = None,\n+ version: Optional[int] = None,\n+ name: Optional[str] = None,\n+ stream: bool = False,\n+ version_prefix: str = \"/v\",\n+ ) -> None:\n+ if attach:\n+ cls.attach(\n+ attach,\n+ uri=uri,\n+ methods=methods,\n+ host=host,\n+ strict_slashes=strict_slashes,\n+ version=version,\n+ name=name,\n+ stream=stream,\n+ version_prefix=version_prefix,\n+ )\n+\n def dispatch_request(self, request, *args, **kwargs):\n handler = getattr(self, request.method.lower(), None)\n return handler(request, *args, **kwargs)\n@@ -65,6 +106,31 @@\n view.__name__ = cls.__name__\n return view\n \n+ @classmethod\n+ def attach(\n+ cls,\n+ to: Union[Sanic, Blueprint],\n+ uri: str,\n+ methods: Iterable[str] = frozenset({\"GET\"}),\n+ host: Optional[str] = None,\n+ strict_slashes: Optional[bool] = None,\n+ version: Optional[int] = None,\n+ name: Optional[str] = None,\n+ stream: bool = False,\n+ version_prefix: str = \"/v\",\n+ ) -> None:\n+ to.add_route(\n+ cls.as_view(),\n+ uri=uri,\n+ methods=methods,\n+ host=host,\n+ strict_slashes=strict_slashes,\n+ version=version,\n+ name=name,\n+ stream=stream,\n+ version_prefix=version_prefix,\n+ )\n+\n \n def stream(func):\n func.is_stream = True\n@@ -91,6 +157,11 @@\n def __init__(self):\n self.handlers = {}\n self.name = self.__class__.__name__\n+ warn(\n+ \"CompositionView has been deprecated and will be removed in \"\n+ \"v21.12. Please update your view to HTTPMethodView.\",\n+ DeprecationWarning,\n+ )\n \n def __name__(self):\n return self.name\n", "issue": "deprecate CompositionView ? 
\nCurrently sanic offers a class called `CompositionView`\r\n\r\nI really am struggling to find any utility in this class, since \r\n\r\n```python\r\nfrom sanic.views import CompositionView\r\n\r\ndef get_handler(request):\r\n return text(\"I am a get method\")\r\n\r\nview = CompositionView()\r\nview.add([\"GET\"], get_handler)\r\nview.add([\"POST\", \"PUT\"], lambda request: text(\"I am a post/put method\"))\r\n\r\n# Use the new view to handle requests to the base URL\r\napp.add_route(view, \"/\")\r\n```\r\n\r\n\r\nSeems much more confusing to me than\r\n\r\n```python\r\ndef get_handler(request):\r\n return text(\"I am a get method\")\r\n\r\napp.route(\"/\", methods=[\"GET\"])(get_handler)\r\napp.route(\"/\", methods=[\"POST\", \"PUT\"])(lambda request: text(\"I am a post/put method\"))\r\n```\r\n\r\nCan anyone offer a compelling use case for CompositionView?\r\n\r\nIf not, I would suggest to deprecate it \r\n\r\n\r\nhttps://github.com/sanic-org/sanic/blob/master/sanic/views.py\n", "before_files": [{"content": "from typing import Any, Callable, List\n\nfrom sanic.constants import HTTP_METHODS\nfrom sanic.exceptions import InvalidUsage\n\n\nclass HTTPMethodView:\n \"\"\"Simple class based implementation of view for the sanic.\n You should implement methods (get, post, put, patch, delete) for the class\n to every HTTP method you want to support.\n\n For example:\n\n .. code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, *args, **kwargs):\n return text('I am get method')\n def put(self, request, *args, **kwargs):\n return text('I am put method')\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n\n If you need any url params just mention them in method definition:\n\n .. code-block:: python\n\n class DummyView(HTTPMethodView):\n def get(self, request, my_param_here, *args, **kwargs):\n return text('I am get method with %s' % my_param_here)\n\n To add the view into the routing you could use\n\n 1) ``app.add_route(DummyView.as_view(), '/')``, OR\n 2) ``app.route('/')(DummyView.as_view())``\n\n To add any decorator you could set it into decorators variable\n \"\"\"\n\n decorators: List[Callable[[Callable[..., Any]], Callable[..., Any]]] = []\n\n def dispatch_request(self, request, *args, **kwargs):\n handler = getattr(self, request.method.lower(), None)\n return handler(request, *args, **kwargs)\n\n @classmethod\n def as_view(cls, *class_args, **class_kwargs):\n \"\"\"Return view function for use with the routing system, that\n dispatches request to appropriate handler method.\n \"\"\"\n\n def view(*args, **kwargs):\n self = view.view_class(*class_args, **class_kwargs)\n return self.dispatch_request(*args, **kwargs)\n\n if cls.decorators:\n view.__module__ = cls.__module__\n for decorator in cls.decorators:\n view = decorator(view)\n\n view.view_class = cls\n view.__doc__ = cls.__doc__\n view.__module__ = cls.__module__\n view.__name__ = cls.__name__\n return view\n\n\ndef stream(func):\n func.is_stream = True\n return func\n\n\nclass CompositionView:\n \"\"\"Simple method-function mapped view for the sanic.\n You can add handler functions to methods (get, post, put, patch, delete)\n for every HTTP method you want to support.\n\n For example:\n\n .. 
code-block:: python\n\n view = CompositionView()\n view.add(['GET'], lambda request: text('I am get method'))\n view.add(['POST', 'PUT'], lambda request: text('I am post/put method'))\n\n If someone tries to use a non-implemented method, there will be a\n 405 response.\n \"\"\"\n\n def __init__(self):\n self.handlers = {}\n self.name = self.__class__.__name__\n\n def __name__(self):\n return self.name\n\n def add(self, methods, handler, stream=False):\n if stream:\n handler.is_stream = stream\n for method in methods:\n if method not in HTTP_METHODS:\n raise InvalidUsage(f\"{method} is not a valid HTTP method.\")\n\n if method in self.handlers:\n raise InvalidUsage(f\"Method {method} is already registered.\")\n self.handlers[method] = handler\n\n def __call__(self, request, *args, **kwargs):\n handler = self.handlers[request.method.upper()]\n return handler(request, *args, **kwargs)\n", "path": "sanic/views.py"}]} | 1,791 | 757 |
gh_patches_debug_6472 | rasdani/github-patches | git_diff | spack__spack-11648 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation issue: lua-luafilesystem
I have a nightly CI build that now dies while trying to do a `spack install lua-luafilesystem`. This occurred upon the the merge of #11528
In other words, I am seeing that `lua-luafilesystem` will not install using commit eb584d8 but will install from 8e3fd3f.
This is on CentOS 7 , gcc 8.2.0
### Steps to reproduce the issue
```
[spack]$ git checkout eb584d8
Previous HEAD position was 8e3fd3f... tty: make tty.* print exception types
HEAD is now at eb584d8... refactor: remove unused spack.stage._get_mirrors() function
[spack]$ ./bin/spack install lua-luafilesystem
...
==> lua is already installed in /home/osolberg/temp/spack/opt/spack/linux-centos7-x86_64/gcc-8.2.0/lua-5.3.5-fulv52lir6poddzxeogk7rgrqglnkbon
==> Installing lua-luafilesystem
==> Searching for binary cache of lua-luafilesystem
==> Finding buildcaches in /bifx/apps/spack/mirror/build_cache
==> No binary for lua-luafilesystem found: installing from source
==> Using cached archive: /home/osolberg/temp/spack/var/spack/cache/lua-luafilesystem/lua-luafilesystem-1_7_0_2.tar.gz
==> Staging archive: /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/v1_7_0_2.tar.gz
==> Created stage in /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz
==> No patches needed for lua-luafilesystem
==> Building lua-luafilesystem [Package]
==> Error: OSError: [Errno 2] No such file or directory: '/home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/src'
/home/osolberg/temp/spack/lib/spack/spack/package.py:1577, in build_process:
1574 echo = logger.echo
1575 self.log()
1576
>> 1577 # Run post install hooks before build stage is removed.
1578 spack.hooks.post_install(self.spec)
1579
1580 # Stop timer.
```
### Platform and user environment
```commandline
[spack]$ uname -a
Linux bifx1n03.bold.bio 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[spack]$ ./bin/spack spec --install-status lua-luafilesystem
Input spec
--------------------------------
- lua-luafilesystem
Concretized
--------------------------------
- lua-luafilesystem@1_7_0_2%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~tcltk arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~symlinks~termlib arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~darwinssl~gssapi~libssh~libssh2~nghttp2 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+systemcerts arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+optimize+pic+shared arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+libbsd arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+bzip2+curses+git~libunistring+libxml2 patches=9acdb4e73f67c241b5ef32505c9ddf7cf6884ca8ea661692f21dca28483b04b8 +tar+xz arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+shared arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~python arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+jit+multibyte+utf arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64
```
</issue>
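The traceback points at a path assembled from the stage root plus a guessed archive directory name; a hedged, standalone sketch of anchoring the rockspec path on the already-expanded source tree instead (what Spack exposes as `stage.source_path`) is shown below. The helper name and example arguments are illustrative, not part of the package recipe:

```python
import os.path


def rockspec_path(source_path, semver_dotted, tweak_level=1):
    # source_path is assumed to already point inside the expanded archive,
    # so no 'luafilesystem-<version>' component needs to be re-appended.
    return os.path.join(
        source_path,
        "rockspecs",
        "luafilesystem-{}-{}.rockspec".format(semver_dotted, tweak_level),
    )


# e.g. rockspec_path("/path/to/stage/spack-src", "1.7.0", 2)
```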
<code>
[start of var/spack/repos/builtin/packages/lua-luafilesystem/package.py]
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import os.path
7
8
9 class LuaLuafilesystem(Package):
10 """LuaFileSystem is a Lua library developed to complement the set of
11 functions related to file systems offered by the standard Lua distribution.
12
13 LuaFileSystem offers a portable way to access the underlying directory
14 structure and file attributes.
15
16 LuaFileSystem is free software and uses the same license as Lua 5.1
17 """
18
19 homepage = 'http://keplerproject.github.io/luafilesystem'
20 url = 'https://github.com/keplerproject/luafilesystem/archive/v1_6_3.tar.gz'
21
22 version('1_7_0_2', sha256='23b4883aeb4fb90b2d0f338659f33a631f9df7a7e67c54115775a77d4ac3cc59')
23 version('1_6_3', 'bed11874cfded8b4beed7dd054127b24')
24
25 # The version constraint here comes from this post:
26 #
27 # https://www.perforce.com/blog/git-beyond-basics-using-shallow-clones
28 #
29 # where it is claimed that full shallow clone support was added @1.9
30 depends_on('[email protected]:', type='build')
31 extends('lua')
32
33 @property
34 def rockspec(self):
35 version = self.spec.version
36 semver = version[0:3]
37 tweak_level = version[3] if len(version) > 3 else 1
38 fmt = os.path.join(
39 self.stage.path,
40 'luafilesystem-{version.underscored}',
41 'rockspecs',
42 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'
43 )
44 return fmt.format(
45 version=version, semver=semver, tweak_level=tweak_level
46 )
47
48 def install(self, spec, prefix):
49 luarocks('--tree=' + prefix, 'install', self.rockspec)
50
[end of var/spack/repos/builtin/packages/lua-luafilesystem/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py
--- a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py
+++ b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py
@@ -36,8 +36,7 @@
semver = version[0:3]
tweak_level = version[3] if len(version) > 3 else 1
fmt = os.path.join(
- self.stage.path,
- 'luafilesystem-{version.underscored}',
+ self.stage.source_path,
'rockspecs',
'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'
)
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n--- a/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n+++ b/var/spack/repos/builtin/packages/lua-luafilesystem/package.py\n@@ -36,8 +36,7 @@\n semver = version[0:3]\n tweak_level = version[3] if len(version) > 3 else 1\n fmt = os.path.join(\n- self.stage.path,\n- 'luafilesystem-{version.underscored}',\n+ self.stage.source_path,\n 'rockspecs',\n 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'\n )\n", "issue": "Installation issue: lua-luafilesystem\nI have a nightly CI build that now dies while trying to do a `spack install lua-luafilesystem`. This occurred upon the the merge of #11528 \r\n\r\nIn other words, I am seeing that `lua-luafilesystem` will not install using commit eb584d8 but will install from 8e3fd3f.\r\n\r\nThis is on CentOS 7 , gcc 8.2.0\r\n\r\n### Steps to reproduce the issue\r\n\r\n```\r\n[spack]$ git checkout eb584d8\r\nPrevious HEAD position was 8e3fd3f... tty: make tty.* print exception types\r\nHEAD is now at eb584d8... refactor: remove unused spack.stage._get_mirrors() function\r\n[spack]$ ./bin/spack install lua-luafilesystem\r\n...\r\n==> lua is already installed in /home/osolberg/temp/spack/opt/spack/linux-centos7-x86_64/gcc-8.2.0/lua-5.3.5-fulv52lir6poddzxeogk7rgrqglnkbon\r\n==> Installing lua-luafilesystem\r\n==> Searching for binary cache of lua-luafilesystem\r\n==> Finding buildcaches in /bifx/apps/spack/mirror/build_cache\r\n==> No binary for lua-luafilesystem found: installing from source\r\n==> Using cached archive: /home/osolberg/temp/spack/var/spack/cache/lua-luafilesystem/lua-luafilesystem-1_7_0_2.tar.gz\r\n==> Staging archive: /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/v1_7_0_2.tar.gz\r\n==> Created stage in /home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz\r\n==> No patches needed for lua-luafilesystem\r\n==> Building lua-luafilesystem [Package]\r\n==> Error: OSError: [Errno 2] No such file or directory: '/home/osolberg/temp/spack/var/spack/stage/lua-luafilesystem-1_7_0_2-dhumhtidskeakxzmlru6qnprnpw7lthz/src'\r\n\r\n/home/osolberg/temp/spack/lib/spack/spack/package.py:1577, in build_process:\r\n 1574 echo = logger.echo\r\n 1575 self.log()\r\n 1576\r\n >> 1577 # Run post install hooks before build stage is removed.\r\n 1578 spack.hooks.post_install(self.spec)\r\n 1579\r\n 1580 # Stop timer.\r\n\r\n```\r\n\r\n### Platform and user environment\r\n\r\n```commandline\r\n[spack]$ uname -a\r\nLinux bifx1n03.bold.bio 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n[spack]$ ./bin/spack spec --install-status lua-luafilesystem\r\nInput spec\r\n--------------------------------\r\n - lua-luafilesystem\r\n\r\nConcretized\r\n--------------------------------\r\n - lua-luafilesystem@1_7_0_2%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~tcltk arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email 
protected]%[email protected]+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~symlinks~termlib arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~darwinssl~gssapi~libssh~libssh2~nghttp2 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+systemcerts arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+optimize+pic+shared arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+libbsd arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+bzip2+curses+git~libunistring+libxml2 patches=9acdb4e73f67c241b5ef32505c9ddf7cf6884ca8ea661692f21dca28483b04b8 +tar+xz arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+shared arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~python arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+jit+multibyte+utf arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] arch=linux-centos7-x86_64\r\n```\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport os.path\n\n\nclass LuaLuafilesystem(Package):\n \"\"\"LuaFileSystem is a Lua library developed to complement the set of\n functions related to file systems offered by the standard Lua distribution.\n\n LuaFileSystem offers a portable way to access the underlying directory\n structure and file attributes.\n\n LuaFileSystem is free software and uses the same license as Lua 5.1\n \"\"\"\n\n homepage = 'http://keplerproject.github.io/luafilesystem'\n url = 'https://github.com/keplerproject/luafilesystem/archive/v1_6_3.tar.gz'\n\n version('1_7_0_2', sha256='23b4883aeb4fb90b2d0f338659f33a631f9df7a7e67c54115775a77d4ac3cc59')\n version('1_6_3', 'bed11874cfded8b4beed7dd054127b24')\n\n # The version constraint here comes from this post:\n #\n # https://www.perforce.com/blog/git-beyond-basics-using-shallow-clones\n #\n # where it is claimed that full shallow clone support was added @1.9\n depends_on('[email protected]:', type='build')\n extends('lua')\n\n @property\n def rockspec(self):\n version = self.spec.version\n semver = version[0:3]\n tweak_level = version[3] if len(version) > 3 else 1\n fmt = os.path.join(\n self.stage.path,\n 'luafilesystem-{version.underscored}',\n 'rockspecs',\n 'luafilesystem-{semver.dotted}-{tweak_level}.rockspec'\n )\n return fmt.format(\n version=version, semver=semver, tweak_level=tweak_level\n )\n\n def install(self, spec, prefix):\n luarocks('--tree=' + prefix, 'install', self.rockspec)\n", "path": "var/spack/repos/builtin/packages/lua-luafilesystem/package.py"}]} | 3,180 | 164 |
gh_patches_debug_12022 | rasdani/github-patches | git_diff | nipy__nipype-2054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nipypecli should have --version
### Summary
```shell
$> nipypecli --version
Error: no such option: --version
```
### Actual behavior
```shell
$> nipypecli --version
Error: no such option: --version
```
### Expected behavior
```shell
$> nipypecli --version
0.13.1
```
### How to replicate the behavior
```shell
$> nipypecli --version
Error: no such option: --version
```
### Script/Workflow details
Please put URL to code or code here (if not too long).
http://www.onerussian.com/tmp/nipypecli
### Platform details:
Please paste the output of: `python -c "import nipype; print(nipype.get_info()); print(nipype.__version__)"`
```shell
$> python -c "import nipype; print(nipype.get_info()); print(nipype.__version__)"
{'nibabel_version': '2.1.0', 'sys_executable': '/usr/bin/python', 'networkx_version': '1.11', 'numpy_version': '1.12.0', 'sys_platform': 'linux2', 'sys_version': '2.7.13 (default, Jan 19 2017, 14:48:08) \n[GCC 6.3.0 20170118]', 'commit_source': u'archive substitution', 'commit_hash': u'8946bcab9d0e2f24e5364e42d4a7766e00237cb8', 'pkg_path': '/usr/lib/python2.7/dist-packages/nipype', 'nipype_version': u'0.13.1', 'traits_version': '4.6.0', 'scipy_version': '0.18.1'}
0.13.1
```
</issue>
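Click groups can pick up a stock `--version` flag via `click.version_option`; a minimal, hypothetical sketch (hard-coding the version string instead of Nipype's real version lookup, and using a made-up script name) looks like:

```python
import click


@click.group()
@click.version_option(version="0.13.1", prog_name="nipypecli")
def cli():
    pass


if __name__ == "__main__":
    cli()
```

Running the script with `--version` then prints the program name and version instead of the "no such option" error.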
<code>
[start of nipype/scripts/cli.py]
1 #!python
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 from io import open
5
6 import click
7
8 from .instance import list_interfaces
9 from .utils import (CONTEXT_SETTINGS,
10 UNKNOWN_OPTIONS,
11 ExistingDirPath,
12 ExistingFilePath,
13 UnexistingFilePath,
14 RegularExpression,
15 PythonModule,
16 check_not_none,)
17
18
19 # declare the CLI group
20 @click.group(context_settings=CONTEXT_SETTINGS)
21 def cli():
22 pass
23
24
25 @cli.command(context_settings=CONTEXT_SETTINGS)
26 @click.argument('logdir', type=ExistingDirPath, callback=check_not_none)
27 @click.option('-r', '--regex', type=RegularExpression(), callback=check_not_none,
28 help='Regular expression to be searched in each traceback.')
29 def search(logdir, regex):
30 """Search for tracebacks content.
31
32 Search for traceback inside a folder of nipype crash log files that match
33 a given regular expression.
34
35 Examples:\n
36 nipypecli search nipype/wd/log -r '.*subject123.*'
37 """
38 from .crash_files import iter_tracebacks
39
40 for file, trace in iter_tracebacks(logdir):
41 if regex.search(trace):
42 click.echo("-" * len(file))
43 click.echo(file)
44 click.echo("-" * len(file))
45 click.echo(trace)
46
47
48 @cli.command(context_settings=CONTEXT_SETTINGS)
49 @click.argument('crashfile', type=ExistingFilePath, callback=check_not_none)
50 @click.option('-r', '--rerun', is_flag=True, flag_value=True,
51 help='Rerun crashed node.')
52 @click.option('-d', '--debug', is_flag=True, flag_value=True,
53 help='Enable Python debugger when re-executing.')
54 @click.option('-i', '--ipydebug', is_flag=True, flag_value=True,
55 help='Enable IPython debugger when re-executing.')
56 @click.option('-w', '--dir', type=ExistingDirPath,
57 help='Directory where to run the node in.')
58 def crash(crashfile, rerun, debug, ipydebug, dir):
59 """Display Nipype crash files.
60
61 For certain crash files, one can rerun a failed node in a temp directory.
62
63 Examples:\n
64 nipypecli crash crashfile.pklz\n
65 nipypecli crash crashfile.pklz -r -i\n
66 """
67 from .crash_files import display_crash_file
68
69 debug = 'ipython' if ipydebug else debug
70 if debug == 'ipython':
71 import sys
72 from IPython.core import ultratb
73 sys.excepthook = ultratb.FormattedTB(mode='Verbose',
74 color_scheme='Linux',
75 call_pdb=1)
76 display_crash_file(crashfile, rerun, debug, dir)
77
78
79 @cli.command(context_settings=CONTEXT_SETTINGS)
80 @click.argument('pklz_file', type=ExistingFilePath, callback=check_not_none)
81 def show(pklz_file):
82 """Print the content of Nipype node .pklz file.
83
84 Examples:\n
85 nipypecli show node.pklz
86 """
87 from pprint import pprint
88 from ..utils.filemanip import loadpkl
89
90 pkl_data = loadpkl(pklz_file)
91 pprint(pkl_data)
92
93
94 @cli.command(context_settings=UNKNOWN_OPTIONS)
95 @click.argument('module', type=PythonModule(), required=False,
96 callback=check_not_none)
97 @click.argument('interface', type=str, required=False)
98 @click.option('--list', is_flag=True, flag_value=True,
99 help='List the available Interfaces inside the given module.')
100 @click.option('-h', '--help', is_flag=True, flag_value=True,
101 help='Show help message and exit.')
102 @click.pass_context
103 def run(ctx, module, interface, list, help):
104 """Run a Nipype Interface.
105
106 Examples:\n
107 nipypecli run nipype.interfaces.nipy --list\n
108 nipypecli run nipype.interfaces.nipy ComputeMask --help
109 """
110 import argparse
111 from .utils import add_args_options
112 from ..utils.nipype_cmd import run_instance
113
114 # print run command help if no arguments are given
115 module_given = bool(module)
116 if not module_given:
117 click.echo(ctx.command.get_help(ctx))
118
119 # print the list of available interfaces for the given module
120 elif (module_given and list) or (module_given and not interface):
121 iface_names = list_interfaces(module)
122 click.echo('Available Interfaces:')
123 for if_name in iface_names:
124 click.echo(' {}'.format(if_name))
125
126 # check the interface
127 elif (module_given and interface):
128 # create the argument parser
129 description = "Run {}".format(interface)
130 prog = " ".join([ctx.command_path,
131 module.__name__,
132 interface] + ctx.args)
133 iface_parser = argparse.ArgumentParser(description=description,
134 prog=prog)
135
136 # instantiate the interface
137 node = getattr(module, interface)()
138 iface_parser = add_args_options(iface_parser, node)
139
140 if not ctx.args:
141 # print the interface help
142 try:
143 iface_parser.print_help()
144 except:
145 print('An error ocurred when trying to print the full'
146 'command help, printing usage.')
147 finally:
148 iface_parser.print_usage()
149 else:
150 # run the interface
151 args = iface_parser.parse_args(args=ctx.args)
152 run_instance(node, args)
153
154
155 @cli.group()
156 def convert():
157 """Export nipype interfaces to other formats."""
158 pass
159
160
161 @convert.command(context_settings=CONTEXT_SETTINGS)
162 @click.option("-i", "--interface", type=str, required=True,
163 help="Name of the Nipype interface to export.")
164 @click.option("-m", "--module", type=PythonModule(), required=True,
165 callback=check_not_none,
166 help="Module where the interface is defined.")
167 @click.option("-o", "--output", type=UnexistingFilePath, required=True,
168 callback=check_not_none,
169 help="JSON file name where the Boutiques descriptor will be written.")
170 @click.option("-t", "--ignored-template-inputs", type=str, multiple=True,
171 help="Interface inputs ignored in path template creations.")
172 @click.option("-d", "--docker-image", type=str,
173 help="Name of the Docker image where the Nipype interface is available.")
174 @click.option("-r", "--docker-index", type=str,
175 help="Docker index where the Docker image is stored (e.g. http://index.docker.io).")
176 @click.option("-n", "--ignore-template-numbers", is_flag=True, flag_value=True,
177 help="Ignore all numbers in path template creations.")
178 @click.option("-v", "--verbose", is_flag=True, flag_value=True,
179 help="Enable verbose output.")
180 def boutiques(interface, module, output, ignored_template_inputs,
181 docker_image, docker_index, ignore_template_numbers,
182 verbose):
183 """Nipype to Boutiques exporter.
184
185 See Boutiques specification at https://github.com/boutiques/schema.
186 """
187 from nipype.utils.nipype2boutiques import generate_boutiques_descriptor
188
189 # Generates JSON string
190 json_string = generate_boutiques_descriptor(module,
191 interface,
192 ignored_template_inputs,
193 docker_image,
194 docker_index,
195 verbose,
196 ignore_template_numbers)
197
198 # Writes JSON string to file
199 with open(output, 'w') as f:
200 f.write(json_string)
201
[end of nipype/scripts/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nipype/scripts/cli.py b/nipype/scripts/cli.py
--- a/nipype/scripts/cli.py
+++ b/nipype/scripts/cli.py
@@ -15,6 +15,7 @@
PythonModule,
check_not_none,)
+from .. import __version__
# declare the CLI group
@click.group(context_settings=CONTEXT_SETTINGS)
@@ -151,6 +152,10 @@
args = iface_parser.parse_args(args=ctx.args)
run_instance(node, args)
+@cli.command(context_settings=CONTEXT_SETTINGS)
+def version():
+ """Print current version of Nipype."""
+ click.echo(__version__)
@cli.group()
def convert():
| {"golden_diff": "diff --git a/nipype/scripts/cli.py b/nipype/scripts/cli.py\n--- a/nipype/scripts/cli.py\n+++ b/nipype/scripts/cli.py\n@@ -15,6 +15,7 @@\n PythonModule,\n check_not_none,)\n \n+from .. import __version__\n \n # declare the CLI group\n @click.group(context_settings=CONTEXT_SETTINGS)\n@@ -151,6 +152,10 @@\n args = iface_parser.parse_args(args=ctx.args)\n run_instance(node, args)\n \[email protected](context_settings=CONTEXT_SETTINGS)\n+def version():\n+ \"\"\"Print current version of Nipype.\"\"\"\n+ click.echo(__version__)\n \n @cli.group()\n def convert():\n", "issue": "nipypecli should have --version \n### Summary\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n\r\n### Actual behavior\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n### Expected behavior\r\n```shell\r\n$> nipypecli --version\r\n0.13.1\r\n```\r\n### How to replicate the behavior\r\n\r\n```shell\r\n$> nipypecli --version\r\nError: no such option: --version\r\n```\r\n### Script/Workflow details\r\n\r\nPlease put URL to code or code here (if not too long).\r\n\r\nhttp://www.onerussian.com/tmp/nipypecli\r\n\r\n### Platform details:\r\n\r\nPlease paste the output of: `python -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\"`\r\n\r\n```shell\r\n$> python -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\" \r\n{'nibabel_version': '2.1.0', 'sys_executable': '/usr/bin/python', 'networkx_version': '1.11', 'numpy_version': '1.12.0', 'sys_platform': 'linux2', 'sys_version': '2.7.13 (default, Jan 19 2017, 14:48:08) \\n[GCC 6.3.0 20170118]', 'commit_source': u'archive substitution', 'commit_hash': u'8946bcab9d0e2f24e5364e42d4a7766e00237cb8', 'pkg_path': '/usr/lib/python2.7/dist-packages/nipype', 'nipype_version': u'0.13.1', 'traits_version': '4.6.0', 'scipy_version': '0.18.1'}\r\n0.13.1\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "#!python\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\nfrom io import open\n\nimport click\n\nfrom .instance import list_interfaces\nfrom .utils import (CONTEXT_SETTINGS,\n UNKNOWN_OPTIONS,\n ExistingDirPath,\n ExistingFilePath,\n UnexistingFilePath,\n RegularExpression,\n PythonModule,\n check_not_none,)\n\n\n# declare the CLI group\[email protected](context_settings=CONTEXT_SETTINGS)\ndef cli():\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('logdir', type=ExistingDirPath, callback=check_not_none)\[email protected]('-r', '--regex', type=RegularExpression(), callback=check_not_none,\n help='Regular expression to be searched in each traceback.')\ndef search(logdir, regex):\n \"\"\"Search for tracebacks content.\n\n Search for traceback inside a folder of nipype crash log files that match\n a given regular expression.\n\n Examples:\\n\n nipypecli search nipype/wd/log -r '.*subject123.*'\n \"\"\"\n from .crash_files import iter_tracebacks\n\n for file, trace in iter_tracebacks(logdir):\n if regex.search(trace):\n click.echo(\"-\" * len(file))\n click.echo(file)\n click.echo(\"-\" * len(file))\n click.echo(trace)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('crashfile', type=ExistingFilePath, callback=check_not_none)\[email protected]('-r', '--rerun', is_flag=True, flag_value=True,\n help='Rerun crashed node.')\[email protected]('-d', '--debug', is_flag=True, flag_value=True,\n help='Enable Python debugger when 
re-executing.')\[email protected]('-i', '--ipydebug', is_flag=True, flag_value=True,\n help='Enable IPython debugger when re-executing.')\[email protected]('-w', '--dir', type=ExistingDirPath,\n help='Directory where to run the node in.')\ndef crash(crashfile, rerun, debug, ipydebug, dir):\n \"\"\"Display Nipype crash files.\n\n For certain crash files, one can rerun a failed node in a temp directory.\n\n Examples:\\n\n nipypecli crash crashfile.pklz\\n\n nipypecli crash crashfile.pklz -r -i\\n\n \"\"\"\n from .crash_files import display_crash_file\n\n debug = 'ipython' if ipydebug else debug\n if debug == 'ipython':\n import sys\n from IPython.core import ultratb\n sys.excepthook = ultratb.FormattedTB(mode='Verbose',\n color_scheme='Linux',\n call_pdb=1)\n display_crash_file(crashfile, rerun, debug, dir)\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected]('pklz_file', type=ExistingFilePath, callback=check_not_none)\ndef show(pklz_file):\n \"\"\"Print the content of Nipype node .pklz file.\n\n Examples:\\n\n nipypecli show node.pklz\n \"\"\"\n from pprint import pprint\n from ..utils.filemanip import loadpkl\n\n pkl_data = loadpkl(pklz_file)\n pprint(pkl_data)\n\n\[email protected](context_settings=UNKNOWN_OPTIONS)\[email protected]('module', type=PythonModule(), required=False,\n callback=check_not_none)\[email protected]('interface', type=str, required=False)\[email protected]('--list', is_flag=True, flag_value=True,\n help='List the available Interfaces inside the given module.')\[email protected]('-h', '--help', is_flag=True, flag_value=True,\n help='Show help message and exit.')\[email protected]_context\ndef run(ctx, module, interface, list, help):\n \"\"\"Run a Nipype Interface.\n\n Examples:\\n\n nipypecli run nipype.interfaces.nipy --list\\n\n nipypecli run nipype.interfaces.nipy ComputeMask --help\n \"\"\"\n import argparse\n from .utils import add_args_options\n from ..utils.nipype_cmd import run_instance\n\n # print run command help if no arguments are given\n module_given = bool(module)\n if not module_given:\n click.echo(ctx.command.get_help(ctx))\n\n # print the list of available interfaces for the given module\n elif (module_given and list) or (module_given and not interface):\n iface_names = list_interfaces(module)\n click.echo('Available Interfaces:')\n for if_name in iface_names:\n click.echo(' {}'.format(if_name))\n\n # check the interface\n elif (module_given and interface):\n # create the argument parser\n description = \"Run {}\".format(interface)\n prog = \" \".join([ctx.command_path,\n module.__name__,\n interface] + ctx.args)\n iface_parser = argparse.ArgumentParser(description=description,\n prog=prog)\n\n # instantiate the interface\n node = getattr(module, interface)()\n iface_parser = add_args_options(iface_parser, node)\n\n if not ctx.args:\n # print the interface help\n try:\n iface_parser.print_help()\n except:\n print('An error ocurred when trying to print the full'\n 'command help, printing usage.')\n finally:\n iface_parser.print_usage()\n else:\n # run the interface\n args = iface_parser.parse_args(args=ctx.args)\n run_instance(node, args)\n\n\[email protected]()\ndef convert():\n \"\"\"Export nipype interfaces to other formats.\"\"\"\n pass\n\n\[email protected](context_settings=CONTEXT_SETTINGS)\[email protected](\"-i\", \"--interface\", type=str, required=True,\n help=\"Name of the Nipype interface to export.\")\[email protected](\"-m\", \"--module\", type=PythonModule(), required=True,\n callback=check_not_none,\n 
help=\"Module where the interface is defined.\")\[email protected](\"-o\", \"--output\", type=UnexistingFilePath, required=True,\n callback=check_not_none,\n help=\"JSON file name where the Boutiques descriptor will be written.\")\[email protected](\"-t\", \"--ignored-template-inputs\", type=str, multiple=True,\n help=\"Interface inputs ignored in path template creations.\")\[email protected](\"-d\", \"--docker-image\", type=str,\n help=\"Name of the Docker image where the Nipype interface is available.\")\[email protected](\"-r\", \"--docker-index\", type=str,\n help=\"Docker index where the Docker image is stored (e.g. http://index.docker.io).\")\[email protected](\"-n\", \"--ignore-template-numbers\", is_flag=True, flag_value=True,\n help=\"Ignore all numbers in path template creations.\")\[email protected](\"-v\", \"--verbose\", is_flag=True, flag_value=True,\n help=\"Enable verbose output.\")\ndef boutiques(interface, module, output, ignored_template_inputs,\n docker_image, docker_index, ignore_template_numbers,\n verbose):\n \"\"\"Nipype to Boutiques exporter.\n\n See Boutiques specification at https://github.com/boutiques/schema.\n \"\"\"\n from nipype.utils.nipype2boutiques import generate_boutiques_descriptor\n\n # Generates JSON string\n json_string = generate_boutiques_descriptor(module,\n interface,\n ignored_template_inputs,\n docker_image,\n docker_index,\n verbose,\n ignore_template_numbers)\n\n # Writes JSON string to file\n with open(output, 'w') as f:\n f.write(json_string)\n", "path": "nipype/scripts/cli.py"}]} | 3,091 | 156 |
gh_patches_debug_27790 | rasdani/github-patches | git_diff | great-expectations__great_expectations-6871 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
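To make the requested change concrete, a small Python 2 oriented sketch contrasting the two styles named in the issue; the variable names are illustrative.

```python
# Python 2 sketch: avoiding truncating integer division.
# Preferred: a single future import makes `/` mean true division module-wide.
from __future__ import division

x, y = 7, 2
print(x / y)    # 3.5  (true division, matching Python 3 semantics)
print(x // y)   # 3    (explicit floor division where truncation is intended)

# Discouraged workaround: coercing to float at every call site.
print(1. * x / y)  # 3.5, but the `1.*` has to be repeated everywhere
```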
<code>
[start of great_expectations/expectations/core/expect_column_values_to_not_match_regex.py]
1 from typing import TYPE_CHECKING, List, Optional
2
3 from great_expectations.core import (
4 ExpectationConfiguration,
5 ExpectationValidationResult,
6 )
7 from great_expectations.expectations.expectation import (
8 ColumnMapExpectation,
9 InvalidExpectationConfigurationError,
10 render_evaluation_parameter_string,
11 )
12 from great_expectations.render import (
13 LegacyDescriptiveRendererType,
14 LegacyRendererType,
15 RenderedStringTemplateContent,
16 )
17 from great_expectations.render.renderer.renderer import renderer
18 from great_expectations.render.renderer_configuration import (
19 RendererConfiguration,
20 RendererValueType,
21 )
22 from great_expectations.render.util import (
23 num_to_str,
24 parse_row_condition_string_pandas_engine,
25 substitute_none_for_missing,
26 )
27 from great_expectations.rule_based_profiler.config.base import (
28 ParameterBuilderConfig,
29 RuleBasedProfilerConfig,
30 )
31 from great_expectations.rule_based_profiler.parameter_container import (
32 DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,
33 FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,
34 FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,
35 FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,
36 PARAMETER_KEY,
37 VARIABLES_KEY,
38 )
39
40 if TYPE_CHECKING:
41 from great_expectations.render.renderer_configuration import AddParamArgs
42
43 try:
44 import sqlalchemy as sa # noqa: F401
45 except ImportError:
46 pass
47
48
49 class ExpectColumnValuesToNotMatchRegex(ColumnMapExpectation):
50 """Expect the column entries to be strings that do NOT match a given regular expression.
51
52 The regex must not match \
53 any portion of the provided string. For example, "[at]+" would identify the following strings as expected: \
54 "fish", "dog", and the following as unexpected: "cat", "hat".
55
56 expect_column_values_to_not_match_regex is a \
57 [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).
58
59 Args:
60 column (str): \
61 The column name.
62 regex (str): \
63 The regular expression the column entries should NOT match.
64
65 Keyword Args:
66 mostly (None or a float between 0 and 1): \
67 Successful if at least mostly fraction of values match the expectation. \
68 For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).
69
70 Other Parameters:
71 result_format (str or None): \
72 Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \
73 For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).
74 include_config (boolean): \
75 If True, then include the expectation config as part of the result object.
76 catch_exceptions (boolean or None): \
77 If True, then catch exceptions and include them as part of the result object. \
78 For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).
79 meta (dict or None): \
80 A JSON-serializable dictionary (nesting allowed) that will be included in the output without \
81 modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).
82
83 Returns:
84 An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)
85
86 Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.
87
88 See Also:
89 [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)
90 [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)
91 [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)
92 [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)
93 [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)
94 [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)
95 [expect_column_values_to_not_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern_list)
96 """
97
98 library_metadata = {
99 "maturity": "production",
100 "tags": ["core expectation", "column map expectation"],
101 "contributors": [
102 "@great_expectations",
103 ],
104 "requirements": [],
105 "has_full_test_suite": True,
106 "manually_reviewed_code": True,
107 }
108
109 map_metric = "column_values.not_match_regex"
110 success_keys = (
111 "regex",
112 "mostly",
113 "auto",
114 "profiler_config",
115 )
116
117 regex_pattern_string_parameter_builder_config: ParameterBuilderConfig = (
118 ParameterBuilderConfig(
119 module_name="great_expectations.rule_based_profiler.parameter_builder",
120 class_name="RegexPatternStringParameterBuilder",
121 name="regex_pattern_string_parameter_builder",
122 metric_domain_kwargs=DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,
123 metric_value_kwargs=None,
124 evaluation_parameter_builder_configs=None,
125 )
126 )
127 validation_parameter_builder_configs: List[ParameterBuilderConfig] = [
128 regex_pattern_string_parameter_builder_config
129 ]
130 default_profiler_config = RuleBasedProfilerConfig(
131 name="expect_column_values_to_not_match_regex", # Convention: use "expectation_type" as profiler name.
132 config_version=1.0,
133 variables={},
134 rules={
135 "default_expect_column_values_to_not_match_regex_rule": {
136 "variables": {
137 "mostly": 1.0,
138 },
139 "domain_builder": {
140 "class_name": "ColumnDomainBuilder",
141 "module_name": "great_expectations.rule_based_profiler.domain_builder",
142 },
143 "expectation_configuration_builders": [
144 {
145 "expectation_type": "expect_column_values_to_not_match_regex",
146 "class_name": "DefaultExpectationConfigurationBuilder",
147 "module_name": "great_expectations.rule_based_profiler.expectation_configuration_builder",
148 "validation_parameter_builder_configs": validation_parameter_builder_configs,
149 "column": f"{DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}column",
150 "regex": f"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY}",
151 "mostly": f"{VARIABLES_KEY}mostly",
152 "meta": {
153 "profiler_details": f"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY}",
154 },
155 },
156 ],
157 },
158 },
159 )
160
161 default_kwarg_values = {
162 "row_condition": None,
163 "condition_parser": None, # we expect this to be explicitly set whenever a row_condition is passed
164 "mostly": 1,
165 "result_format": "BASIC",
166 "include_config": True,
167 "catch_exceptions": True,
168 "auto": False,
169 "profiler_config": default_profiler_config,
170 }
171 args_keys = (
172 "column",
173 "regex",
174 )
175
176 def validate_configuration(
177 self, configuration: Optional[ExpectationConfiguration] = None
178 ) -> None:
179 super().validate_configuration(configuration)
180 configuration = configuration or self.configuration
181 try:
182 assert "regex" in configuration.kwargs, "regex is required"
183 assert isinstance(
184 configuration.kwargs["regex"], (str, dict)
185 ), "regex must be a string"
186 if isinstance(configuration.kwargs["regex"], dict):
187 assert (
188 "$PARAMETER" in configuration.kwargs["regex"]
189 ), 'Evaluation Parameter dict for regex kwarg must have "$PARAMETER" key.'
190 except AssertionError as e:
191 raise InvalidExpectationConfigurationError(str(e))
192
193 @classmethod
194 def _prescriptive_template(
195 cls,
196 renderer_configuration: RendererConfiguration,
197 ) -> RendererConfiguration:
198 add_param_args: AddParamArgs = (
199 ("column", RendererValueType.STRING),
200 ("regex", RendererValueType.STRING),
201 ("mostly", RendererValueType.NUMBER),
202 )
203 for name, param_type in add_param_args:
204 renderer_configuration.add_param(name=name, param_type=param_type)
205
206 params = renderer_configuration.params
207
208 if not params.regex:
209 template_str = (
210 "values must not match a regular expression but none was specified."
211 )
212 else:
213 if renderer_configuration.include_column_name:
214 template_str = (
215 "$column values must not match this regular expression: $regex"
216 )
217 else:
218 template_str = "values must not match this regular expression: $regex"
219
220 if params.mostly and params.mostly.value < 1.0:
221 renderer_configuration = cls._add_mostly_pct_param(
222 renderer_configuration=renderer_configuration
223 )
224 template_str += ", at least $mostly_pct % of the time."
225 else:
226 template_str += "."
227
228 renderer_configuration.template_str = template_str
229
230 return renderer_configuration
231
232 @classmethod
233 @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)
234 @render_evaluation_parameter_string
235 def _prescriptive_renderer(
236 cls,
237 configuration: Optional[ExpectationConfiguration] = None,
238 result: Optional[ExpectationValidationResult] = None,
239 runtime_configuration: Optional[dict] = None,
240 **kwargs,
241 ):
242 runtime_configuration = runtime_configuration or {}
243 include_column_name = (
244 False if runtime_configuration.get("include_column_name") is False else True
245 )
246 styling = runtime_configuration.get("styling")
247 params = substitute_none_for_missing(
248 configuration.kwargs,
249 ["column", "regex", "mostly", "row_condition", "condition_parser"],
250 )
251
252 if not params.get("regex"):
253 template_str = (
254 "values must not match a regular expression but none was specified."
255 )
256 else:
257 if params["mostly"] is not None and params["mostly"] < 1.0:
258 params["mostly_pct"] = num_to_str(
259 params["mostly"] * 100, precision=15, no_scientific=True
260 )
261 # params["mostly_pct"] = "{:.14f}".format(params["mostly"]*100).rstrip("0").rstrip(".")
262 if include_column_name:
263 template_str = "$column values must not match this regular expression: $regex, at least $mostly_pct % of the time."
264 else:
265 template_str = "values must not match this regular expression: $regex, at least $mostly_pct % of the time."
266 else:
267 if include_column_name:
268 template_str = (
269 "$column values must not match this regular expression: $regex."
270 )
271 else:
272 template_str = (
273 "values must not match this regular expression: $regex."
274 )
275
276 if params["row_condition"] is not None:
277 (
278 conditional_template_str,
279 conditional_params,
280 ) = parse_row_condition_string_pandas_engine(params["row_condition"])
281 template_str = f"{conditional_template_str}, then {template_str}"
282 params.update(conditional_params)
283
284 return [
285 RenderedStringTemplateContent(
286 **{
287 "content_block_type": "string_template",
288 "string_template": {
289 "template": template_str,
290 "params": params,
291 "styling": styling,
292 },
293 }
294 )
295 ]
296
297 @classmethod
298 @renderer(
299 renderer_type=LegacyDescriptiveRendererType.COLUMN_PROPERTIES_TABLE_REGEX_COUNT_ROW
300 )
301 def _descriptive_column_properties_table_regex_count_row_renderer(
302 cls,
303 configuration: Optional[ExpectationConfiguration] = None,
304 result: Optional[ExpectationValidationResult] = None,
305 runtime_configuration: Optional[dict] = None,
306 **kwargs,
307 ):
308 assert result, "Must pass in result."
309 expectation_config = configuration or result.expectation_config
310 expectation_kwargs = expectation_config.kwargs
311 regex = expectation_kwargs.get("regex")
312 unexpected_count = result.result.get("unexpected_count", "--")
313 if regex == "^\\s+|\\s+$":
314 return ["Leading or trailing whitespace (n)", unexpected_count]
315 else:
316 return [f"Regex: {regex}", unexpected_count]
317
[end of great_expectations/expectations/core/expect_column_values_to_not_match_regex.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py
--- a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py
+++ b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py
@@ -176,13 +176,28 @@
def validate_configuration(
self, configuration: Optional[ExpectationConfiguration] = None
) -> None:
+ """
+ Validates the configuration of an Expectation.
+
+ For `expect_column_values_to_not_match_regex` it is required that:
+ - 'regex' kwarg is of type str or dict
+ - if dict, assert a key "$PARAMETER" is present
+
+ Args:
+ configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled
+ from the configuration attribute of the Expectation instance.
+
+ Raises:
+ `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the
+ Expectation."
+ """
super().validate_configuration(configuration)
configuration = configuration or self.configuration
try:
assert "regex" in configuration.kwargs, "regex is required"
assert isinstance(
configuration.kwargs["regex"], (str, dict)
- ), "regex must be a string"
+ ), "regex must be a string or dict"
if isinstance(configuration.kwargs["regex"], dict):
assert (
"$PARAMETER" in configuration.kwargs["regex"]
| {"golden_diff": "diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n--- a/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n+++ b/great_expectations/expectations/core/expect_column_values_to_not_match_regex.py\n@@ -176,13 +176,28 @@\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n+ \"\"\"\n+ Validates the configuration of an Expectation.\n+\n+ For `expect_column_values_to_not_match_regex` it is required that:\n+ - 'regex' kwarg is of type str or dict\n+ - if dict, assert a key \"$PARAMETER\" is present\n+\n+ Args:\n+ configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled\n+ from the configuration attribute of the Expectation instance.\n+\n+ Raises:\n+ `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the\n+ Expectation.\"\n+ \"\"\"\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert \"regex\" in configuration.kwargs, \"regex is required\"\n assert isinstance(\n configuration.kwargs[\"regex\"], (str, dict)\n- ), \"regex must be a string\"\n+ ), \"regex must be a string or dict\"\n if isinstance(configuration.kwargs[\"regex\"], dict):\n assert (\n \"$PARAMETER\" in configuration.kwargs[\"regex\"]\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from typing import TYPE_CHECKING, List, Optional\n\nfrom great_expectations.core import (\n ExpectationConfiguration,\n ExpectationValidationResult,\n)\nfrom great_expectations.expectations.expectation import (\n ColumnMapExpectation,\n InvalidExpectationConfigurationError,\n render_evaluation_parameter_string,\n)\nfrom great_expectations.render import (\n LegacyDescriptiveRendererType,\n LegacyRendererType,\n RenderedStringTemplateContent,\n)\nfrom great_expectations.render.renderer.renderer import renderer\nfrom great_expectations.render.renderer_configuration import (\n RendererConfiguration,\n RendererValueType,\n)\nfrom great_expectations.render.util import (\n num_to_str,\n parse_row_condition_string_pandas_engine,\n substitute_none_for_missing,\n)\nfrom great_expectations.rule_based_profiler.config.base import (\n ParameterBuilderConfig,\n RuleBasedProfilerConfig,\n)\nfrom great_expectations.rule_based_profiler.parameter_container import (\n DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,\n FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,\n FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,\n PARAMETER_KEY,\n VARIABLES_KEY,\n)\n\nif TYPE_CHECKING:\n from great_expectations.render.renderer_configuration import AddParamArgs\n\ntry:\n import sqlalchemy as sa # noqa: F401\nexcept ImportError:\n pass\n\n\nclass ExpectColumnValuesToNotMatchRegex(ColumnMapExpectation):\n \"\"\"Expect the column entries to be strings that do NOT match a given regular expression.\n\n The regex must not match \\\n any portion of the provided string. 
For example, \"[at]+\" would identify the following strings as expected: \\\n \"fish\", \"dog\", and the following as unexpected: \"cat\", \"hat\".\n\n expect_column_values_to_not_match_regex is a \\\n [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).\n\n Args:\n column (str): \\\n The column name.\n regex (str): \\\n The regular expression the column entries should NOT match.\n\n Keyword Args:\n mostly (None or a float between 0 and 1): \\\n Successful if at least mostly fraction of values match the expectation. \\\n For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).\n\n Other Parameters:\n result_format (str or None): \\\n Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \\\n For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).\n include_config (boolean): \\\n If True, then include the expectation config as part of the result object.\n catch_exceptions (boolean or None): \\\n If True, then catch exceptions and include them as part of the result object. \\\n For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).\n meta (dict or None): \\\n A JSON-serializable dictionary (nesting allowed) that will be included in the output without \\\n modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).\n\n Returns:\n An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)\n\n Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.\n\n See Also:\n [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)\n [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)\n [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)\n [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)\n [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)\n [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)\n [expect_column_values_to_not_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern_list)\n \"\"\"\n\n library_metadata = {\n \"maturity\": \"production\",\n \"tags\": [\"core expectation\", \"column map expectation\"],\n \"contributors\": [\n \"@great_expectations\",\n ],\n \"requirements\": [],\n \"has_full_test_suite\": True,\n \"manually_reviewed_code\": True,\n }\n\n map_metric = \"column_values.not_match_regex\"\n success_keys = (\n \"regex\",\n \"mostly\",\n \"auto\",\n \"profiler_config\",\n )\n\n regex_pattern_string_parameter_builder_config: ParameterBuilderConfig = (\n ParameterBuilderConfig(\n module_name=\"great_expectations.rule_based_profiler.parameter_builder\",\n class_name=\"RegexPatternStringParameterBuilder\",\n name=\"regex_pattern_string_parameter_builder\",\n 
metric_domain_kwargs=DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n metric_value_kwargs=None,\n evaluation_parameter_builder_configs=None,\n )\n )\n validation_parameter_builder_configs: List[ParameterBuilderConfig] = [\n regex_pattern_string_parameter_builder_config\n ]\n default_profiler_config = RuleBasedProfilerConfig(\n name=\"expect_column_values_to_not_match_regex\", # Convention: use \"expectation_type\" as profiler name.\n config_version=1.0,\n variables={},\n rules={\n \"default_expect_column_values_to_not_match_regex_rule\": {\n \"variables\": {\n \"mostly\": 1.0,\n },\n \"domain_builder\": {\n \"class_name\": \"ColumnDomainBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.domain_builder\",\n },\n \"expectation_configuration_builders\": [\n {\n \"expectation_type\": \"expect_column_values_to_not_match_regex\",\n \"class_name\": \"DefaultExpectationConfigurationBuilder\",\n \"module_name\": \"great_expectations.rule_based_profiler.expectation_configuration_builder\",\n \"validation_parameter_builder_configs\": validation_parameter_builder_configs,\n \"column\": f\"{DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}column\",\n \"regex\": f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY}\",\n \"mostly\": f\"{VARIABLES_KEY}mostly\",\n \"meta\": {\n \"profiler_details\": f\"{PARAMETER_KEY}{regex_pattern_string_parameter_builder_config.name}{FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER}{FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY}\",\n },\n },\n ],\n },\n },\n )\n\n default_kwarg_values = {\n \"row_condition\": None,\n \"condition_parser\": None, # we expect this to be explicitly set whenever a row_condition is passed\n \"mostly\": 1,\n \"result_format\": \"BASIC\",\n \"include_config\": True,\n \"catch_exceptions\": True,\n \"auto\": False,\n \"profiler_config\": default_profiler_config,\n }\n args_keys = (\n \"column\",\n \"regex\",\n )\n\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert \"regex\" in configuration.kwargs, \"regex is required\"\n assert isinstance(\n configuration.kwargs[\"regex\"], (str, dict)\n ), \"regex must be a string\"\n if isinstance(configuration.kwargs[\"regex\"], dict):\n assert (\n \"$PARAMETER\" in configuration.kwargs[\"regex\"]\n ), 'Evaluation Parameter dict for regex kwarg must have \"$PARAMETER\" key.'\n except AssertionError as e:\n raise InvalidExpectationConfigurationError(str(e))\n\n @classmethod\n def _prescriptive_template(\n cls,\n renderer_configuration: RendererConfiguration,\n ) -> RendererConfiguration:\n add_param_args: AddParamArgs = (\n (\"column\", RendererValueType.STRING),\n (\"regex\", RendererValueType.STRING),\n (\"mostly\", RendererValueType.NUMBER),\n )\n for name, param_type in add_param_args:\n renderer_configuration.add_param(name=name, param_type=param_type)\n\n params = renderer_configuration.params\n\n if not params.regex:\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if renderer_configuration.include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex\"\n )\n else:\n template_str = \"values must not match this regular expression: $regex\"\n\n if 
params.mostly and params.mostly.value < 1.0:\n renderer_configuration = cls._add_mostly_pct_param(\n renderer_configuration=renderer_configuration\n )\n template_str += \", at least $mostly_pct % of the time.\"\n else:\n template_str += \".\"\n\n renderer_configuration.template_str = template_str\n\n return renderer_configuration\n\n @classmethod\n @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)\n @render_evaluation_parameter_string\n def _prescriptive_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n runtime_configuration = runtime_configuration or {}\n include_column_name = (\n False if runtime_configuration.get(\"include_column_name\") is False else True\n )\n styling = runtime_configuration.get(\"styling\")\n params = substitute_none_for_missing(\n configuration.kwargs,\n [\"column\", \"regex\", \"mostly\", \"row_condition\", \"condition_parser\"],\n )\n\n if not params.get(\"regex\"):\n template_str = (\n \"values must not match a regular expression but none was specified.\"\n )\n else:\n if params[\"mostly\"] is not None and params[\"mostly\"] < 1.0:\n params[\"mostly_pct\"] = num_to_str(\n params[\"mostly\"] * 100, precision=15, no_scientific=True\n )\n # params[\"mostly_pct\"] = \"{:.14f}\".format(params[\"mostly\"]*100).rstrip(\"0\").rstrip(\".\")\n if include_column_name:\n template_str = \"$column values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n template_str = \"values must not match this regular expression: $regex, at least $mostly_pct % of the time.\"\n else:\n if include_column_name:\n template_str = (\n \"$column values must not match this regular expression: $regex.\"\n )\n else:\n template_str = (\n \"values must not match this regular expression: $regex.\"\n )\n\n if params[\"row_condition\"] is not None:\n (\n conditional_template_str,\n conditional_params,\n ) = parse_row_condition_string_pandas_engine(params[\"row_condition\"])\n template_str = f\"{conditional_template_str}, then {template_str}\"\n params.update(conditional_params)\n\n return [\n RenderedStringTemplateContent(\n **{\n \"content_block_type\": \"string_template\",\n \"string_template\": {\n \"template\": template_str,\n \"params\": params,\n \"styling\": styling,\n },\n }\n )\n ]\n\n @classmethod\n @renderer(\n renderer_type=LegacyDescriptiveRendererType.COLUMN_PROPERTIES_TABLE_REGEX_COUNT_ROW\n )\n def _descriptive_column_properties_table_regex_count_row_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs,\n ):\n assert result, \"Must pass in result.\"\n expectation_config = configuration or result.expectation_config\n expectation_kwargs = expectation_config.kwargs\n regex = expectation_kwargs.get(\"regex\")\n unexpected_count = result.result.get(\"unexpected_count\", \"--\")\n if regex == \"^\\\\s+|\\\\s+$\":\n return [\"Leading or trailing whitespace (n)\", unexpected_count]\n else:\n return [f\"Regex: {regex}\", unexpected_count]\n", "path": "great_expectations/expectations/core/expect_column_values_to_not_match_regex.py"}]} | 4,094 | 349 |
gh_patches_debug_12631 | rasdani/github-patches | git_diff | cupy__cupy-6118 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Indexing with assignment between broadcastable arrays is inconsistent with NumPy
When performing `A[idx, ...] = B` with `B` broadcastable over `A[idx, ...]` (so no storage expansion for `A[idx, ...]` is necessary) with `B.ndim > A.ndim` CuPy throws a shape mismatch error while NumPy handles this case.
* Code to reproduce
```python
In [1]: import numpy
In [2]: import cupy
In [3]: def test(module):
...: x = module.zeros((3, 3, 3))
...: y = module.ones((1, 3, 3))
...: x[0, ...] = y
...: return x
...:
...:
In [4]: test(numpy)
Out[4]:
array([[[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]])
In [5]: test(cupy)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-3f849ce2914e> in <module>()
----> 1 test(cupy)
<ipython-input-3-450cff366473> in test(module)
2 x = module.zeros((3, 3, 3))
3 y = module.ones((1, 3, 3))
----> 4 x[0, ...] = y
5 return x
cupy/_core/core.pyx in cupy._core.core.ndarray.__setitem__()
cupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._ndarray_setitem()
cupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._scatter_op()
cupy/_core/_kernel.pyx in cupy._core._kernel.ufunc.__call__()
cupy/_core/_kernel.pyx in cupy._core._kernel._get_out_args()
ValueError: Out shape is mismatched
```
* Conditions
```
OS : Linux-5.4.0-81-generic-x86_64-with-debian-bullseye-sid
Python Version : 3.6.7
CuPy Version : 9.5.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.19.5
SciPy Version : None
Cython Build Version : 0.29.24
Cython Runtime Version : None
CUDA Root : /home/nik/.conda/envs/pytorch-cuda-dev
nvcc PATH : /home/nik/.conda/envs/pytorch-cuda-dev/bin/nvcc
CUDA Build Version : 11020
CUDA Driver Version : 11030
CUDA Runtime Version : 11020
cuBLAS Version : (available)
cuFFT Version : 10401
cuRAND Version : 10203
cuSOLVER Version : (11, 1, 0)
cuSPARSE Version : (available)
NVRTC Version : (11, 2)
Thrust Version : 101000
CUB Build Version : 101000
Jitify Build Version : <unknown>
cuDNN Build Version : 8201
cuDNN Version : 8004
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 2060
Device 0 Compute Capability : 75
Device 0 PCI Bus ID : 0000:01:00.0
Device 1 Name : NVIDIA GeForce RTX 2060
Device 1 Compute Capability : 75
Device 1 PCI Bus ID : 0000:21:00.0
```
</issue>
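A short NumPy-only illustration of the behaviour the issue expects — extra leading length-1 axes on the source broadcast away instead of causing a shape error; the shapes match the reproducer above.

```python
# NumPy reference behaviour for the reproducer shapes.
import numpy as np

x = np.zeros((3, 3, 3))
y = np.ones((1, 3, 3))   # y.ndim exceeds x[0, ...].ndim, but the extra axis is length 1

x[0, ...] = y            # accepted: the leading unit axis broadcasts away
assert np.array_equal(x[0], np.ones((3, 3)))

# Equivalent explicit form: strip the leading unit axes before assigning.
x[1, ...] = y.squeeze(0)
assert np.array_equal(x[1], np.ones((3, 3)))
```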
<code>
[start of cupy/_manipulation/basic.py]
1 import numpy
2
3 from cupy import _core
4 from cupy._core import _fusion_interface
5 from cupy._core import fusion
6 from cupy._sorting import search
7 from cupy_backends.cuda.api import runtime
8
9
10 def copyto(dst, src, casting='same_kind', where=None):
11 """Copies values from one array to another with broadcasting.
12
13 This function can be called for arrays on different devices. In this case,
14 casting, ``where``, and broadcasting is not supported, and an exception is
15 raised if these are used.
16
17 Args:
18 dst (cupy.ndarray): Target array.
19 src (cupy.ndarray): Source array.
20 casting (str): Casting rule. See :func:`numpy.can_cast` for detail.
21 where (cupy.ndarray of bool): If specified, this array acts as a mask,
22 and an element is copied only if the corresponding element of
23 ``where`` is True.
24
25 .. seealso:: :func:`numpy.copyto`
26
27 """
28
29 src_type = type(src)
30 src_is_python_scalar = src_type in (
31 int, bool, float, complex,
32 fusion._FusionVarScalar, _fusion_interface._ScalarProxy)
33 if src_is_python_scalar:
34 src_dtype = numpy.dtype(type(src))
35 can_cast = numpy.can_cast(src, dst.dtype, casting)
36 else:
37 src_dtype = src.dtype
38 can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)
39
40 if not can_cast:
41 raise TypeError('Cannot cast %s to %s in %s casting mode' %
42 (src_dtype, dst.dtype, casting))
43 if fusion._is_fusing():
44 if where is None:
45 _core.elementwise_copy(src, dst)
46 else:
47 fusion._call_ufunc(search._where_ufunc, where, src, dst, dst)
48 return
49
50 if where is not None:
51 _core.elementwise_copy(src, dst, _where=where)
52 return
53
54 if dst.size == 0:
55 return
56
57 if src_is_python_scalar:
58 dst.fill(src)
59 return
60
61 if _can_memcpy(dst, src):
62 dst.data.copy_from_async(src.data, src.nbytes)
63 return
64
65 device = dst.device
66 prev_device = runtime.getDevice()
67 try:
68 runtime.setDevice(device.id)
69 if src.device != device:
70 src = src.copy()
71 _core.elementwise_copy(src, dst)
72 finally:
73 runtime.setDevice(prev_device)
74
75
76 def _can_memcpy(dst, src):
77 c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous
78 f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous
79 return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \
80 dst.size == src.size
81
[end of cupy/_manipulation/basic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/_manipulation/basic.py b/cupy/_manipulation/basic.py
--- a/cupy/_manipulation/basic.py
+++ b/cupy/_manipulation/basic.py
@@ -40,6 +40,16 @@
if not can_cast:
raise TypeError('Cannot cast %s to %s in %s casting mode' %
(src_dtype, dst.dtype, casting))
+
+ if not src_is_python_scalar and src.ndim > dst.ndim:
+ # NumPy allows stripping leading unit dimensions.
+ try:
+ src = src.squeeze(tuple(range(src.ndim - dst.ndim)))
+ except ValueError:
+ # "cannot select an axis to squeeze out
+ # which has size not equal to one"
+ pass # raise an error later
+
if fusion._is_fusing():
if where is None:
_core.elementwise_copy(src, dst)
| {"golden_diff": "diff --git a/cupy/_manipulation/basic.py b/cupy/_manipulation/basic.py\n--- a/cupy/_manipulation/basic.py\n+++ b/cupy/_manipulation/basic.py\n@@ -40,6 +40,16 @@\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n+\n+ if not src_is_python_scalar and src.ndim > dst.ndim:\n+ # NumPy allows stripping leading unit dimensions.\n+ try:\n+ src = src.squeeze(tuple(range(src.ndim - dst.ndim)))\n+ except ValueError:\n+ # \"cannot select an axis to squeeze out\n+ # which has size not equal to one\"\n+ pass # raise an error later\n+\n if fusion._is_fusing():\n if where is None:\n _core.elementwise_copy(src, dst)\n", "issue": "Indexing with assignment between broadcastable arrays is inconsistent with NumPy\nWhen performing `A[idx, ...] = B` with `B` broadcastable over `A[idx, ...]` (so no storage expansion for `A[idx, ...]` is necessary) with `B.ndim > A.ndim` CuPy throws a shape mismatch error while NumPy handles this case.\r\n\r\n* Code to reproduce\r\n```python\r\nIn [1]: import numpy\r\n\r\nIn [2]: import cupy\r\n\r\nIn [3]: def test(module):\r\n ...: x = module.zeros((3, 3, 3))\r\n ...: y = module.ones((1, 3, 3))\r\n ...: x[0, ...] = y\r\n ...: return x\r\n ...: \r\n ...: \r\n\r\nIn [4]: test(numpy)\r\nOut[4]: \r\narray([[[1., 1., 1.],\r\n [1., 1., 1.],\r\n [1., 1., 1.]],\r\n\r\n [[0., 0., 0.],\r\n [0., 0., 0.],\r\n [0., 0., 0.]],\r\n\r\n [[0., 0., 0.],\r\n [0., 0., 0.],\r\n [0., 0., 0.]]])\r\n\r\nIn [5]: test(cupy)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-3f849ce2914e> in <module>()\r\n----> 1 test(cupy)\r\n\r\n<ipython-input-3-450cff366473> in test(module)\r\n 2 x = module.zeros((3, 3, 3))\r\n 3 y = module.ones((1, 3, 3))\r\n----> 4 x[0, ...] 
= y\r\n 5 return x\r\n\r\ncupy/_core/core.pyx in cupy._core.core.ndarray.__setitem__()\r\n\r\ncupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._ndarray_setitem()\r\n\r\ncupy/_core/_routines_indexing.pyx in cupy._core._routines_indexing._scatter_op()\r\n\r\ncupy/_core/_kernel.pyx in cupy._core._kernel.ufunc.__call__()\r\n\r\ncupy/_core/_kernel.pyx in cupy._core._kernel._get_out_args()\r\n\r\nValueError: Out shape is mismatched\r\n\r\n```\r\n\r\n* Conditions\r\n```\r\nOS : Linux-5.4.0-81-generic-x86_64-with-debian-bullseye-sid\r\nPython Version : 3.6.7\r\nCuPy Version : 9.5.0\r\nCuPy Platform : NVIDIA CUDA\r\nNumPy Version : 1.19.5\r\nSciPy Version : None\r\nCython Build Version : 0.29.24\r\nCython Runtime Version : None\r\nCUDA Root : /home/nik/.conda/envs/pytorch-cuda-dev\r\nnvcc PATH : /home/nik/.conda/envs/pytorch-cuda-dev/bin/nvcc\r\nCUDA Build Version : 11020\r\nCUDA Driver Version : 11030\r\nCUDA Runtime Version : 11020\r\ncuBLAS Version : (available)\r\ncuFFT Version : 10401\r\ncuRAND Version : 10203\r\ncuSOLVER Version : (11, 1, 0)\r\ncuSPARSE Version : (available)\r\nNVRTC Version : (11, 2)\r\nThrust Version : 101000\r\nCUB Build Version : 101000\r\nJitify Build Version : <unknown>\r\ncuDNN Build Version : 8201\r\ncuDNN Version : 8004\r\nNCCL Build Version : None\r\nNCCL Runtime Version : None\r\ncuTENSOR Version : None\r\ncuSPARSELt Build Version : None\r\nDevice 0 Name : NVIDIA GeForce RTX 2060\r\nDevice 0 Compute Capability : 75\r\nDevice 0 PCI Bus ID : 0000:01:00.0\r\nDevice 1 Name : NVIDIA GeForce RTX 2060\r\nDevice 1 Compute Capability : 75\r\nDevice 1 PCI Bus ID : 0000:21:00.0\r\n```\r\n\n", "before_files": [{"content": "import numpy\n\nfrom cupy import _core\nfrom cupy._core import _fusion_interface\nfrom cupy._core import fusion\nfrom cupy._sorting import search\nfrom cupy_backends.cuda.api import runtime\n\n\ndef copyto(dst, src, casting='same_kind', where=None):\n \"\"\"Copies values from one array to another with broadcasting.\n\n This function can be called for arrays on different devices. In this case,\n casting, ``where``, and broadcasting is not supported, and an exception is\n raised if these are used.\n\n Args:\n dst (cupy.ndarray): Target array.\n src (cupy.ndarray): Source array.\n casting (str): Casting rule. See :func:`numpy.can_cast` for detail.\n where (cupy.ndarray of bool): If specified, this array acts as a mask,\n and an element is copied only if the corresponding element of\n ``where`` is True.\n\n .. 
seealso:: :func:`numpy.copyto`\n\n \"\"\"\n\n src_type = type(src)\n src_is_python_scalar = src_type in (\n int, bool, float, complex,\n fusion._FusionVarScalar, _fusion_interface._ScalarProxy)\n if src_is_python_scalar:\n src_dtype = numpy.dtype(type(src))\n can_cast = numpy.can_cast(src, dst.dtype, casting)\n else:\n src_dtype = src.dtype\n can_cast = numpy.can_cast(src_dtype, dst.dtype, casting)\n\n if not can_cast:\n raise TypeError('Cannot cast %s to %s in %s casting mode' %\n (src_dtype, dst.dtype, casting))\n if fusion._is_fusing():\n if where is None:\n _core.elementwise_copy(src, dst)\n else:\n fusion._call_ufunc(search._where_ufunc, where, src, dst, dst)\n return\n\n if where is not None:\n _core.elementwise_copy(src, dst, _where=where)\n return\n\n if dst.size == 0:\n return\n\n if src_is_python_scalar:\n dst.fill(src)\n return\n\n if _can_memcpy(dst, src):\n dst.data.copy_from_async(src.data, src.nbytes)\n return\n\n device = dst.device\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device.id)\n if src.device != device:\n src = src.copy()\n _core.elementwise_copy(src, dst)\n finally:\n runtime.setDevice(prev_device)\n\n\ndef _can_memcpy(dst, src):\n c_contiguous = dst.flags.c_contiguous and src.flags.c_contiguous\n f_contiguous = dst.flags.f_contiguous and src.flags.f_contiguous\n return (c_contiguous or f_contiguous) and dst.dtype == src.dtype and \\\n dst.size == src.size\n", "path": "cupy/_manipulation/basic.py"}]} | 2,296 | 199 |
gh_patches_debug_43430 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider jackinthebox is broken
During the global build at 2021-06-23-14-42-18, spider **jackinthebox** failed with **0 features** and **1 error**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/jackinthebox.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson))
</issue>
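For orientation, the patch at the end of this entry assumes the locations API now returns a nested payload; below is a minimal sketch of that shape, with field names taken from the patch and every value invented purely for illustration:

```python
# Hypothetical example payload, shaped after the keys the patched spider reads.
# None of these concrete values come from the real API.
example_response = {
    "Locations": [
        {
            "LocationId": "1234",
            "Address": {
                "StreetLine1": "123 Main St",
                "StreetLine2": "Suite 1",
                "City": "San Diego",
                "State": "CA",
                "Zipcode": "92101",
            },
            "Coordinates": {"Lat": 32.71, "Lon": -117.16},
            "OperationsData": {"BusinessPhoneNumber": "555-0100"},
            "OperatingHours": {
                "DineInAllDay": {"Monday": False, "Tuesday": False},
                "DineIn": {"Monday": "6:00am-10:00pm", "Tuesday": "6:00am-10:00pm"},
            },
        }
    ]
}
```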
<code>
[start of locations/spiders/jackinthebox.py]
1 import json
2 import re
3 import scrapy
4 from locations.items import GeojsonPointItem
5
6 class JackInTheBoxSpider(scrapy.Spider):
7 name = "jackinthebox"
8 item_attributes = { 'brand': "Jack In The Box" }
9 allowed_domains = ["jackinthebox.com"]
10 start_urls = (
11 "https://www.jackinthebox.com/api/locations",
12 )
13 dayMap = {
14 'monday': 'Mo',
15 'tuesday': 'Tu',
16 'wednesday': 'We',
17 'thursday': 'Th',
18 'friday': 'Fr',
19 'saturday': 'Sa',
20 'sunday': 'Su'
21 }
22 def opening_hours(self, days_hours):
23 day_groups = []
24 this_day_group = None
25 for day_hours in days_hours:
26 day = day_hours[0]
27 hours = day_hours[1]
28 match = re.search(r'^(\d{1,2}):(\d{2})\w*(a|p)m-(\d{1,2}):(\d{2})\w*(a|p)m?$', hours)
29 (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()
30
31 f_hr = int(f_hr)
32 if f_ampm == 'p':
33 f_hr += 12
34 elif f_ampm == 'a' and f_hr == 12:
35 f_hr = 0
36 t_hr = int(t_hr)
37 if t_ampm == 'p':
38 t_hr += 12
39 elif t_ampm == 'a' and t_hr == 12:
40 t_hr = 0
41
42 hours = '{:02d}:{}-{:02d}:{}'.format(
43 f_hr,
44 f_min,
45 t_hr,
46 t_min,
47 )
48
49 if not this_day_group:
50 this_day_group = {
51 'from_day': day,
52 'to_day': day,
53 'hours': hours
54 }
55 elif this_day_group['hours'] != hours:
56 day_groups.append(this_day_group)
57 this_day_group = {
58 'from_day': day,
59 'to_day': day,
60 'hours': hours
61 }
62 elif this_day_group['hours'] == hours:
63 this_day_group['to_day'] = day
64
65 day_groups.append(this_day_group)
66
67 opening_hours = ""
68 if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):
69 opening_hours = '24/7'
70 else:
71 for day_group in day_groups:
72 if day_group['from_day'] == day_group['to_day']:
73 opening_hours += '{from_day} {hours}; '.format(**day_group)
74 elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':
75 opening_hours += '{hours}; '.format(**day_group)
76 else:
77 opening_hours += '{from_day}-{to_day} {hours}; '.format(**day_group)
78 opening_hours = opening_hours[:-2]
79
80 return opening_hours
81
82 def parse(self, response):
83 stores = json.loads(response.body_as_unicode())
84 for store in stores:
85 properties = {
86 'ref': store['id'],
87 'addr_full': store['address'],
88 'city': store['city'],
89 'state': store['state'],
90 'postcode': store['postal'],
91 'lat': store['lat'],
92 'lon': store['lng'],
93 'phone': store['phone'],
94 }
95
96 if store['twentyfourhours']:
97 properties['opening_hours'] = '24/7'
98 elif 'hours' in store:
99 hours = store['hours']
100 if not all(hours[d] == '' for d in hours):
101 days_hours = []
102 for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:
103 days_hours.append([
104 self.dayMap[day],
105 hours[day].lower().replace(' ', '')
106 ])
107 properties['opening_hours'] = self.opening_hours(days_hours)
108
109 yield GeojsonPointItem(**properties)
110
111
112
[end of locations/spiders/jackinthebox.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/jackinthebox.py b/locations/spiders/jackinthebox.py
--- a/locations/spiders/jackinthebox.py
+++ b/locations/spiders/jackinthebox.py
@@ -11,13 +11,13 @@
"https://www.jackinthebox.com/api/locations",
)
dayMap = {
- 'monday': 'Mo',
- 'tuesday': 'Tu',
- 'wednesday': 'We',
- 'thursday': 'Th',
- 'friday': 'Fr',
- 'saturday': 'Sa',
- 'sunday': 'Su'
+ 'Monday': 'Mo',
+ 'Tuesday': 'Tu',
+ 'Wednesday': 'We',
+ 'Thursday': 'Th',
+ 'Friday': 'Fr',
+ 'Saturday': 'Sa',
+ 'Sunday': 'Su'
}
def opening_hours(self, days_hours):
day_groups = []
@@ -25,6 +25,9 @@
for day_hours in days_hours:
day = day_hours[0]
hours = day_hours[1]
+ if not hours:
+ continue
+
match = re.search(r'^(\d{1,2}):(\d{2})\w*(a|p)m-(\d{1,2}):(\d{2})\w*(a|p)m?$', hours)
(f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()
@@ -62,7 +65,8 @@
elif this_day_group['hours'] == hours:
this_day_group['to_day'] = day
- day_groups.append(this_day_group)
+ if this_day_group:
+ day_groups.append(this_day_group)
opening_hours = ""
if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):
@@ -80,31 +84,32 @@
return opening_hours
def parse(self, response):
- stores = json.loads(response.body_as_unicode())
- for store in stores:
+ stores = json.loads(response.body_as_unicode())['Locations']
+ for store in stores:
+ address = store['Address']
properties = {
- 'ref': store['id'],
- 'addr_full': store['address'],
- 'city': store['city'],
- 'state': store['state'],
- 'postcode': store['postal'],
- 'lat': store['lat'],
- 'lon': store['lng'],
- 'phone': store['phone'],
+ 'ref': store['LocationId'],
+ 'addr_full': ", ".join([address['StreetLine1'], address['StreetLine2']]),
+ 'city': address['City'],
+ 'state': address['State'],
+ 'postcode': address['Zipcode'],
+ 'lat': store['Coordinates']['Lat'],
+ 'lon': store['Coordinates']['Lon'],
+ 'phone': store['OperationsData']['BusinessPhoneNumber'],
}
- if store['twentyfourhours']:
+ hours = store['OperatingHours']
+ if all (hours['DineInAllDay'][day] == True for day in hours['DineInAllDay']):
properties['opening_hours'] = '24/7'
- elif 'hours' in store:
- hours = store['hours']
- if not all(hours[d] == '' for d in hours):
- days_hours = []
- for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:
- days_hours.append([
- self.dayMap[day],
- hours[day].lower().replace(' ', '')
- ])
- properties['opening_hours'] = self.opening_hours(days_hours)
+
+ else:
+ days_hours = []
+ for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:
+ days_hours.append([
+ self.dayMap[day],
+ hours['DineIn'][day].lower().replace(' ', '')
+ ])
+ properties['opening_hours'] = self.opening_hours(days_hours)
yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/jackinthebox.py b/locations/spiders/jackinthebox.py\n--- a/locations/spiders/jackinthebox.py\n+++ b/locations/spiders/jackinthebox.py\n@@ -11,13 +11,13 @@\n \"https://www.jackinthebox.com/api/locations\",\n )\n dayMap = {\n- 'monday': 'Mo',\n- 'tuesday': 'Tu',\n- 'wednesday': 'We',\n- 'thursday': 'Th',\n- 'friday': 'Fr',\n- 'saturday': 'Sa',\n- 'sunday': 'Su'\n+ 'Monday': 'Mo',\n+ 'Tuesday': 'Tu',\n+ 'Wednesday': 'We',\n+ 'Thursday': 'Th',\n+ 'Friday': 'Fr',\n+ 'Saturday': 'Sa',\n+ 'Sunday': 'Su'\n }\n def opening_hours(self, days_hours):\n day_groups = []\n@@ -25,6 +25,9 @@\n for day_hours in days_hours:\n day = day_hours[0]\n hours = day_hours[1]\n+ if not hours:\n+ continue\n+\n match = re.search(r'^(\\d{1,2}):(\\d{2})\\w*(a|p)m-(\\d{1,2}):(\\d{2})\\w*(a|p)m?$', hours)\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n \n@@ -62,7 +65,8 @@\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n \n- day_groups.append(this_day_group)\n+ if this_day_group:\n+ day_groups.append(this_day_group)\n \n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n@@ -80,31 +84,32 @@\n return opening_hours\n \n def parse(self, response):\n- stores = json.loads(response.body_as_unicode())\n- for store in stores: \n+ stores = json.loads(response.body_as_unicode())['Locations']\n+ for store in stores:\n+ address = store['Address']\n properties = { \n- 'ref': store['id'], \n- 'addr_full': store['address'],\n- 'city': store['city'], \n- 'state': store['state'], \n- 'postcode': store['postal'], \n- 'lat': store['lat'], \n- 'lon': store['lng'], \n- 'phone': store['phone'],\n+ 'ref': store['LocationId'],\n+ 'addr_full': \", \".join([address['StreetLine1'], address['StreetLine2']]),\n+ 'city': address['City'],\n+ 'state': address['State'],\n+ 'postcode': address['Zipcode'],\n+ 'lat': store['Coordinates']['Lat'],\n+ 'lon': store['Coordinates']['Lon'],\n+ 'phone': store['OperationsData']['BusinessPhoneNumber'],\n } \n \n- if store['twentyfourhours']:\n+ hours = store['OperatingHours']\n+ if all (hours['DineInAllDay'][day] == True for day in hours['DineInAllDay']):\n properties['opening_hours'] = '24/7'\n- elif 'hours' in store:\n- hours = store['hours']\n- if not all(hours[d] == '' for d in hours):\n- days_hours = []\n- for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:\n- days_hours.append([\n- self.dayMap[day],\n- hours[day].lower().replace(' ', '')\n- ])\n- properties['opening_hours'] = self.opening_hours(days_hours)\n+\n+ else:\n+ days_hours = []\n+ for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:\n+ days_hours.append([\n+ self.dayMap[day],\n+ hours['DineIn'][day].lower().replace(' ', '')\n+ ])\n+ properties['opening_hours'] = self.opening_hours(days_hours)\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider jackinthebox is broken\nDuring the global build at 2021-06-23-14-42-18, spider **jackinthebox** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/jackinthebox.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/jackinthebox.geojson))\n", "before_files": [{"content": "import json\nimport re\nimport scrapy\nfrom locations.items import 
GeojsonPointItem\n\nclass JackInTheBoxSpider(scrapy.Spider):\n name = \"jackinthebox\"\n item_attributes = { 'brand': \"Jack In The Box\" }\n allowed_domains = [\"jackinthebox.com\"]\n start_urls = (\n \"https://www.jackinthebox.com/api/locations\",\n )\n dayMap = {\n 'monday': 'Mo',\n 'tuesday': 'Tu',\n 'wednesday': 'We',\n 'thursday': 'Th',\n 'friday': 'Fr',\n 'saturday': 'Sa',\n 'sunday': 'Su'\n }\n def opening_hours(self, days_hours):\n day_groups = []\n this_day_group = None\n for day_hours in days_hours:\n day = day_hours[0]\n hours = day_hours[1]\n match = re.search(r'^(\\d{1,2}):(\\d{2})\\w*(a|p)m-(\\d{1,2}):(\\d{2})\\w*(a|p)m?$', hours)\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n\n f_hr = int(f_hr)\n if f_ampm == 'p':\n f_hr += 12\n elif f_ampm == 'a' and f_hr == 12:\n f_hr = 0\n t_hr = int(t_hr)\n if t_ampm == 'p':\n t_hr += 12\n elif t_ampm == 'a' and t_hr == 12:\n t_hr = 0\n\n hours = '{:02d}:{}-{:02d}:{}'.format(\n f_hr,\n f_min,\n t_hr,\n t_min,\n )\n\n if not this_day_group:\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] != hours:\n day_groups.append(this_day_group)\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n\n day_groups.append(this_day_group)\n\n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n opening_hours = '24/7'\n else:\n for day_group in day_groups:\n if day_group['from_day'] == day_group['to_day']:\n opening_hours += '{from_day} {hours}; '.format(**day_group)\n elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':\n opening_hours += '{hours}; '.format(**day_group)\n else:\n opening_hours += '{from_day}-{to_day} {hours}; '.format(**day_group)\n opening_hours = opening_hours[:-2]\n\n return opening_hours\n\n def parse(self, response):\n stores = json.loads(response.body_as_unicode())\n for store in stores: \n properties = { \n 'ref': store['id'], \n 'addr_full': store['address'],\n 'city': store['city'], \n 'state': store['state'], \n 'postcode': store['postal'], \n 'lat': store['lat'], \n 'lon': store['lng'], \n 'phone': store['phone'],\n } \n \n if store['twentyfourhours']:\n properties['opening_hours'] = '24/7'\n elif 'hours' in store:\n hours = store['hours']\n if not all(hours[d] == '' for d in hours):\n days_hours = []\n for day in ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']:\n days_hours.append([\n self.dayMap[day],\n hours[day].lower().replace(' ', '')\n ])\n properties['opening_hours'] = self.opening_hours(days_hours)\n \n yield GeojsonPointItem(**properties) \n\n\n", "path": "locations/spiders/jackinthebox.py"}]} | 1,909 | 986 |
gh_patches_debug_5360 | rasdani/github-patches | git_diff | ibis-project__ibis-2884 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: File pseudo-backends failing for missing pandas option
The following code fails on master since #2833:
```python
>>> import ibis
>>> con = ibis.csv.connect('/home/mgarcia/src/ibis/ci/ibis-testing-data/')
>>> expr = con.table('functional_alltypes')['double_col'] * 2
>>> print(expr.execute())
OptionError: "No such keys(s): 'pandas.enable_trace'"
```
The problem occurs when the `csv` backend (or another file backend) is loaded but the pandas backend is not. This is because `ibis.pandas` is what loads the pandas options, which the file pseudo-backends apparently need.
The CI is not failing, I guess, because the pandas and file backends are tested together, so pandas is already loaded when the file backends are tested.
</issue>
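As an illustration of the root cause described above, here is a minimal user-side workaround sketch (assumption: merely referencing `ibis.pandas` is enough to register the `pandas.*` options, which is also what the accompanying fix relies on):

```python
import ibis

# Touching the pandas backend registers the 'pandas.*' options that the file
# pseudo-backends read; without this, ibis.csv.connect(...) ends in OptionError.
ibis.pandas  # noqa: B018

con = ibis.csv.connect('/home/mgarcia/src/ibis/ci/ibis-testing-data/')
expr = con.table('functional_alltypes')['double_col'] * 2
print(expr.execute())
```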
<code>
[start of ibis/backends/base/file/__init__.py]
1 from pathlib import Path
2
3 import ibis.expr.types as ir
4 from ibis.backends.base import BaseBackend, Client, Database
5 from ibis.backends.pandas.core import execute_and_reset
6
7
8 class FileClient(Client):
9 def __init__(self, backend, root):
10 self.backend = backend
11 self.extension = backend.extension
12 self.table_class = backend.table_class
13 self.root = Path(str(root))
14 self.dictionary = {}
15
16 def insert(self, path, expr, **kwargs):
17 raise NotImplementedError
18
19 def table(self, name, path):
20 raise NotImplementedError
21
22 def database(self, name=None, path=None):
23 if name is None:
24 return FileDatabase('root', self, path=path)
25
26 if name not in self.list_databases(path):
27 raise AttributeError(name)
28 if path is None:
29 path = self.root
30
31 new_name = "{}.{}".format(name, self.extension)
32 if (self.root / name).is_dir():
33 path /= name
34 elif not str(path).endswith(new_name):
35 path /= new_name
36
37 return FileDatabase(name, self, path=path)
38
39 def execute(self, expr, params=None, **kwargs): # noqa
40 assert isinstance(expr, ir.Expr)
41 return execute_and_reset(expr, params=params, **kwargs)
42
43 def list_tables(self, path=None):
44 raise NotImplementedError
45
46 def _list_tables_files(self, path=None):
47 # tables are files in a dir
48 if path is None:
49 path = self.root
50
51 tables = []
52 if path.is_dir():
53 for d in path.iterdir():
54 if d.is_file():
55 if str(d).endswith(self.extension):
56 tables.append(d.stem)
57 elif path.is_file():
58 if str(path).endswith(self.extension):
59 tables.append(path.stem)
60 return tables
61
62 def list_databases(self, path=None):
63 raise NotImplementedError
64
65 def _list_databases_dirs(self, path=None):
66 # databases are dir
67 if path is None:
68 path = self.root
69
70 tables = []
71 if path.is_dir():
72 for d in path.iterdir():
73 if d.is_dir():
74 tables.append(d.name)
75 return tables
76
77 def _list_databases_dirs_or_files(self, path=None):
78 # databases are dir & file
79 if path is None:
80 path = self.root
81
82 tables = []
83 if path.is_dir():
84 for d in path.iterdir():
85 if d.is_dir():
86 tables.append(d.name)
87 elif d.is_file():
88 if str(d).endswith(self.extension):
89 tables.append(d.stem)
90 elif path.is_file():
91 # by definition we are at the db level at this point
92 pass
93
94 return tables
95
96
97 class FileDatabase(Database):
98 def __init__(self, name, client, path=None):
99 super().__init__(name, client)
100 self.path = path
101
102 def __str__(self):
103 return '{0.__class__.__name__}({0.name})'.format(self)
104
105 def __dir__(self):
106 dbs = self.list_databases(path=self.path)
107 tables = self.list_tables(path=self.path)
108 return sorted(set(dbs).union(set(tables)))
109
110 def __getattr__(self, name):
111 try:
112 return self.table(name, path=self.path)
113 except AttributeError:
114 return self.database(name, path=self.path)
115
116 def table(self, name, path):
117 return self.client.table(name, path=path)
118
119 def database(self, name=None, path=None):
120 return self.client.database(name=name, path=path)
121
122 def list_databases(self, path=None):
123 if path is None:
124 path = self.path
125 return sorted(self.client.list_databases(path=path))
126
127 def list_tables(self, path=None):
128 if path is None:
129 path = self.path
130 return sorted(self.client.list_tables(path=path))
131
132
133 class BaseFileBackend(BaseBackend):
134 """
135 Base backend class for pandas pseudo-backends for file formats.
136 """
137
138 def connect(self, path):
139 """Create a Client for use with Ibis
140
141 Parameters
142 ----------
143 path : str or pathlib.Path
144
145 Returns
146 -------
147 Client
148 """
149 return self.client_class(backend=self, root=path)
150
[end of ibis/backends/base/file/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ibis/backends/base/file/__init__.py b/ibis/backends/base/file/__init__.py
--- a/ibis/backends/base/file/__init__.py
+++ b/ibis/backends/base/file/__init__.py
@@ -1,9 +1,13 @@
from pathlib import Path
+import ibis
import ibis.expr.types as ir
from ibis.backends.base import BaseBackend, Client, Database
from ibis.backends.pandas.core import execute_and_reset
+# Load options of pandas backend
+ibis.pandas
+
class FileClient(Client):
def __init__(self, backend, root):
| {"golden_diff": "diff --git a/ibis/backends/base/file/__init__.py b/ibis/backends/base/file/__init__.py\n--- a/ibis/backends/base/file/__init__.py\n+++ b/ibis/backends/base/file/__init__.py\n@@ -1,9 +1,13 @@\n from pathlib import Path\n \n+import ibis\n import ibis.expr.types as ir\n from ibis.backends.base import BaseBackend, Client, Database\n from ibis.backends.pandas.core import execute_and_reset\n \n+# Load options of pandas backend\n+ibis.pandas\n+\n \n class FileClient(Client):\n def __init__(self, backend, root):\n", "issue": "BUG: File pseudo-backends failing for missing pandas option\nThe next code is failing in master since #2833:\r\n\r\n```python\r\n>>> import ibis\r\n>>> con = ibis.csv.connect('/home/mgarcia/src/ibis/ci/ibis-testing-data/')\r\n>>> expr = con.table('functional_alltypes')['double_col'] * 2\r\n>>> print(expr.execute())\r\nOptionError: \"No such keys(s): 'pandas.enable_trace'\"\r\n```\r\n\r\nThe problem is when the `csv` backend (or other file backends) are loaded, but the pandas backend is not. This is because `ibis.pandas` loads the pandas options, which looks like they are needed by the file pseudo-backends.\r\n\r\nThe CI is not failing, I guess because we test pandas and the file backends are tested together, and pandas is loaded when the file backends are tested.\n", "before_files": [{"content": "from pathlib import Path\n\nimport ibis.expr.types as ir\nfrom ibis.backends.base import BaseBackend, Client, Database\nfrom ibis.backends.pandas.core import execute_and_reset\n\n\nclass FileClient(Client):\n def __init__(self, backend, root):\n self.backend = backend\n self.extension = backend.extension\n self.table_class = backend.table_class\n self.root = Path(str(root))\n self.dictionary = {}\n\n def insert(self, path, expr, **kwargs):\n raise NotImplementedError\n\n def table(self, name, path):\n raise NotImplementedError\n\n def database(self, name=None, path=None):\n if name is None:\n return FileDatabase('root', self, path=path)\n\n if name not in self.list_databases(path):\n raise AttributeError(name)\n if path is None:\n path = self.root\n\n new_name = \"{}.{}\".format(name, self.extension)\n if (self.root / name).is_dir():\n path /= name\n elif not str(path).endswith(new_name):\n path /= new_name\n\n return FileDatabase(name, self, path=path)\n\n def execute(self, expr, params=None, **kwargs): # noqa\n assert isinstance(expr, ir.Expr)\n return execute_and_reset(expr, params=params, **kwargs)\n\n def list_tables(self, path=None):\n raise NotImplementedError\n\n def _list_tables_files(self, path=None):\n # tables are files in a dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n if str(path).endswith(self.extension):\n tables.append(path.stem)\n return tables\n\n def list_databases(self, path=None):\n raise NotImplementedError\n\n def _list_databases_dirs(self, path=None):\n # databases are dir\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n tables.append(d.name)\n return tables\n\n def _list_databases_dirs_or_files(self, path=None):\n # databases are dir & file\n if path is None:\n path = self.root\n\n tables = []\n if path.is_dir():\n for d in path.iterdir():\n if d.is_dir():\n tables.append(d.name)\n elif d.is_file():\n if str(d).endswith(self.extension):\n tables.append(d.stem)\n elif path.is_file():\n # by definition we are 
at the db level at this point\n pass\n\n return tables\n\n\nclass FileDatabase(Database):\n def __init__(self, name, client, path=None):\n super().__init__(name, client)\n self.path = path\n\n def __str__(self):\n return '{0.__class__.__name__}({0.name})'.format(self)\n\n def __dir__(self):\n dbs = self.list_databases(path=self.path)\n tables = self.list_tables(path=self.path)\n return sorted(set(dbs).union(set(tables)))\n\n def __getattr__(self, name):\n try:\n return self.table(name, path=self.path)\n except AttributeError:\n return self.database(name, path=self.path)\n\n def table(self, name, path):\n return self.client.table(name, path=path)\n\n def database(self, name=None, path=None):\n return self.client.database(name=name, path=path)\n\n def list_databases(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_databases(path=path))\n\n def list_tables(self, path=None):\n if path is None:\n path = self.path\n return sorted(self.client.list_tables(path=path))\n\n\nclass BaseFileBackend(BaseBackend):\n \"\"\"\n Base backend class for pandas pseudo-backends for file formats.\n \"\"\"\n\n def connect(self, path):\n \"\"\"Create a Client for use with Ibis\n\n Parameters\n ----------\n path : str or pathlib.Path\n\n Returns\n -------\n Client\n \"\"\"\n return self.client_class(backend=self, root=path)\n", "path": "ibis/backends/base/file/__init__.py"}]} | 1,996 | 141 |
gh_patches_debug_35728 | rasdani/github-patches | git_diff | mindsdb__lightwood-518 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lightwood.api.ensemble is not necessary
This script is deprecated, as the ensemble module has moved to `lw.ensemble` with a base abstraction. A quick inspection of the code (e.g. grepping for this call) turns up no references. Please double-check whether this file is still required; I think it should be removed.
The culprit link is [here](https://github.com/mindsdb/lightwood/blob/0372d292796a6d1f91ac9df9b8658ad2f128b7c9/lightwood/api/ensemble.py)
</issue>
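One quick way to double-check that nothing still references the module before deleting it — a hypothetical helper around `git grep`; running the command directly in the repository works just as well:

```python
import subprocess

# Look for imports or uses of the deprecated ensemble module anywhere in the repo.
out = subprocess.run(
    ["git", "grep", "-n", "-e", "api.ensemble", "-e", "LightwoodEnsemble"],
    capture_output=True,
    text=True,
)
print(out.stdout or "no references found")
```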
<code>
[start of lightwood/api/ensemble.py]
1 from lightwood import Predictor
2 from lightwood.constants.lightwood import ColumnDataTypes
3 from collections import Counter
4 import numpy as np
5 import pickle
6 import os
7
8
9 class LightwoodEnsemble:
10 def __init__(self, predictors=None, load_from_path=None):
11 self.path_list = None
12 if load_from_path is not None:
13 with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:
14 obj = pickle.load(pickle_in)
15 self.path = load_from_path
16 self.path_list = obj.path_list
17 self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]
18 elif isinstance(predictors, Predictor):
19 self.ensemble = [predictors]
20 elif isinstance(predictors, list):
21 self.ensemble = predictors
22
23 def append(self, predictor):
24 if isinstance(self.ensemble, list):
25 self.ensemble.append(predictor)
26 else:
27 self.ensemble = [predictor]
28
29 def __iter__(self):
30 yield self.ensemble
31
32 def predict(self, when_data):
33 predictions = [p.predict(when_data=when_data) for p in self.ensemble]
34 formatted_predictions = {}
35 for target in self.ensemble[0].config['output_features']:
36 target_name = target['name']
37 formatted_predictions[target_name] = {}
38 pred_arr = np.array([p[target_name]['predictions'] for p in predictions])
39 if target['type'] == ColumnDataTypes.NUMERIC:
40 final_preds = np.mean(pred_arr, axis=0).tolist()
41 elif target['type'] == ColumnDataTypes.CATEGORICAL:
42 final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]
43
44 # @TODO: implement class distribution for ensembles
45 # NOTE: label set *could* grow when adding predictors, which complicates belief score computation
46 formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))
47 else:
48 raise Exception('Only numeric and categorical datatypes are supported for ensembles')
49
50 formatted_predictions[target_name]['predictions'] = final_preds
51
52 return formatted_predictions
53
54 def save(self, path_to):
55 # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa
56 path_list = []
57 for i, model in enumerate(self.ensemble):
58 path = os.path.join(path_to, f'lightwood_predictor_{i}')
59 path_list.append(path)
60 model.save(path_to=path)
61
62 self.path_list = path_list
63
64 # TODO: in the future, save preds inside this data struct
65 self.ensemble = None # we deref predictors for now
66 with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:
67 pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)
68
[end of lightwood/api/ensemble.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightwood/api/ensemble.py b/lightwood/api/ensemble.py
deleted file mode 100644
--- a/lightwood/api/ensemble.py
+++ /dev/null
@@ -1,67 +0,0 @@
-from lightwood import Predictor
-from lightwood.constants.lightwood import ColumnDataTypes
-from collections import Counter
-import numpy as np
-import pickle
-import os
-
-
-class LightwoodEnsemble:
- def __init__(self, predictors=None, load_from_path=None):
- self.path_list = None
- if load_from_path is not None:
- with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:
- obj = pickle.load(pickle_in)
- self.path = load_from_path
- self.path_list = obj.path_list
- self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]
- elif isinstance(predictors, Predictor):
- self.ensemble = [predictors]
- elif isinstance(predictors, list):
- self.ensemble = predictors
-
- def append(self, predictor):
- if isinstance(self.ensemble, list):
- self.ensemble.append(predictor)
- else:
- self.ensemble = [predictor]
-
- def __iter__(self):
- yield self.ensemble
-
- def predict(self, when_data):
- predictions = [p.predict(when_data=when_data) for p in self.ensemble]
- formatted_predictions = {}
- for target in self.ensemble[0].config['output_features']:
- target_name = target['name']
- formatted_predictions[target_name] = {}
- pred_arr = np.array([p[target_name]['predictions'] for p in predictions])
- if target['type'] == ColumnDataTypes.NUMERIC:
- final_preds = np.mean(pred_arr, axis=0).tolist()
- elif target['type'] == ColumnDataTypes.CATEGORICAL:
- final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]
-
- # @TODO: implement class distribution for ensembles
- # NOTE: label set *could* grow when adding predictors, which complicates belief score computation
- formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))
- else:
- raise Exception('Only numeric and categorical datatypes are supported for ensembles')
-
- formatted_predictions[target_name]['predictions'] = final_preds
-
- return formatted_predictions
-
- def save(self, path_to):
- # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa
- path_list = []
- for i, model in enumerate(self.ensemble):
- path = os.path.join(path_to, f'lightwood_predictor_{i}')
- path_list.append(path)
- model.save(path_to=path)
-
- self.path_list = path_list
-
- # TODO: in the future, save preds inside this data struct
- self.ensemble = None # we deref predictors for now
- with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:
- pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)
| {"golden_diff": "diff --git a/lightwood/api/ensemble.py b/lightwood/api/ensemble.py\ndeleted file mode 100644\n--- a/lightwood/api/ensemble.py\n+++ /dev/null\n@@ -1,67 +0,0 @@\n-from lightwood import Predictor\n-from lightwood.constants.lightwood import ColumnDataTypes\n-from collections import Counter\n-import numpy as np\n-import pickle\n-import os\n-\n-\n-class LightwoodEnsemble:\n- def __init__(self, predictors=None, load_from_path=None):\n- self.path_list = None\n- if load_from_path is not None:\n- with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:\n- obj = pickle.load(pickle_in)\n- self.path = load_from_path\n- self.path_list = obj.path_list\n- self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]\n- elif isinstance(predictors, Predictor):\n- self.ensemble = [predictors]\n- elif isinstance(predictors, list):\n- self.ensemble = predictors\n-\n- def append(self, predictor):\n- if isinstance(self.ensemble, list):\n- self.ensemble.append(predictor)\n- else:\n- self.ensemble = [predictor]\n-\n- def __iter__(self):\n- yield self.ensemble\n-\n- def predict(self, when_data):\n- predictions = [p.predict(when_data=when_data) for p in self.ensemble]\n- formatted_predictions = {}\n- for target in self.ensemble[0].config['output_features']:\n- target_name = target['name']\n- formatted_predictions[target_name] = {}\n- pred_arr = np.array([p[target_name]['predictions'] for p in predictions])\n- if target['type'] == ColumnDataTypes.NUMERIC:\n- final_preds = np.mean(pred_arr, axis=0).tolist()\n- elif target['type'] == ColumnDataTypes.CATEGORICAL:\n- final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]\n-\n- # @TODO: implement class distribution for ensembles\n- # NOTE: label set *could* grow when adding predictors, which complicates belief score computation\n- formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))\n- else:\n- raise Exception('Only numeric and categorical datatypes are supported for ensembles')\n-\n- formatted_predictions[target_name]['predictions'] = final_preds\n-\n- return formatted_predictions\n-\n- def save(self, path_to):\n- # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa\n- path_list = []\n- for i, model in enumerate(self.ensemble):\n- path = os.path.join(path_to, f'lightwood_predictor_{i}')\n- path_list.append(path)\n- model.save(path_to=path)\n-\n- self.path_list = path_list\n-\n- # TODO: in the future, save preds inside this data struct\n- self.ensemble = None # we deref predictors for now\n- with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:\n- pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)\n", "issue": "Lightwood.api.ensemble is not necessary\nThis script is deprecated, as the ensemble module has moved to `lw.ensemble` with a base abstraction. A quick inspection of the code (ex: grep for this call) and I don't see any references. 
Please double check if this file is required, as I think it should be removed.\r\n\r\nThe culprit link is [here](https://github.com/mindsdb/lightwood/blob/0372d292796a6d1f91ac9df9b8658ad2f128b7c9/lightwood/api/ensemble.py)\n", "before_files": [{"content": "from lightwood import Predictor\nfrom lightwood.constants.lightwood import ColumnDataTypes\nfrom collections import Counter\nimport numpy as np\nimport pickle\nimport os\n\n\nclass LightwoodEnsemble:\n def __init__(self, predictors=None, load_from_path=None):\n self.path_list = None\n if load_from_path is not None:\n with open(os.path.join(load_from_path, 'lightwood_data'), 'rb') as pickle_in:\n obj = pickle.load(pickle_in)\n self.path = load_from_path\n self.path_list = obj.path_list\n self.ensemble = [Predictor(load_from_path=path) for path in self.path_list]\n elif isinstance(predictors, Predictor):\n self.ensemble = [predictors]\n elif isinstance(predictors, list):\n self.ensemble = predictors\n\n def append(self, predictor):\n if isinstance(self.ensemble, list):\n self.ensemble.append(predictor)\n else:\n self.ensemble = [predictor]\n\n def __iter__(self):\n yield self.ensemble\n\n def predict(self, when_data):\n predictions = [p.predict(when_data=when_data) for p in self.ensemble]\n formatted_predictions = {}\n for target in self.ensemble[0].config['output_features']:\n target_name = target['name']\n formatted_predictions[target_name] = {}\n pred_arr = np.array([p[target_name]['predictions'] for p in predictions])\n if target['type'] == ColumnDataTypes.NUMERIC:\n final_preds = np.mean(pred_arr, axis=0).tolist()\n elif target['type'] == ColumnDataTypes.CATEGORICAL:\n final_preds = [max(Counter(pred_arr[:, idx])) for idx in range(pred_arr.shape[1])]\n\n # @TODO: implement class distribution for ensembles\n # NOTE: label set *could* grow when adding predictors, which complicates belief score computation\n formatted_predictions[target_name]['class_distribution'] = np.ones(shape=(len(final_preds), 1))\n else:\n raise Exception('Only numeric and categorical datatypes are supported for ensembles')\n\n formatted_predictions[target_name]['predictions'] = final_preds\n\n return formatted_predictions\n\n def save(self, path_to):\n # TODO: potentially save predictors inside ensemble pickle, though there's the issue of nonpersistent stuff with torch.save() # noqa\n path_list = []\n for i, model in enumerate(self.ensemble):\n path = os.path.join(path_to, f'lightwood_predictor_{i}')\n path_list.append(path)\n model.save(path_to=path)\n\n self.path_list = path_list\n\n # TODO: in the future, save preds inside this data struct\n self.ensemble = None # we deref predictors for now\n with open(os.path.join(path_to, 'lightwood_data'), 'wb') as file:\n pickle.dump(self, file, pickle.HIGHEST_PROTOCOL)\n", "path": "lightwood/api/ensemble.py"}]} | 1,417 | 724 |
gh_patches_debug_1352 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1826 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 1.1.7
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 # from solaar import NAME, __version__
11 __version__ = '1.1.7'
12 NAME = 'Solaar'
13
14
15 def _data_files():
16 from os.path import dirname as _dirname
17
18 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
19 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
20 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
21
22 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
23 yield _dirname(mo), [mo]
24
25 yield 'share/applications', ['share/applications/solaar.desktop']
26 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
27 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
28
29 del _dirname
30
31
32 setup(
33 name=NAME.lower(),
34 version=__version__,
35 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',
36 long_description='''
37 Solaar is a Linux device manager for many Logitech peripherals that connect through
38 Unifying and other receivers or via USB or Bluetooth.
39 Solaar is able to pair/unpair devices with receivers and show and modify some of the
40 modifiable features of devices.
41 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),
42 author='Daniel Pavel',
43 license='GPLv2',
44 url='http://pwr-solaar.github.io/Solaar/',
45 classifiers=[
46 'Development Status :: 4 - Beta',
47 'Environment :: X11 Applications :: GTK',
48 'Environment :: Console',
49 'Intended Audience :: End Users/Desktop',
50 'License :: DFSG approved',
51 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
52 'Natural Language :: English',
53 'Programming Language :: Python :: 3 :: Only',
54 'Operating System :: POSIX :: Linux',
55 'Topic :: Utilities',
56 ],
57 platforms=['linux'],
58
59 # sudo apt install python-gi python3-gi \
60 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
61 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
62 python_requires='>=3.7',
63 install_requires=[
64 'evdev (>= 1.1.2)',
65 'pyudev (>= 0.13)',
66 'PyYAML (>= 3.12)',
67 'python-xlib (>= 0.27)',
68 'psutil (>= 5.4.3)',
69 'typing_extensions (>=4.0.0)',
70 ],
71 extras_require={
72 'report-descriptor': ['hid-parser'],
73 'desktop-notifications': ['Notify (>= 0.7)'],
74 },
75 package_dir={'': 'lib'},
76 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
77 data_files=list(_data_files()),
78 scripts=_glob('bin/*'),
79 )
80
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,6 @@
'PyYAML (>= 3.12)',
'python-xlib (>= 0.27)',
'psutil (>= 5.4.3)',
- 'typing_extensions (>=4.0.0)',
],
extras_require={
'report-descriptor': ['hid-parser'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,6 @@\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n- 'typing_extensions (>=4.0.0)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n", "issue": "Release 1.1.7\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# from solaar import NAME, __version__\n__version__ = '1.1.7'\nNAME = 'Solaar'\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n 'typing_extensions (>=4.0.0)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}]} | 1,460 | 100 |
gh_patches_debug_28360 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-711 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running solaar does not open solaar
```
➜ solaar --version
solaar 1.0.1
```
**What happens**
When running `solaar`, it does not open the application itself. It _does_ add a tray icon with a battery indicator or whatever.
Running `solaar` a second time _does_ open the application window.
**Expected result**
Running an application should open the application (that's pretty much as obvious as it gets). I shouldn't need to run it twice.
If anyone wants just the tray icon, something like `solaar --tray-icon` would probably work.
**Other notes**
I no longer have a tray bar set up on my desktop. So right now, running solaar once is a no-op. Running it twice actually opens the application.
</issue>
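A minimal sketch of the requested behaviour, reusing the `--window` option that already exists in `lib/solaar/gtk.py` below (assumption: defaulting it to `show` is enough; the accompanying patch takes the same approach):

```python
import argparse

# Sketch only: mirror solaar's -w/--window switch and make the window the default.
parser = argparse.ArgumentParser(prog='solaar')
parser.add_argument('-w', '--window', choices=('show', 'hide', 'only'))
args = parser.parse_args([])      # no flag given on the command line
if args.window is None:
    args.window = 'show'          # default to the main window; 'only' keeps the tray-only use case
print(args.window)                # -> 'show'
```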
<code>
[start of lib/solaar/gtk.py]
1 #!/usr/bin/env python3
2 # -*- python-mode -*-
3 # -*- coding: UTF-8 -*-
4
5 ## Copyright (C) 2012-2013 Daniel Pavel
6 ##
7 ## This program is free software; you can redistribute it and/or modify
8 ## it under the terms of the GNU General Public License as published by
9 ## the Free Software Foundation; either version 2 of the License, or
10 ## (at your option) any later version.
11 ##
12 ## This program is distributed in the hope that it will be useful,
13 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
14 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 ## GNU General Public License for more details.
16 ##
17 ## You should have received a copy of the GNU General Public License along
18 ## with this program; if not, write to the Free Software Foundation, Inc.,
19 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
20
21 from __future__ import absolute_import, division, print_function, unicode_literals
22
23 import importlib
24
25
26 from solaar import __version__, NAME
27 import solaar.i18n as _i18n
28 import solaar.cli as _cli
29
30 #
31 #
32 #
33
34 def _require(module, os_package):
35 try:
36 return importlib.import_module(module)
37 except ImportError:
38 import sys
39 sys.exit("%s: missing required package '%s'" % (NAME, os_package))
40
41
42 def _parse_arguments():
43 import argparse
44 arg_parser = argparse.ArgumentParser(prog=NAME.lower())
45 arg_parser.add_argument('-d', '--debug', action='count', default=0,
46 help='print logging messages, for debugging purposes (may be repeated for extra verbosity)')
47 arg_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',
48 help='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')
49 arg_parser.add_argument('--restart-on-wake-up', action='store_true',
50 help='restart Solaar on sleep wake-up (experimental)')
51 arg_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')
52 arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)
53 arg_parser.add_argument('--help-actions', action='store_true',
54 help='print help for the optional actions')
55 arg_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,
56 help='optional actions to perform')
57
58 args = arg_parser.parse_args()
59
60 if args.help_actions:
61 _cli.print_help()
62 return
63
64 if args.window is None:
65 args.window = 'hide'
66
67 import logging
68 if args.debug > 0:
69 log_level = logging.WARNING - 10 * args.debug
70 log_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'
71 logging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')
72 else:
73 logging.root.addHandler(logging.NullHandler())
74 logging.root.setLevel(logging.ERROR)
75
76 if not args.action:
77 if logging.root.isEnabledFor(logging.INFO):
78 logging.info("language %s (%s), translations path %s", _i18n.language, _i18n.encoding, _i18n.path)
79
80 return args
81
82
83 def main():
84 _require('pyudev', 'python-pyudev')
85
86 # handle ^C in console
87 import signal
88 signal.signal(signal.SIGINT, signal.SIG_DFL)
89
90 args = _parse_arguments()
91 if not args: return
92 if args.action:
93 # if any argument, run comandline and exit
94 return _cli.run(args.action, args.hidraw_path)
95
96 gi = _require('gi', 'python-gi')
97 gi.require_version('Gtk', '3.0')
98 _require('gi.repository.Gtk', 'gir1.2-gtk-3.0')
99
100 try:
101 import solaar.ui as ui
102 import solaar.listener as listener
103 listener.setup_scanner(ui.status_changed, ui.error_dialog)
104
105 import solaar.upower as _upower
106 if args.restart_on_wake_up:
107 _upower.watch(listener.start_all, listener.stop_all)
108 else:
109 _upower.watch(lambda: listener.ping_all(True))
110
111 # main UI event loop
112 ui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')
113 except Exception as e:
114 import sys
115 sys.exit('%s: error: %s' % (NAME.lower(), e))
116
117
118 if __name__ == '__main__':
119 main()
120
[end of lib/solaar/gtk.py]
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4 try:
5 from setuptools import setup
6 except ImportError:
7 from distutils.core import setup
8
9 autostart_path = '/etc/xdg/autostart'
10
11 import sys
12 backup_path_0 = sys.path[0]
13 sys.path[0] = backup_path_0 + '/lib'
14 #from solaar import NAME, __version__
15 __version__ = '1.0.1'
16 NAME = 'Solaar'
17
18 sys.path[0] = backup_path_0
19
20 if 'install' in sys.argv:
21 # naively guess where the autostart .desktop file should be installed
22 if '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:
23 autostart_path = 'etc/xdg/autostart'
24 elif '--user' in sys.argv:
25 from os import environ
26 from os import path
27 xdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))
28 autostart_path = path.join(xdg_config_home, 'autostart')
29 del environ, path, xdg_config_home
30
31 del sys, backup_path_0
32
33
34 def _data_files():
35 from os.path import dirname as _dirname
36
37 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
38 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
39 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
40
41 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
42 yield _dirname(mo), [mo]
43
44 yield 'share/applications', ['share/applications/solaar.desktop']
45 yield autostart_path, ['share/applications/solaar.desktop']
46
47 del _dirname
48
49
50 setup(name=NAME.lower(),
51 version=__version__,
52 description='Linux devices manager for the Logitech Unifying Receiver.',
53 long_description='''
54 Solaar is a Linux device manager for Logitech's Unifying Receiver peripherals.
55 It is able to pair/unpair devices to the receiver, and for some devices read
56 battery status.
57 '''.strip(),
58 author='Daniel Pavel',
59 author_email='[email protected]',
60 license='GPLv2',
61 url='http://pwr-solaar.github.io/Solaar/',
62 classifiers=[
63 'Development Status :: 4 - Beta',
64 'Environment :: X11 Applications :: GTK',
65 'Environment :: Console',
66 'Intended Audience :: End Users/Desktop',
67 'License :: DFSG approved',
68 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
69 'Natural Language :: English',
70 'Programming Language :: Python :: 3 :: Only',
71 'Operating System :: POSIX :: Linux',
72 'Topic :: Utilities',
73 ],
74
75 platforms=['linux'],
76
77 # sudo apt install python-gi python3-gi \
78 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-appindicator3-0.1
79 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
80
81 python_requires='>=3.2',
82 install_requires=['pyudev (>= 0.13)', ],
83 package_dir={'': 'lib'},
84 packages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
85 data_files=list(_data_files()),
86 scripts=_glob('bin/*'),
87 )
88
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py
--- a/lib/solaar/gtk.py
+++ b/lib/solaar/gtk.py
@@ -48,7 +48,7 @@
help='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')
arg_parser.add_argument('--restart-on-wake-up', action='store_true',
help='restart Solaar on sleep wake-up (experimental)')
- arg_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')
+ arg_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')
arg_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)
arg_parser.add_argument('--help-actions', action='store_true',
help='print help for the optional actions')
@@ -62,7 +62,7 @@
return
if args.window is None:
- args.window = 'hide'
+ args.window = 'show' # default behaviour is to show main window
import logging
if args.debug > 0:
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,7 +42,7 @@
yield _dirname(mo), [mo]
yield 'share/applications', ['share/applications/solaar.desktop']
- yield autostart_path, ['share/applications/solaar.desktop']
+ yield autostart_path, ['share/autostart/solaar.desktop']
del _dirname
| {"golden_diff": "diff --git a/lib/solaar/gtk.py b/lib/solaar/gtk.py\n--- a/lib/solaar/gtk.py\n+++ b/lib/solaar/gtk.py\n@@ -48,7 +48,7 @@\n \t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n \targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n \t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n-\targ_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')\n+\targ_parser.add_argument('-w', '--window', choices=('show','hide','only'), help='start with window showing / hidden / only (no tray icon)')\n \targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n \targ_parser.add_argument('--help-actions', action='store_true',\n \t\t\t\t\t\t\thelp='print help for the optional actions')\n@@ -62,7 +62,7 @@\n \t\treturn\n \n \tif args.window is None:\n-\t\targs.window = 'hide'\n+\t\targs.window = 'show' # default behaviour is to show main window\n \n \timport logging\n \tif args.debug > 0:\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,7 +42,7 @@\n \t\tyield _dirname(mo), [mo]\n \n \tyield 'share/applications', ['share/applications/solaar.desktop']\n-\tyield autostart_path, ['share/applications/solaar.desktop']\n+\tyield autostart_path, ['share/autostart/solaar.desktop']\n \n \tdel _dirname\n", "issue": "Running solaar does not open solaar\n```\r\n\u279c solaar --version\r\nsolaar 1.0.1\r\n```\r\n**What happens**\r\nWhen running `solaar`, it does not open the application itself. It _does_ add a try icon with a battery indicator or whatever.\r\nRunning `solaar` a second time _does_ open the application windo.\r\n\r\n**Expected result**\r\nRunning an application should open the application (that's pretty much as obvious as it gets). I shouldn't need to run it twice.\r\nIf anyone wants just the try icon, something like `solaar --tray-icon` would probably work.\r\n\r\n**Other notes**\r\nI no longer have a tray bar set up on my desktop. So right now, running solaar once is a no-op. Running it twice actually opens the application.\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport importlib\n\n\nfrom solaar import __version__, NAME\nimport solaar.i18n as _i18n\nimport solaar.cli as _cli\n\n#\n#\n#\n\ndef _require(module, os_package):\n\ttry:\n\t\treturn importlib.import_module(module)\n\texcept ImportError:\n\t\timport sys\n\t\tsys.exit(\"%s: missing required package '%s'\" % (NAME, os_package))\n\n\ndef _parse_arguments():\n\timport argparse\n\targ_parser = argparse.ArgumentParser(prog=NAME.lower())\n\targ_parser.add_argument('-d', '--debug', action='count', default=0,\n\t\t\t\t\t\t\thelp='print logging messages, for debugging purposes (may be repeated for extra verbosity)')\n\targ_parser.add_argument('-D', '--hidraw', action='store', dest='hidraw_path', metavar='PATH',\n\t\t\t\t\t\t\thelp='unifying receiver to use; the first detected receiver if unspecified. Example: /dev/hidraw2')\n\targ_parser.add_argument('--restart-on-wake-up', action='store_true',\n\t\t\t\t\t\t\thelp='restart Solaar on sleep wake-up (experimental)')\n\targ_parser.add_argument('-w', '--window', choices=('hide','show','only'), help='start with window hidden / showing / only (no tray icon)')\n\targ_parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n\targ_parser.add_argument('--help-actions', action='store_true',\n\t\t\t\t\t\t\thelp='print help for the optional actions')\n\targ_parser.add_argument('action', nargs=argparse.REMAINDER, choices=_cli.actions,\n\t\t\t\t\t\t\thelp='optional actions to perform')\n\n\targs = arg_parser.parse_args()\n\n\tif args.help_actions:\n\t\t_cli.print_help()\n\t\treturn\n\n\tif args.window is None:\n\t\targs.window = 'hide'\n\n\timport logging\n\tif args.debug > 0:\n\t\tlog_level = logging.WARNING - 10 * args.debug\n\t\tlog_format='%(asctime)s,%(msecs)03d %(levelname)8s [%(threadName)s] %(name)s: %(message)s'\n\t\tlogging.basicConfig(level=max(log_level, logging.DEBUG), format=log_format, datefmt='%H:%M:%S')\n\telse:\n\t\tlogging.root.addHandler(logging.NullHandler())\n\t\tlogging.root.setLevel(logging.ERROR)\n\n\tif not args.action:\n\t\tif logging.root.isEnabledFor(logging.INFO):\n\t\t\tlogging.info(\"language %s (%s), translations path %s\", _i18n.language, _i18n.encoding, _i18n.path)\n\n\treturn args\n\n\ndef main():\n\t_require('pyudev', 'python-pyudev')\n\n\t# handle ^C in console\n\timport signal\n\tsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\n\targs = _parse_arguments()\n\tif not args: return\n\tif args.action:\n\t\t# if any argument, run comandline and exit\n\t\treturn _cli.run(args.action, args.hidraw_path)\n\n\tgi = _require('gi', 'python-gi')\n\tgi.require_version('Gtk', '3.0')\n\t_require('gi.repository.Gtk', 'gir1.2-gtk-3.0')\n\n\ttry:\n\t\timport solaar.ui as ui\n\t\timport solaar.listener as listener\n\t\tlistener.setup_scanner(ui.status_changed, ui.error_dialog)\n\n\t\timport solaar.upower as _upower\n\t\tif args.restart_on_wake_up:\n\t\t\t_upower.watch(listener.start_all, listener.stop_all)\n\t\telse:\n\t\t\t_upower.watch(lambda: listener.ping_all(True))\n\n\t\t# main UI event loop\n\t\tui.run_loop(listener.start_all, listener.stop_all, args.window!='only', args.window!='hide')\n\texcept Exception as e:\n\t\timport sys\n\t\tsys.exit('%s: 
error: %s' % (NAME.lower(), e))\n\n\nif __name__ == '__main__':\n\tmain()\n", "path": "lib/solaar/gtk.py"}, {"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nautostart_path = '/etc/xdg/autostart'\n\nimport sys\nbackup_path_0 = sys.path[0]\nsys.path[0] = backup_path_0 + '/lib'\n#from solaar import NAME, __version__\n__version__ = '1.0.1'\nNAME = 'Solaar'\n\nsys.path[0] = backup_path_0\n\nif 'install' in sys.argv:\n\t# naively guess where the autostart .desktop file should be installed\n\tif '--prefix' in sys.argv or any(x.startswith('--prefix=') for x in sys.argv) or '--home' in sys.argv:\n\t\tautostart_path = 'etc/xdg/autostart'\n\telif '--user' in sys.argv:\n\t\tfrom os import environ\n\t\tfrom os import path\n\t\txdg_config_home = environ.get('XDG_CONFIG_HOME', path.expanduser(path.join('~', '.config')))\n\t\tautostart_path = path.join(xdg_config_home, 'autostart')\n\t\tdel environ, path, xdg_config_home\n\ndel sys, backup_path_0\n\n\ndef _data_files():\n\tfrom os.path import dirname as _dirname\n\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n\tyield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n\tyield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n\tfor mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n\t\tyield _dirname(mo), [mo]\n\n\tyield 'share/applications', ['share/applications/solaar.desktop']\n\tyield autostart_path, ['share/applications/solaar.desktop']\n\n\tdel _dirname\n\n\nsetup(name=NAME.lower(),\n\t\tversion=__version__,\n\t\tdescription='Linux devices manager for the Logitech Unifying Receiver.',\n\t\tlong_description='''\nSolaar is a Linux device manager for Logitech's Unifying Receiver peripherals.\nIt is able to pair/unpair devices to the receiver, and for some devices read\nbattery status.\n'''.strip(),\n\t\tauthor='Daniel Pavel',\n\t\tauthor_email='[email protected]',\n\t\tlicense='GPLv2',\n\t\turl='http://pwr-solaar.github.io/Solaar/',\n\t\tclassifiers=[\n\t\t\t'Development Status :: 4 - Beta',\n\t\t\t'Environment :: X11 Applications :: GTK',\n\t\t\t'Environment :: Console',\n\t\t\t'Intended Audience :: End Users/Desktop',\n\t\t\t'License :: DFSG approved',\n\t\t\t'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n\t\t\t'Natural Language :: English',\n\t\t\t'Programming Language :: Python :: 3 :: Only',\n\t\t\t'Operating System :: POSIX :: Linux',\n\t\t\t'Topic :: Utilities',\n\t\t\t],\n\n\t\tplatforms=['linux'],\n\n\t\t# sudo apt install python-gi python3-gi \\\n\t\t# gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-appindicator3-0.1\n\t\t# os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n\n\t\tpython_requires='>=3.2',\n\t\tinstall_requires=['pyudev (>= 0.13)', ],\n\t\tpackage_dir={'': 'lib'},\n\t\tpackages=['hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n\t\tdata_files=list(_data_files()),\n\t\tscripts=_glob('bin/*'),\n\t)\n", "path": "setup.py"}]} | 3,042 | 383 |
gh_patches_debug_6252 | rasdani/github-patches | git_diff | google__turbinia-809 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GrepTask issue
```
2021-04-28 17:13:25 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']
2021-04-28 17:13:25 [ERROR] Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py", line 893, in run_wrapper
self.result = self.run(evidence, self.result)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py", line 49, in run
fh.write('\n'.join(patterns))
File "/usr/lib/python3.6/tempfile.py", line 624, in func_wrapper
return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
2021-04-28 17:13:26 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']
2021-04-28 17:13:26 [INFO] Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py", line 893, in run_wrapper
self.result = self.run(evidence, self.result)
File "/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py", line 49, in run
fh.write('\n'.join(patterns))
File "/usr/lib/python3.6/tempfile.py", line 624, in func_wrapper
return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
```
</issue>
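
The root cause visible in the traceback is that `tempfile.NamedTemporaryFile` opens its file in binary mode (`w+b`) by default, so writing a `str` to it raises exactly this `TypeError`. Below is a minimal, self-contained illustration of the failure and of the two usual remedies; the pattern values are made up and this is not Turbinia's code:

```
from tempfile import NamedTemporaryFile

patterns = ['secret', 'password']                      # stand-in filter patterns

with NamedTemporaryFile() as fh:                       # binary mode ('w+b') by default
    try:
        fh.write('\n'.join(patterns))                  # writing str to a binary file
    except TypeError as exc:
        print(exc)                                     # a bytes-like object is required, not 'str'
    fh.write('\n'.join(patterns).encode('utf-8'))      # remedy 1: encode the joined string

with NamedTemporaryFile('w') as fh:                    # remedy 2: request text mode
    fh.write('\n'.join(patterns))
```

The change recorded below also reaches for `.encode('utf-8')` rather than switching the file to text mode.
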
<code>
[start of turbinia/workers/grep.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task to filter a text file using extended regular expression patterns."""
16
17 from __future__ import unicode_literals
18
19 import os
20 from tempfile import NamedTemporaryFile
21
22 from turbinia.evidence import FilteredTextFile
23 from turbinia.workers import TurbiniaTask
24
25
26 class GrepTask(TurbiniaTask):
27 """Filter input based on extended regular expression patterns."""
28
29 def run(self, evidence, result):
30 """Run grep binary.
31
32 Args:
33 evidence (Evidence object): The evidence we will process
34 result (TurbiniaTaskResult): The object to place task results into.
35
36 Returns:
37 TurbiniaTaskResult object.
38 """
39
40 patterns = evidence.config.get('filter_patterns')
41 if not patterns:
42 result.close(self, success=True, status='No patterns supplied, exit task')
43 return result
44
45 # Create temporary file to write patterns to.
46 # Used as input to grep (-f).
47 with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:
48 patterns_file_path = fh.name
49 fh.write('\n'.join(patterns))
50
51 # Create a path that we can write the new file to.
52 base_name = os.path.basename(evidence.local_path)
53 output_file_path = os.path.join(
54 self.output_dir, '{0:s}.filtered'.format(base_name))
55
56 output_evidence = FilteredTextFile(source_path=output_file_path)
57 cmd = 'grep -E -b -n -f {0:s} {1:s} > {2:s}'.format(
58 patterns_file_path, evidence.local_path, output_file_path)
59
60 result.log('Running [{0:s}]'.format(cmd))
61 ret, result = self.execute(
62 cmd, result, new_evidence=[output_evidence], shell=True,
63 success_codes=[0, 1])
64
65 # Grep returns 0 on success and 1 if no results are found.
66 if ret == 0:
67 status = 'Grep Task found results in {0:s}'.format(evidence.name)
68 result.close(self, success=True, status=status)
69 elif ret == 1:
70 status = 'Grep Task did not find any results in {0:s}'.format(
71 evidence.name)
72 result.close(self, success=True, status=status)
73 else:
74 result.close(self, success=False)
75
76 return result
77
[end of turbinia/workers/grep.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/workers/grep.py b/turbinia/workers/grep.py
--- a/turbinia/workers/grep.py
+++ b/turbinia/workers/grep.py
@@ -46,7 +46,7 @@
# Used as input to grep (-f).
with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:
patterns_file_path = fh.name
- fh.write('\n'.join(patterns))
+ fh.write('\n'.join(patterns.encode('utf-8')))
# Create a path that we can write the new file to.
base_name = os.path.basename(evidence.local_path)
| {"golden_diff": "diff --git a/turbinia/workers/grep.py b/turbinia/workers/grep.py\n--- a/turbinia/workers/grep.py\n+++ b/turbinia/workers/grep.py\n@@ -46,7 +46,7 @@\n # Used as input to grep (-f).\n with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:\n patterns_file_path = fh.name\n- fh.write('\\n'.join(patterns))\n+ fh.write('\\n'.join(patterns.encode('utf-8')))\n \n # Create a path that we can write the new file to.\n base_name = os.path.basename(evidence.local_path)\n", "issue": "GrepTask issue\n```\r\n2021-04-28 17:13:25 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']\r\n2021-04-28 17:13:25 [ERROR] Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py\", line 893, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py\", line 49, in run\r\n fh.write('\\n'.join(patterns))\r\n File \"/usr/lib/python3.6/tempfile.py\", line 624, in func_wrapper\r\n return func(*args, **kwargs)\r\nTypeError: a bytes-like object is required, not 'str'\r\n\r\n2021-04-28 17:13:26 [ERROR] GrepTask Task failed with exception: [a bytes-like object is required, not 'str']\r\n2021-04-28 17:13:26 [INFO] Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/__init__.py\", line 893, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"/usr/local/lib/python3.6/dist-packages/turbinia-20210330-py3.6.egg/turbinia/workers/grep.py\", line 49, in run\r\n fh.write('\\n'.join(patterns))\r\n File \"/usr/lib/python3.6/tempfile.py\", line 624, in func_wrapper\r\n return func(*args, **kwargs)\r\nTypeError: a bytes-like object is required, not 'str'\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task to filter a text file using extended regular expression patterns.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nfrom tempfile import NamedTemporaryFile\n\nfrom turbinia.evidence import FilteredTextFile\nfrom turbinia.workers import TurbiniaTask\n\n\nclass GrepTask(TurbiniaTask):\n \"\"\"Filter input based on extended regular expression patterns.\"\"\"\n\n def run(self, evidence, result):\n \"\"\"Run grep binary.\n\n Args:\n evidence (Evidence object): The evidence we will process\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n patterns = evidence.config.get('filter_patterns')\n if not patterns:\n result.close(self, success=True, status='No patterns supplied, exit task')\n return result\n\n # Create temporary file to write patterns to.\n # Used as input to grep (-f).\n with NamedTemporaryFile(dir=self.output_dir, delete=False) as fh:\n patterns_file_path = fh.name\n 
fh.write('\\n'.join(patterns))\n\n # Create a path that we can write the new file to.\n base_name = os.path.basename(evidence.local_path)\n output_file_path = os.path.join(\n self.output_dir, '{0:s}.filtered'.format(base_name))\n\n output_evidence = FilteredTextFile(source_path=output_file_path)\n cmd = 'grep -E -b -n -f {0:s} {1:s} > {2:s}'.format(\n patterns_file_path, evidence.local_path, output_file_path)\n\n result.log('Running [{0:s}]'.format(cmd))\n ret, result = self.execute(\n cmd, result, new_evidence=[output_evidence], shell=True,\n success_codes=[0, 1])\n\n # Grep returns 0 on success and 1 if no results are found.\n if ret == 0:\n status = 'Grep Task found results in {0:s}'.format(evidence.name)\n result.close(self, success=True, status=status)\n elif ret == 1:\n status = 'Grep Task did not find any results in {0:s}'.format(\n evidence.name)\n result.close(self, success=True, status=status)\n else:\n result.close(self, success=False)\n\n return result\n", "path": "turbinia/workers/grep.py"}]} | 1,822 | 148 |
gh_patches_debug_37596 | rasdani/github-patches | git_diff | streamlink__streamlink-4550 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.useetv: log if no link has been found
<!--
Thanks for opening a pull request!
Before you continue, please make sure that you have read and understood the contribution guidelines, otherwise your changes may be rejected:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
If possible, run the tests, perform code linting and build the documentation locally on your system first to avoid unnecessary build failures:
https://streamlink.github.io/latest/developing.html#validating-changes
Also don't forget to add a meaningful description of your changes, so that the reviewing process is as simple as possible for the maintainers.
Thank you very much!
-->
**Why this PR?**
This PR adds a check for the case where no stream link is found. USeeTV doesn't provide all of its channels worldwide: some channels are restricted to viewers in Indonesia, and others require a subscription (beIN Asia, for example). A channel like SeaToday works, but a channel like this one:

will only show a geo-restriction message above the player, telling the end user that they have no access to the stream.
The same restriction is reflected inside the player, meaning no stream link can be scraped.
</issue>
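
The reasoning above (geo-blocked or subscription-only channels render a notice instead of a playable source, so there is no link to scrape) is easiest to see as a small "find the link or log why not" helper. This is an illustrative sketch, not the plugin's actual implementation: the geo-block marker string mirrors the one used in the diff recorded below, while the regex, logger setup, and sample page are assumptions.

```
import logging
import re

log = logging.getLogger(__name__)

STREAM_URL = re.compile(r'"(https://[^"]+?(?:playlist\.m3u8|manifest\.mpd)[^"]*)"', re.IGNORECASE)


def find_stream_url(page_html):
    # Restricted channels embed a notice instead of an HLS/DASH source.
    if "This service is not available in your Country" in page_html:
        log.error("The content is not available in your region")
        return None
    match = STREAM_URL.search(page_html)
    if match is None:
        log.error("No playlist.m3u8 or manifest.mpd link found on the page")
        return None
    return match.group(1)


print(find_stream_url('<script>var src = "https://example.com/live/playlist.m3u8?token=abc";</script>'))
```

The fuller change recorded below also distinguishes a subscription-required marker from the geo-block one and leaves the HLS/DASH handling itself untouched.
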
<code>
[start of src/streamlink/plugins/useetv.py]
1 """
2 $description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia.
3 $url useetv.com
4 $type live, vod
5 """
6
7 import re
8
9 from streamlink.plugin import Plugin, pluginmatcher
10 from streamlink.plugin.api import validate
11 from streamlink.stream.dash import DASHStream
12 from streamlink.stream.hls import HLSStream
13
14
15 @pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/"))
16 class UseeTV(Plugin):
17 def find_url(self):
18 url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""")
19
20 return self.session.http.get(self.url, schema=validate.Schema(
21 validate.parse_html(),
22 validate.any(
23 validate.all(
24 validate.xml_xpath_string("""
25 .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()
26 """),
27 str,
28 validate.transform(url_re.search),
29 validate.any(None, validate.all(validate.get(1), validate.url())),
30 ),
31 validate.all(
32 validate.xml_xpath_string(".//video[@id='video-player']/source/@src"),
33 validate.any(None, validate.url()),
34 ),
35 ),
36 ))
37
38 def _get_streams(self):
39 url = self.find_url()
40
41 if url and ".m3u8" in url:
42 return HLSStream.parse_variant_playlist(self.session, url)
43 elif url and ".mpd" in url:
44 return DASHStream.parse_manifest(self.session, url)
45
46
47 __plugin__ = UseeTV
48
[end of src/streamlink/plugins/useetv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py
--- a/src/streamlink/plugins/useetv.py
+++ b/src/streamlink/plugins/useetv.py
@@ -4,6 +4,7 @@
$type live, vod
"""
+import logging
import re
from streamlink.plugin import Plugin, pluginmatcher
@@ -11,32 +12,46 @@
from streamlink.stream.dash import DASHStream
from streamlink.stream.hls import HLSStream
+log = logging.getLogger(__name__)
+
@pluginmatcher(re.compile(r"https?://(?:www\.)?useetv\.com/"))
class UseeTV(Plugin):
- def find_url(self):
- url_re = re.compile(r"""['"](https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd)[^'"]+)['"]""")
+ def _get_streams(self):
+ root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html()))
+
+ for needle, errormsg in (
+ (
+ "This service is not available in your Country",
+ "The content is not available in your region",
+ ),
+ (
+ "Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack",
+ "The content is not available without a subscription",
+ ),
+ ):
+ if validate.Schema(validate.xml_xpath(f""".//script[contains(text(), '"{needle}"')]""")).validate(root):
+ log.error(errormsg)
+ return
- return self.session.http.get(self.url, schema=validate.Schema(
- validate.parse_html(),
+ url = validate.Schema(
validate.any(
validate.all(
validate.xml_xpath_string("""
.//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()
"""),
str,
- validate.transform(url_re.search),
- validate.any(None, validate.all(validate.get(1), validate.url())),
+ validate.transform(
+ re.compile(r"""(?P<q>['"])(?P<url>https://.*?/(?:[Pp]laylist\.m3u8|manifest\.mpd).+?)(?P=q)""").search
+ ),
+ validate.any(None, validate.all(validate.get("url"), validate.url())),
),
validate.all(
validate.xml_xpath_string(".//video[@id='video-player']/source/@src"),
validate.any(None, validate.url()),
),
- ),
- ))
-
- def _get_streams(self):
- url = self.find_url()
+ )
+ ).validate(root)
if url and ".m3u8" in url:
return HLSStream.parse_variant_playlist(self.session, url)
| {"golden_diff": "diff --git a/src/streamlink/plugins/useetv.py b/src/streamlink/plugins/useetv.py\n--- a/src/streamlink/plugins/useetv.py\n+++ b/src/streamlink/plugins/useetv.py\n@@ -4,6 +4,7 @@\n $type live, vod\n \"\"\"\n \n+import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n@@ -11,32 +12,46 @@\n from streamlink.stream.dash import DASHStream\n from streamlink.stream.hls import HLSStream\n \n+log = logging.getLogger(__name__)\n+\n \n @pluginmatcher(re.compile(r\"https?://(?:www\\.)?useetv\\.com/\"))\n class UseeTV(Plugin):\n- def find_url(self):\n- url_re = re.compile(r\"\"\"['\"](https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd)[^'\"]+)['\"]\"\"\")\n+ def _get_streams(self):\n+ root = self.session.http.get(self.url, schema=validate.Schema(validate.parse_html()))\n+\n+ for needle, errormsg in (\n+ (\n+ \"This service is not available in your Country\",\n+ \"The content is not available in your region\",\n+ ),\n+ (\n+ \"Silahkan login Menggunakan akun MyIndihome dan berlangganan minipack\",\n+ \"The content is not available without a subscription\",\n+ ),\n+ ):\n+ if validate.Schema(validate.xml_xpath(f\"\"\".//script[contains(text(), '\"{needle}\"')]\"\"\")).validate(root):\n+ log.error(errormsg)\n+ return\n \n- return self.session.http.get(self.url, schema=validate.Schema(\n- validate.parse_html(),\n+ url = validate.Schema(\n validate.any(\n validate.all(\n validate.xml_xpath_string(\"\"\"\n .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()\n \"\"\"),\n str,\n- validate.transform(url_re.search),\n- validate.any(None, validate.all(validate.get(1), validate.url())),\n+ validate.transform(\n+ re.compile(r\"\"\"(?P<q>['\"])(?P<url>https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd).+?)(?P=q)\"\"\").search\n+ ),\n+ validate.any(None, validate.all(validate.get(\"url\"), validate.url())),\n ),\n validate.all(\n validate.xml_xpath_string(\".//video[@id='video-player']/source/@src\"),\n validate.any(None, validate.url()),\n ),\n- ),\n- ))\n-\n- def _get_streams(self):\n- url = self.find_url()\n+ )\n+ ).validate(root)\n \n if url and \".m3u8\" in url:\n return HLSStream.parse_variant_playlist(self.session, url)\n", "issue": "plugins.useetv: log if no link has been found\n<!--\r\nThanks for opening a pull request!\r\n\r\nBefore you continue, please make sure that you have read and understood the contribution guidelines, otherwise your changes may be rejected:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nIf possible, run the tests, perform code linting and build the documentation locally on your system first to avoid unnecessary build failures:\r\nhttps://streamlink.github.io/latest/developing.html#validating-changes\r\n\r\nAlso don't forget to add a meaningful description of your changes, so that the reviewing process is as simple as possible for the maintainers.\r\n\r\nThank you very much!\r\n-->\r\n\r\n**Why this PR ?**\r\n\r\nThis PR has been made to verify if no link has been found. Indeed, USeeTV doesn't provide all his channels worldwide. Some channels are blocked for Indonesian people only, and some others need a subscription to work (see beIN Asia as an example). Some channels like SeaToday would work, but channels like this one : \r\n\r\nwill only show a Geo-restriction message above the player, telling the end-user he has no access to the stream. 
\r\n\r\nThis also reflects inside the player, meaning no link can be scraped.\r\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from UseeTV, owned by Telkom Indonesia.\n$url useetv.com\n$type live, vod\n\"\"\"\n\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\n\n\n@pluginmatcher(re.compile(r\"https?://(?:www\\.)?useetv\\.com/\"))\nclass UseeTV(Plugin):\n def find_url(self):\n url_re = re.compile(r\"\"\"['\"](https://.*?/(?:[Pp]laylist\\.m3u8|manifest\\.mpd)[^'\"]+)['\"]\"\"\")\n\n return self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n validate.all(\n validate.xml_xpath_string(\"\"\"\n .//script[contains(text(), 'laylist.m3u8') or contains(text(), 'manifest.mpd')][1]/text()\n \"\"\"),\n str,\n validate.transform(url_re.search),\n validate.any(None, validate.all(validate.get(1), validate.url())),\n ),\n validate.all(\n validate.xml_xpath_string(\".//video[@id='video-player']/source/@src\"),\n validate.any(None, validate.url()),\n ),\n ),\n ))\n\n def _get_streams(self):\n url = self.find_url()\n\n if url and \".m3u8\" in url:\n return HLSStream.parse_variant_playlist(self.session, url)\n elif url and \".mpd\" in url:\n return DASHStream.parse_manifest(self.session, url)\n\n\n__plugin__ = UseeTV\n", "path": "src/streamlink/plugins/useetv.py"}]} | 1,316 | 618 |
gh_patches_debug_12345 | rasdani/github-patches | git_diff | meltano__meltano-7636 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: When meltano.yml is empty, no error message is printed. Rather, it just mentions to reach out to community
### Meltano Version
2.19.0
### Python Version
3.9
### Bug scope
CLI (options, error messages, logging, etc.)
### Operating System
Windows - WSL(Ubuntu)
### Description
When `meltano.yml` is empty, `cli` (`meltano.cli.__init__.py: 105`) raises an `EmptyMeltanoFileException` whenever we try to run any command such as `meltano add` or `meltano ui`. But since the exception carries no message, the CLI just prints the troubleshooting message and blank lines, as follows:
```
Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to
join our friendly Slack community.
```
### Code
_No response_
</issue>
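
The behaviour described above follows from the exception carrying no message: `str()` of a bare `Exception` subclass instance is empty, so a generic CLI handler has nothing to print besides its troubleshooting footer. A stripped-down illustration follows; these are sketch classes, not Meltano's real ones, and the wording of the second one simply mirrors the fix recorded later in this row:

```
class EmptyMeltanoFileException(Exception):
    """Old behaviour: raised with no message attached."""


class FriendlyEmptyMeltanoFileException(Exception):
    """Sketch of the fix: carry a reason and an instruction."""

    def __init__(self):
        super().__init__(
            "Your meltano.yml file is empty. "
            "Please update your meltano file with a valid configuration."
        )


for exc_cls in (EmptyMeltanoFileException, FriendlyEmptyMeltanoFileException):
    try:
        raise exc_cls()
    except Exception as exc:
        print(f"{exc_cls.__name__}: {exc}")   # the first line ends right after the colon
```

The accompanying diff takes the same route by turning `EmptyMeltanoFileException` into a `MeltanoError` built from a reason and an instruction.
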
<code>
[start of src/meltano/core/error.py]
1 """Base Error classes."""
2
3 from __future__ import annotations
4
5 import typing as t
6 from asyncio.streams import StreamReader
7 from asyncio.subprocess import Process
8 from enum import Enum
9
10 if t.TYPE_CHECKING:
11 from meltano.core.project import Project
12
13
14 class ExitCode(int, Enum): # noqa: D101
15 OK = 0
16 FAIL = 1
17 NO_RETRY = 2
18
19
20 class MeltanoError(Exception):
21 """Base class for all user-facing errors."""
22
23 def __init__(
24 self,
25 reason: str,
26 instruction: str | None = None,
27 *args: t.Any,
28 **kwargs: t.Any,
29 ) -> None:
30 """Initialize a MeltanoError.
31
32 Args:
33 reason: A short explanation of the error.
34 instruction: A short instruction on how to fix the error.
35 args: Additional arguments to pass to the base exception class.
36 kwargs: Keyword arguments to pass to the base exception class.
37 """
38 self.reason = reason
39 self.instruction = instruction
40 super().__init__(reason, instruction, *args, **kwargs)
41
42 def __str__(self) -> str:
43 """Return a string representation of the error.
44
45 Returns:
46 A string representation of the error.
47 """
48 return (
49 f"{self.reason}. {self.instruction}."
50 if self.instruction
51 else f"{self.reason}."
52 )
53
54
55 class Error(Exception):
56 """Base exception for ELT errors."""
57
58 def exit_code(self): # noqa: D102
59 return ExitCode.FAIL
60
61
62 class ExtractError(Error):
63 """Error in the extraction process, like API errors."""
64
65 def exit_code(self): # noqa: D102
66 return ExitCode.NO_RETRY
67
68
69 class AsyncSubprocessError(Exception):
70 """Happens when an async subprocess exits with a resultcode != 0."""
71
72 def __init__(
73 self,
74 message: str,
75 process: Process,
76 stderr: str | None = None,
77 ): # noqa: DAR101
78 """Initialize AsyncSubprocessError."""
79 self.process = process
80 self._stderr: str | StreamReader | None = stderr or process.stderr
81 super().__init__(message)
82
83 @property
84 async def stderr(self) -> str | None:
85 """Return the output of the process to stderr."""
86 if not self._stderr: # noqa: DAR201
87 return None
88 elif not isinstance(self._stderr, str):
89 stream = await self._stderr.read()
90 self._stderr = stream.decode("utf-8")
91
92 return self._stderr
93
94
95 class PluginInstallError(Exception):
96 """Exception for when a plugin fails to install."""
97
98
99 class PluginInstallWarning(Exception):
100 """Exception for when a plugin optional optional step fails to install."""
101
102
103 class EmptyMeltanoFileException(Exception):
104 """Exception for empty meltano.yml file."""
105
106
107 class MeltanoConfigurationError(MeltanoError):
108 """Exception for when Meltano is inproperly configured."""
109
110
111 class ProjectNotFound(Error):
112 """A Project is instantiated outside of a meltano project structure."""
113
114 def __init__(self, project: Project):
115 """Instantiate the error.
116
117 Args:
118 project: the name of the project which cannot be found
119 """
120 super().__init__(
121 f"Cannot find `{project.meltanofile}`. Are you in a meltano project?",
122 )
123
124
125 class ProjectReadonly(Error):
126 """Attempting to update a readonly project."""
127
128 def __init__(self):
129 """Instantiate the error."""
130 super().__init__("This Meltano project is deployed as read-only")
131
[end of src/meltano/core/error.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/core/error.py b/src/meltano/core/error.py
--- a/src/meltano/core/error.py
+++ b/src/meltano/core/error.py
@@ -100,9 +100,15 @@
"""Exception for when a plugin optional optional step fails to install."""
-class EmptyMeltanoFileException(Exception):
+class EmptyMeltanoFileException(MeltanoError):
"""Exception for empty meltano.yml file."""
+ def __init__(self) -> None:
+ """Instantiate the error."""
+ reason = "Your meltano.yml file is empty"
+ instruction = "Please update your meltano file with a valid configuration"
+ super().__init__(reason, instruction)
+
class MeltanoConfigurationError(MeltanoError):
"""Exception for when Meltano is inproperly configured."""
| {"golden_diff": "diff --git a/src/meltano/core/error.py b/src/meltano/core/error.py\n--- a/src/meltano/core/error.py\n+++ b/src/meltano/core/error.py\n@@ -100,9 +100,15 @@\n \"\"\"Exception for when a plugin optional optional step fails to install.\"\"\"\n \n \n-class EmptyMeltanoFileException(Exception):\n+class EmptyMeltanoFileException(MeltanoError):\n \"\"\"Exception for empty meltano.yml file.\"\"\"\n \n+ def __init__(self) -> None:\n+ \"\"\"Instantiate the error.\"\"\"\n+ reason = \"Your meltano.yml file is empty\"\n+ instruction = \"Please update your meltano file with a valid configuration\"\n+ super().__init__(reason, instruction)\n+\n \n class MeltanoConfigurationError(MeltanoError):\n \"\"\"Exception for when Meltano is inproperly configured.\"\"\"\n", "issue": "bug: When meltano.yml is empty, no error message is printed. Rather, it just mentions to reach out to community\n### Meltano Version\r\n\r\n2.19.0\r\n\r\n### Python Version\r\n\r\n3.9\r\n\r\n### Bug scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Operating System\r\n\r\nWindows - WSL(Ubuntu)\r\n\r\n### Description\r\n\r\nwhen `meltano.yml` is empty, `cli`(`meltano.cli.__init__.py: 105`) raises `EmptyMeltanoFileException` exception whenever we try to run any command such as `meltano add` or `meltano ui`. But, since there's no exception message, it just prints the troubleshooting message and blank lines as follows\r\n\r\n```\r\nNeed help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to\r\njoin our friendly Slack community.\r\n\r\n```\r\n\r\n\r\n### Code\r\n\r\n_No response_\n", "before_files": [{"content": "\"\"\"Base Error classes.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\nfrom asyncio.streams import StreamReader\nfrom asyncio.subprocess import Process\nfrom enum import Enum\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n\n\nclass ExitCode(int, Enum): # noqa: D101\n OK = 0\n FAIL = 1\n NO_RETRY = 2\n\n\nclass MeltanoError(Exception):\n \"\"\"Base class for all user-facing errors.\"\"\"\n\n def __init__(\n self,\n reason: str,\n instruction: str | None = None,\n *args: t.Any,\n **kwargs: t.Any,\n ) -> None:\n \"\"\"Initialize a MeltanoError.\n\n Args:\n reason: A short explanation of the error.\n instruction: A short instruction on how to fix the error.\n args: Additional arguments to pass to the base exception class.\n kwargs: Keyword arguments to pass to the base exception class.\n \"\"\"\n self.reason = reason\n self.instruction = instruction\n super().__init__(reason, instruction, *args, **kwargs)\n\n def __str__(self) -> str:\n \"\"\"Return a string representation of the error.\n\n Returns:\n A string representation of the error.\n \"\"\"\n return (\n f\"{self.reason}. 
{self.instruction}.\"\n if self.instruction\n else f\"{self.reason}.\"\n )\n\n\nclass Error(Exception):\n \"\"\"Base exception for ELT errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.FAIL\n\n\nclass ExtractError(Error):\n \"\"\"Error in the extraction process, like API errors.\"\"\"\n\n def exit_code(self): # noqa: D102\n return ExitCode.NO_RETRY\n\n\nclass AsyncSubprocessError(Exception):\n \"\"\"Happens when an async subprocess exits with a resultcode != 0.\"\"\"\n\n def __init__(\n self,\n message: str,\n process: Process,\n stderr: str | None = None,\n ): # noqa: DAR101\n \"\"\"Initialize AsyncSubprocessError.\"\"\"\n self.process = process\n self._stderr: str | StreamReader | None = stderr or process.stderr\n super().__init__(message)\n\n @property\n async def stderr(self) -> str | None:\n \"\"\"Return the output of the process to stderr.\"\"\"\n if not self._stderr: # noqa: DAR201\n return None\n elif not isinstance(self._stderr, str):\n stream = await self._stderr.read()\n self._stderr = stream.decode(\"utf-8\")\n\n return self._stderr\n\n\nclass PluginInstallError(Exception):\n \"\"\"Exception for when a plugin fails to install.\"\"\"\n\n\nclass PluginInstallWarning(Exception):\n \"\"\"Exception for when a plugin optional optional step fails to install.\"\"\"\n\n\nclass EmptyMeltanoFileException(Exception):\n \"\"\"Exception for empty meltano.yml file.\"\"\"\n\n\nclass MeltanoConfigurationError(MeltanoError):\n \"\"\"Exception for when Meltano is inproperly configured.\"\"\"\n\n\nclass ProjectNotFound(Error):\n \"\"\"A Project is instantiated outside of a meltano project structure.\"\"\"\n\n def __init__(self, project: Project):\n \"\"\"Instantiate the error.\n\n Args:\n project: the name of the project which cannot be found\n \"\"\"\n super().__init__(\n f\"Cannot find `{project.meltanofile}`. Are you in a meltano project?\",\n )\n\n\nclass ProjectReadonly(Error):\n \"\"\"Attempting to update a readonly project.\"\"\"\n\n def __init__(self):\n \"\"\"Instantiate the error.\"\"\"\n super().__init__(\"This Meltano project is deployed as read-only\")\n", "path": "src/meltano/core/error.py"}]} | 1,818 | 187 |
gh_patches_debug_19026 | rasdani/github-patches | git_diff | Kinto__kinto-135 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing CORS header on /v1/buckets/default/collections/tasks/records
> 09:19:55,733 Cross-Origin Request Blocked: the Same Origin policy does not allow reading the remote resource at http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436512795672. Reason: the CORS header 'Access-Control-Allow-Origin' is missing.1 <unknown>
</issue>
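
A quick way to confirm the symptom is to request the endpoint with an `Origin` header and inspect the response: if `Access-Control-Allow-Origin` is absent, the browser refuses to expose the response to the page, which is exactly the console message quoted above. The diagnostic sketch below assumes a local Kinto instance on `0.0.0.0:8888` and the third-party `requests` package; authentication is omitted and the `Origin` value is arbitrary.

```
import requests

url = "http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records"
resp = requests.get(url, headers={"Origin": "http://localhost:8080"})

print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))  # None means the CORS header is missing
```

The fix recorded below re-applies the CORS headers to error responses raised while handling the default-bucket subrequest, so even error responses carry the header.
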
<code>
[start of kinto/views/buckets.py]
1 from six import text_type
2 from uuid import UUID
3
4 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
5 from pyramid.security import NO_PERMISSION_REQUIRED
6 from pyramid.view import view_config
7
8 from cliquet import resource
9 from cliquet.utils import hmac_digest, build_request
10
11 from kinto.views import NameGenerator
12
13
14 def create_bucket(request, bucket_id):
15 """Create a bucket if it doesn't exists."""
16 bucket_put = (request.method.lower() == 'put' and
17 request.path.endswith('buckets/default'))
18
19 if not bucket_put:
20 subrequest = build_request(request, {
21 'method': 'PUT',
22 'path': '/buckets/%s' % bucket_id,
23 'body': {"data": {}},
24 'headers': {'If-None-Match': '*'.encode('utf-8')}
25 })
26
27 try:
28 request.invoke_subrequest(subrequest)
29 except HTTPPreconditionFailed:
30 # The bucket already exists
31 pass
32
33
34 def create_collection(request, bucket_id):
35 subpath = request.matchdict['subpath']
36 if subpath.startswith('/collections/'):
37 collection_id = subpath.split('/')[2]
38 collection_put = (request.method.lower() == 'put' and
39 request.path.endswith(collection_id))
40 if not collection_put:
41 subrequest = build_request(request, {
42 'method': 'PUT',
43 'path': '/buckets/%s/collections/%s' % (
44 bucket_id, collection_id),
45 'body': {"data": {}},
46 'headers': {'If-None-Match': '*'.encode('utf-8')}
47 })
48 try:
49 request.invoke_subrequest(subrequest)
50 except HTTPPreconditionFailed:
51 # The collection already exists
52 pass
53
54
55 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
56 def default_bucket(request):
57 if request.method.lower() == 'options':
58 path = request.path.replace('default', 'unknown')
59 subrequest = build_request(request, {
60 'method': 'OPTIONS',
61 'path': path
62 })
63 return request.invoke_subrequest(subrequest)
64
65 if getattr(request, 'prefixed_userid', None) is None:
66 raise HTTPForbidden # Pass through the forbidden_view_config
67
68 settings = request.registry.settings
69 hmac_secret = settings['cliquet.userid_hmac_secret']
70 # Build the user unguessable bucket_id UUID from its user_id
71 digest = hmac_digest(hmac_secret, request.prefixed_userid)
72 bucket_id = text_type(UUID(digest[:32]))
73 path = request.path.replace('default', bucket_id)
74 querystring = request.url[(request.url.index(request.path) +
75 len(request.path)):]
76
77 # Make sure bucket exists
78 create_bucket(request, bucket_id)
79
80 # Make sure the collection exists
81 create_collection(request, bucket_id)
82
83 subrequest = build_request(request, {
84 'method': request.method,
85 'path': path + querystring,
86 'body': request.body
87 })
88
89 return request.invoke_subrequest(subrequest)
90
91
92 @resource.register(name='bucket',
93 collection_methods=('GET',),
94 collection_path='/buckets',
95 record_path='/buckets/{{id}}')
96 class Bucket(resource.ProtectedResource):
97 permissions = ('read', 'write', 'collection:create', 'group:create')
98
99 def __init__(self, *args, **kwargs):
100 super(Bucket, self).__init__(*args, **kwargs)
101 self.collection.id_generator = NameGenerator()
102
103 def get_parent_id(self, request):
104 # Buckets are not isolated by user, unlike Cliquet resources.
105 return ''
106
107 def delete(self):
108 result = super(Bucket, self).delete()
109
110 # Delete groups.
111 storage = self.collection.storage
112 parent_id = '/buckets/%s' % self.record_id
113 storage.delete_all(collection_id='group', parent_id=parent_id)
114
115 # Delete collections.
116 deleted = storage.delete_all(collection_id='collection',
117 parent_id=parent_id)
118
119 # Delete records.
120 id_field = self.collection.id_field
121 for collection in deleted:
122 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
123 collection[id_field])
124 storage.delete_all(collection_id='record', parent_id=parent_id)
125
126 return result
127
[end of kinto/views/buckets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -1,12 +1,13 @@
from six import text_type
from uuid import UUID
-from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
+from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,
+ HTTPException)
from pyramid.security import NO_PERMISSION_REQUIRED
from pyramid.view import view_config
from cliquet import resource
-from cliquet.utils import hmac_digest, build_request
+from cliquet.utils import hmac_digest, build_request, reapply_cors
from kinto.views import NameGenerator
@@ -86,7 +87,11 @@
'body': request.body
})
- return request.invoke_subrequest(subrequest)
+ try:
+ response = request.invoke_subrequest(subrequest)
+ except HTTPException as error:
+ response = reapply_cors(subrequest, error)
+ return response
@resource.register(name='bucket',
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -1,12 +1,13 @@\n from six import text_type\n from uuid import UUID\n \n-from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\n+from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n+ HTTPException)\n from pyramid.security import NO_PERMISSION_REQUIRED\n from pyramid.view import view_config\n \n from cliquet import resource\n-from cliquet.utils import hmac_digest, build_request\n+from cliquet.utils import hmac_digest, build_request, reapply_cors\n \n from kinto.views import NameGenerator\n \n@@ -86,7 +87,11 @@\n 'body': request.body\n })\n \n- return request.invoke_subrequest(subrequest)\n+ try:\n+ response = request.invoke_subrequest(subrequest)\n+ except HTTPException as error:\n+ response = reapply_cors(subrequest, error)\n+ return response\n \n \n @resource.register(name='bucket',\n", "issue": "Missing CORS header on /v1/buckets/default/collections/tasks/records\n> 09:19:55,733 Blocage d'une requ\u00eate multi-origines (Cross-Origin Request)\u00a0: la politique \u00ab\u00a0Same Origin\u00a0\u00bb ne permet pas de consulter la ressource distante situ\u00e9e sur http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436512795672. Raison\u00a0: l'en-t\u00eate CORS \u00ab\u00a0Access-Control-Allow-Origin\u00a0\u00bb est manquant.1 <inconnu>\n\n", "before_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user 
unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,867 | 231 |
gh_patches_debug_25220 | rasdani/github-patches | git_diff | pytorch__examples-189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[super_resolution]
def _get_orthogonal_init_weights(weights):
    fan_out = weights.size(0)
    fan_in = weights.size(1) * weights.size(2) * weights.size(3)
    u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)
    if u.shape == (fan_out, fan_in):
        return torch.Tensor(u.reshape(weights.size()))
    else:
        return torch.Tensor(v.reshape(weights.size()))
Why is the above operation performed?
</issue>
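
The question above has a concrete answer: the SVD of a Gaussian matrix of shape `(fan_out, fan_in)` returns `u` with orthonormal columns and `v` with orthonormal rows, and whichever of the two comes out with the full `(fan_out, fan_in)` shape is the semi-orthogonal matrix that can be reshaped onto the 4-D convolution weight; that is what the branch on `u.shape` selects. Below is a NumPy-only check of that property (the layer shapes are example values, not taken from the model):

```
import numpy as np


def orthogonal_like(fan_out, fan_in):
    a = np.random.normal(0.0, 1.0, (fan_out, fan_in))
    u, _, v = np.linalg.svd(a, full_matrices=False)
    return u if u.shape == (fan_out, fan_in) else v


w = orthogonal_like(64, 25)                    # "tall" weight: u is returned
print(np.allclose(w.T @ w, np.eye(25)))        # True: columns are orthonormal

w = orthogonal_like(32, 576)                   # "wide" weight: v is returned
print(np.allclose(w @ w.T, np.eye(32)))        # True: rows are orthonormal
```

The diff recorded below replaces this hand-rolled helper with `torch.nn.init.orthogonal`, which provides the same kind of orthogonal initialization directly.
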
<code>
[start of super_resolution/model.py]
1 import torch
2 import torch.nn as nn
3 from numpy.random import normal
4 from numpy.linalg import svd
5 from math import sqrt
6
7
8 def _get_orthogonal_init_weights(weights):
9 fan_out = weights.size(0)
10 fan_in = weights.size(1) * weights.size(2) * weights.size(3)
11
12 u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)
13
14 if u.shape == (fan_out, fan_in):
15 return torch.Tensor(u.reshape(weights.size()))
16 else:
17 return torch.Tensor(v.reshape(weights.size()))
18
19
20 class Net(nn.Module):
21 def __init__(self, upscale_factor):
22 super(Net, self).__init__()
23
24 self.relu = nn.ReLU()
25 self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
26 self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
27 self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
28 self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))
29 self.pixel_shuffle = nn.PixelShuffle(upscale_factor)
30
31 self._initialize_weights()
32
33 def forward(self, x):
34 x = self.relu(self.conv1(x))
35 x = self.relu(self.conv2(x))
36 x = self.relu(self.conv3(x))
37 x = self.pixel_shuffle(self.conv4(x))
38 return x
39
40 def _initialize_weights(self):
41 self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))
42 self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))
43 self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))
44 self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))
45
[end of super_resolution/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/super_resolution/model.py b/super_resolution/model.py
--- a/super_resolution/model.py
+++ b/super_resolution/model.py
@@ -1,22 +1,11 @@
import torch
import torch.nn as nn
+import torch.nn.init as init
from numpy.random import normal
from numpy.linalg import svd
from math import sqrt
-def _get_orthogonal_init_weights(weights):
- fan_out = weights.size(0)
- fan_in = weights.size(1) * weights.size(2) * weights.size(3)
-
- u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)
-
- if u.shape == (fan_out, fan_in):
- return torch.Tensor(u.reshape(weights.size()))
- else:
- return torch.Tensor(v.reshape(weights.size()))
-
-
class Net(nn.Module):
def __init__(self, upscale_factor):
super(Net, self).__init__()
@@ -38,7 +27,7 @@
return x
def _initialize_weights(self):
- self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))
- self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))
- self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))
- self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))
+ init.orthogonal(self.conv1.weight, init.gain('relu'))
+ init.orthogonal(self.conv2.weight, init.gain('relu'))
+ init.orthogonal(self.conv3.weight, init.gain('relu'))
+ init.orthogonal(self.conv4.weight)
| {"golden_diff": "diff --git a/super_resolution/model.py b/super_resolution/model.py\n--- a/super_resolution/model.py\n+++ b/super_resolution/model.py\n@@ -1,22 +1,11 @@\n import torch\n import torch.nn as nn\n+import torch.nn.init as init\n from numpy.random import normal\n from numpy.linalg import svd\n from math import sqrt\n \n \n-def _get_orthogonal_init_weights(weights):\n- fan_out = weights.size(0)\n- fan_in = weights.size(1) * weights.size(2) * weights.size(3)\n-\n- u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\n-\n- if u.shape == (fan_out, fan_in):\n- return torch.Tensor(u.reshape(weights.size()))\n- else:\n- return torch.Tensor(v.reshape(weights.size()))\n-\n-\n class Net(nn.Module):\n def __init__(self, upscale_factor):\n super(Net, self).__init__()\n@@ -38,7 +27,7 @@\n return x\n \n def _initialize_weights(self):\n- self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))\n- self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))\n- self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))\n- self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))\n+ init.orthogonal(self.conv1.weight, init.gain('relu'))\n+ init.orthogonal(self.conv2.weight, init.gain('relu'))\n+ init.orthogonal(self.conv3.weight, init.gain('relu'))\n+ init.orthogonal(self.conv4.weight)\n", "issue": "[super_resolution]\ndef _get_orthogonal_init_weights(weights):\r\n fan_out = weights.size(0)\r\n fan_in = weights.size(1) * weights.size(2) * weights.size(3)\r\n u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\r\n if u.shape == (fan_out, fan_in):\r\n return torch.Tensor(u.reshape(weights.size()))\r\n else:\r\n return torch.Tensor(v.reshape(weights.size()))\r\n\r\nWhy do the above operation\uff1f\n", "before_files": [{"content": "import torch\nimport torch.nn as nn\nfrom numpy.random import normal\nfrom numpy.linalg import svd\nfrom math import sqrt\n\n\ndef _get_orthogonal_init_weights(weights):\n fan_out = weights.size(0)\n fan_in = weights.size(1) * weights.size(2) * weights.size(3)\n\n u, _, v = svd(normal(0.0, 1.0, (fan_out, fan_in)), full_matrices=False)\n\n if u.shape == (fan_out, fan_in):\n return torch.Tensor(u.reshape(weights.size()))\n else:\n return torch.Tensor(v.reshape(weights.size()))\n\n\nclass Net(nn.Module):\n def __init__(self, upscale_factor):\n super(Net, self).__init__()\n\n self.relu = nn.ReLU()\n self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))\n self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))\n self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))\n self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))\n self.pixel_shuffle = nn.PixelShuffle(upscale_factor)\n\n self._initialize_weights()\n\n def forward(self, x):\n x = self.relu(self.conv1(x))\n x = self.relu(self.conv2(x))\n x = self.relu(self.conv3(x))\n x = self.pixel_shuffle(self.conv4(x))\n return x\n\n def _initialize_weights(self):\n self.conv1.weight.data.copy_(_get_orthogonal_init_weights(self.conv1.weight) * sqrt(2))\n self.conv2.weight.data.copy_(_get_orthogonal_init_weights(self.conv2.weight) * sqrt(2))\n self.conv3.weight.data.copy_(_get_orthogonal_init_weights(self.conv3.weight) * sqrt(2))\n self.conv4.weight.data.copy_(_get_orthogonal_init_weights(self.conv4.weight))\n", "path": "super_resolution/model.py"}]} | 1,214 | 402 |
gh_patches_debug_1214 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1241 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exiting opsdroid with ctrl+c fails with exception
<!-- Before you post an issue or if you are unsure about something join our matrix channel https://riot.im/app/#/room/#opsdroid-general:matrix.org and ask away! We are more than happy to help you. -->
# Description
I am trying to build a Slack bot using Opsdroid (master branch). When pressing `ctrl+c` to exit opsdroid, the process does not stop and throws an error.
## Steps to Reproduce
1. Start opsdroid and wait for it to run
```
opsdroid start
```
2. Press `ctrl+c` to exit the process
## Expected Functionality
The opsdroid process should exit on pressing `ctrl+c`.
## Experienced Functionality
The opsdroid process fails to exit with an exception. The debug log is as follows:
```
INFO opsdroid.logging: ========================================
INFO opsdroid.logging: Started opsdroid v0.16.0+82.g4c55e97
INFO opsdroid: ========================================
INFO opsdroid: You can customise your opsdroid by modifying your configuration.yaml
INFO opsdroid: Read more at: http://opsdroid.readthedocs.io/#configuration
INFO opsdroid: Watch the Get Started Videos at: http://bit.ly/2fnC0Fh
INFO opsdroid: Install Opsdroid Desktop at:
https://github.com/opsdroid/opsdroid-desktop/releases
INFO opsdroid: ========================================
WARNING opsdroid.loader: No databases in configuration.This will cause skills which store things in memory to lose data when opsdroid is restarted.
INFO opsdroid.connector.slack: Connecting to Slack
INFO opsdroid.connector.slack: Connected successfully
INFO opsdroid.web: Started web server on http://0.0.0.0:8080
INFO opsdroid.core: Opsdroid is now running, press ctrl+c to exit.
^CINFO opsdroid.core: Received stop signal, exiting.
INFO opsdroid.core: Removing skills...
INFO opsdroid.core: Removed hello
INFO opsdroid.core: Removed seen
INFO opsdroid.core: Removed help
INFO opsdroid.core: Stopping connector slack...
ERROR: Unhandled exception in opsdroid, exiting...
Caught exception
{'message': 'Task exception was never retrieved', 'exception': TypeError("object NoneType can't be used in 'await' expression",), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/daniccan/c8/OpsDroid/c8-alertbot/env/lib/python3.6/site-packages/opsdroid/core.py:147> exception=TypeError("object NoneType can't be used in 'await' expression",)>}
WARNING slack.rtm.client: Websocket was closed.
```
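
The `TypeError` in the log suggests that `RTMClient.stop()` is a plain synchronous method in this version of the slack client, so `await self.slack_rtm.stop()` ends up awaiting its `None` return value. A minimal sketch of that failure mode (hypothetical class, not the actual connector code):

```python
class FakeRTMClient:
    def stop(self):
        # synchronous cleanup, returns None
        return None

async def disconnect(client):
    # raises: TypeError: object NoneType can't be used in 'await' expression
    await client.stop()
```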
## Versions
- **Opsdroid version:** master branch in git
- **Python version:** 3.6.8
- **OS/Docker version:** Ubuntu 18.04 LTS
## Configuration File
Please include your version of the configuration file below.
```yaml
# Your code goes here.
welcome-message: true
connectors:
- name: slack
api-token: "<Bot OAuth Token>"
skills:
- name: hello
- name: seen
- name: help
```
## Additional Details
Any other details you wish to include such as screenshots, console messages, etc.
<!-- Love opsdroid? Please consider supporting our collective:
+👉 https://opencollective.com/opsdroid/donate -->
</issue>
<code>
[start of opsdroid/connector/slack/__init__.py]
1 """A connector for Slack."""
2 import logging
3 import re
4 import ssl
5 import certifi
6
7 import slack
8 from emoji import demojize
9
10 from opsdroid.connector import Connector, register_event
11 from opsdroid.events import Message, Reaction
12 from opsdroid.connector.slack.events import Blocks
13
14
15 _LOGGER = logging.getLogger(__name__)
16
17
18 class ConnectorSlack(Connector):
19 """A connector for Slack."""
20
21 def __init__(self, config, opsdroid=None):
22 """Create the connector."""
23 super().__init__(config, opsdroid=opsdroid)
24 _LOGGER.debug(_("Starting Slack connector"))
25 self.name = "slack"
26 self.default_target = config.get("default-room", "#general")
27 self.icon_emoji = config.get("icon-emoji", ":robot_face:")
28 self.token = config["api-token"]
29 self.timeout = config.get("connect-timeout", 10)
30 self.ssl_context = ssl.create_default_context(cafile=certifi.where())
31 self.slack = slack.WebClient(
32 token=self.token, run_async=True, ssl=self.ssl_context
33 )
34 self.slack_rtm = slack.RTMClient(
35 token=self.token, run_async=True, ssl=self.ssl_context
36 )
37 self.websocket = None
38 self.bot_name = config.get("bot-name", "opsdroid")
39 self.auth_info = None
40 self.user_info = None
41 self.bot_id = None
42 self.known_users = {}
43 self.keepalive = None
44 self.reconnecting = False
45 self.listening = True
46 self._message_id = 0
47
48 # Register callbacks
49 slack.RTMClient.on(event="message", callback=self.process_message)
50
51 async def connect(self):
52 """Connect to the chat service."""
53 _LOGGER.info(_("Connecting to Slack"))
54
55 try:
56 # The slack library recommends you call `self.slack_rtm.start()`` here but it
57 # seems to mess with the event loop's signal handlers which breaks opsdroid.
58 # Therefore we need to directly call the private `_connect_and_read` method
59 # instead. This method also blocks so we need to dispatch it to the loop as a task.
60 self.opsdroid.eventloop.create_task(self.slack_rtm._connect_and_read())
61
62 self.auth_info = (await self.slack.api_call("auth.test")).data
63 self.user_info = (
64 await self.slack.api_call(
65 "users.info",
66 http_verb="GET",
67 params={"user": self.auth_info["user_id"]},
68 )
69 ).data
70 self.bot_id = self.user_info["user"]["profile"]["bot_id"]
71
72 _LOGGER.debug(_("Connected as %s"), self.bot_name)
73 _LOGGER.debug(_("Using icon %s"), self.icon_emoji)
74 _LOGGER.debug(_("Default room is %s"), self.default_target)
75 _LOGGER.info(_("Connected successfully"))
76 except slack.errors.SlackApiError as error:
77 _LOGGER.error(
78 _(
79 "Unable to connect to Slack due to %s - "
80 "The Slack Connector will not be available."
81 ),
82 error,
83 )
84 except Exception:
85 await self.disconnect()
86 raise
87
88 async def disconnect(self):
89 """Disconnect from Slack."""
90 await self.slack_rtm.stop()
91 self.listening = False
92
93 async def listen(self):
94 """Listen for and parse new messages."""
95
96 async def process_message(self, **payload):
97 """Process a raw message and pass it to the parser."""
98 message = payload["data"]
99
100 # Ignore message edits
101 if "subtype" in message and message["subtype"] == "message_changed":
102 return
103
104 # Ignore own messages
105 if (
106 "subtype" in message
107 and message["subtype"] == "bot_message"
108 and message["bot_id"] == self.bot_id
109 ):
110 return
111
112 # Lookup username
113 _LOGGER.debug(_("Looking up sender username"))
114 try:
115 user_info = await self.lookup_username(message["user"])
116 except ValueError:
117 return
118
119 # Replace usernames in the message
120 _LOGGER.debug(_("Replacing userids in message with usernames"))
121 message["text"] = await self.replace_usernames(message["text"])
122
123 await self.opsdroid.parse(
124 Message(
125 message["text"],
126 user_info["name"],
127 message["channel"],
128 self,
129 raw_event=message,
130 )
131 )
132
133 @register_event(Message)
134 async def send_message(self, message):
135 """Respond with a message."""
136 _LOGGER.debug(
137 _("Responding with: '%s' in room %s"), message.text, message.target
138 )
139 await self.slack.api_call(
140 "chat.postMessage",
141 data={
142 "channel": message.target,
143 "text": message.text,
144 "as_user": False,
145 "username": self.bot_name,
146 "icon_emoji": self.icon_emoji,
147 },
148 )
149
150 @register_event(Blocks)
151 async def send_blocks(self, blocks):
152 """Respond with structured blocks."""
153 _LOGGER.debug(
154 _("Responding with interactive blocks in room %s"), blocks.target
155 )
156 await self.slack.api_call(
157 "chat.postMessage",
158 data={
159 "channel": blocks.target,
160 "username": self.bot_name,
161 "blocks": blocks.blocks,
162 "icon_emoji": self.icon_emoji,
163 },
164 )
165
166 @register_event(Reaction)
167 async def send_reaction(self, reaction):
168 """React to a message."""
169 emoji = demojize(reaction.emoji).replace(":", "")
170 _LOGGER.debug(_("Reacting with: %s"), emoji)
171 try:
172 await self.slack.api_call(
173 "reactions.add",
174 data={
175 "name": emoji,
176 "channel": reaction.target,
177 "timestamp": reaction.linked_event.raw_event["ts"],
178 },
179 )
180 except slack.errors.SlackApiError as error:
181 if "invalid_name" in str(error):
182 _LOGGER.warning(_("Slack does not support the emoji %s"), emoji)
183 else:
184 raise
185
186 async def lookup_username(self, userid):
187 """Lookup a username and cache it."""
188 if userid in self.known_users:
189 user_info = self.known_users[userid]
190 else:
191 response = await self.slack.users_info(user=userid)
192 user_info = response.data["user"]
193 if isinstance(user_info, dict):
194 self.known_users[userid] = user_info
195 else:
196 raise ValueError("Returned user is not a dict.")
197 return user_info
198
199 async def replace_usernames(self, message):
200 """Replace User ID with username in message text."""
201 userids = re.findall(r"\<\@([A-Z0-9]+)(?:\|.+)?\>", message)
202 for userid in userids:
203 user_info = await self.lookup_username(userid)
204 message = message.replace(
205 "<@{userid}>".format(userid=userid), user_info["name"]
206 )
207 return message
208
[end of opsdroid/connector/slack/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/connector/slack/__init__.py b/opsdroid/connector/slack/__init__.py
--- a/opsdroid/connector/slack/__init__.py
+++ b/opsdroid/connector/slack/__init__.py
@@ -87,7 +87,7 @@
async def disconnect(self):
"""Disconnect from Slack."""
- await self.slack_rtm.stop()
+ self.slack_rtm.stop()
self.listening = False
async def listen(self):
| {"golden_diff": "diff --git a/opsdroid/connector/slack/__init__.py b/opsdroid/connector/slack/__init__.py\n--- a/opsdroid/connector/slack/__init__.py\n+++ b/opsdroid/connector/slack/__init__.py\n@@ -87,7 +87,7 @@\n \n async def disconnect(self):\n \"\"\"Disconnect from Slack.\"\"\"\n- await self.slack_rtm.stop()\n+ self.slack_rtm.stop()\n self.listening = False\n \n async def listen(self):\n", "issue": "Exiting opsdroid with ctrl+c fails with exception\n<!-- Before you post an issue or if you are unsure about something join our matrix channel https://riot.im/app/#/room/#opsdroid-general:matrix.org and ask away! We are more than happy to help you. -->\r\n# Description\r\nI am trying to build a Slack bot using Opsdroid (master branch). When pressing `ctrl+c` to exit opsdroid, the process does not stop and throws an error.\r\n\r\n\r\n## Steps to Reproduce\r\n1. Start opsdroid and wait for it to run\r\n\r\n```\r\nopsdroid start\r\n```\r\n\r\n2. Press `ctrl+c` to exit the process\r\n\r\n\r\n## Expected Functionality\r\nThe opsdroid process should exit on pressing `ctrl+c`.\r\n\r\n\r\n## Experienced Functionality\r\nThe opsdroid process fails to exit with an exception. The debug log is as follows:\r\n\r\n```\r\nINFO opsdroid.logging: ========================================\r\nINFO opsdroid.logging: Started opsdroid v0.16.0+82.g4c55e97\r\nINFO opsdroid: ========================================\r\nINFO opsdroid: You can customise your opsdroid by modifying your configuration.yaml\r\nINFO opsdroid: Read more at: http://opsdroid.readthedocs.io/#configuration\r\nINFO opsdroid: Watch the Get Started Videos at: http://bit.ly/2fnC0Fh\r\nINFO opsdroid: Install Opsdroid Desktop at: \r\nhttps://github.com/opsdroid/opsdroid-desktop/releases\r\nINFO opsdroid: ========================================\r\nWARNING opsdroid.loader: No databases in configuration.This will cause skills which store things in memory to lose data when opsdroid is restarted.\r\nINFO opsdroid.connector.slack: Connecting to Slack\r\nINFO opsdroid.connector.slack: Connected successfully\r\nINFO opsdroid.web: Started web server on http://0.0.0.0:8080\r\nINFO opsdroid.core: Opsdroid is now running, press ctrl+c to exit.\r\n^CINFO opsdroid.core: Received stop signal, exiting.\r\nINFO opsdroid.core: Removing skills...\r\nINFO opsdroid.core: Removed hello\r\nINFO opsdroid.core: Removed seen\r\nINFO opsdroid.core: Removed help\r\nINFO opsdroid.core: Stopping connector slack...\r\nERROR: Unhandled exception in opsdroid, exiting...\r\nCaught exception\r\n{'message': 'Task exception was never retrieved', 'exception': TypeError(\"object NoneType can't be used in 'await' expression\",), 'future': <Task finished coro=<OpsDroid.handle_signal() done, defined at /home/daniccan/c8/OpsDroid/c8-alertbot/env/lib/python3.6/site-packages/opsdroid/core.py:147> exception=TypeError(\"object NoneType can't be used in 'await' expression\",)>}\r\nWARNING slack.rtm.client: Websocket was closed.\r\n```\r\n\r\n## Versions\r\n- **Opsdroid version:** master branch in git\r\n- **Python version:** 3.6.8\r\n- **OS/Docker version:** Ubuntu 18.04 LTS\r\n\r\n## Configuration File\r\nPlease include your version of the configuration file below.\r\n\r\n```yaml\r\n# Your code goes here.\r\nwelcome-message: true\r\n\r\nconnectors:\r\n - name: slack\r\n api-token: \"<Bot OAuth Token>\"\r\n\r\nskills:\r\n - name: hello\r\n - name: seen\r\n - name: help\r\n```\r\n\r\n## Additional Details\r\nAny other details you wish to include such as screenshots, console messages, 
etc.\r\n\r\n\r\n<!-- Love opsdroid? Please consider supporting our collective:\r\n +\ud83d\udc49 https://opencollective.com/opsdroid/donate -->\r\n\n", "before_files": [{"content": "\"\"\"A connector for Slack.\"\"\"\nimport logging\nimport re\nimport ssl\nimport certifi\n\nimport slack\nfrom emoji import demojize\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message, Reaction\nfrom opsdroid.connector.slack.events import Blocks\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass ConnectorSlack(Connector):\n \"\"\"A connector for Slack.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n _LOGGER.debug(_(\"Starting Slack connector\"))\n self.name = \"slack\"\n self.default_target = config.get(\"default-room\", \"#general\")\n self.icon_emoji = config.get(\"icon-emoji\", \":robot_face:\")\n self.token = config[\"api-token\"]\n self.timeout = config.get(\"connect-timeout\", 10)\n self.ssl_context = ssl.create_default_context(cafile=certifi.where())\n self.slack = slack.WebClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.slack_rtm = slack.RTMClient(\n token=self.token, run_async=True, ssl=self.ssl_context\n )\n self.websocket = None\n self.bot_name = config.get(\"bot-name\", \"opsdroid\")\n self.auth_info = None\n self.user_info = None\n self.bot_id = None\n self.known_users = {}\n self.keepalive = None\n self.reconnecting = False\n self.listening = True\n self._message_id = 0\n\n # Register callbacks\n slack.RTMClient.on(event=\"message\", callback=self.process_message)\n\n async def connect(self):\n \"\"\"Connect to the chat service.\"\"\"\n _LOGGER.info(_(\"Connecting to Slack\"))\n\n try:\n # The slack library recommends you call `self.slack_rtm.start()`` here but it\n # seems to mess with the event loop's signal handlers which breaks opsdroid.\n # Therefore we need to directly call the private `_connect_and_read` method\n # instead. 
This method also blocks so we need to dispatch it to the loop as a task.\n self.opsdroid.eventloop.create_task(self.slack_rtm._connect_and_read())\n\n self.auth_info = (await self.slack.api_call(\"auth.test\")).data\n self.user_info = (\n await self.slack.api_call(\n \"users.info\",\n http_verb=\"GET\",\n params={\"user\": self.auth_info[\"user_id\"]},\n )\n ).data\n self.bot_id = self.user_info[\"user\"][\"profile\"][\"bot_id\"]\n\n _LOGGER.debug(_(\"Connected as %s\"), self.bot_name)\n _LOGGER.debug(_(\"Using icon %s\"), self.icon_emoji)\n _LOGGER.debug(_(\"Default room is %s\"), self.default_target)\n _LOGGER.info(_(\"Connected successfully\"))\n except slack.errors.SlackApiError as error:\n _LOGGER.error(\n _(\n \"Unable to connect to Slack due to %s - \"\n \"The Slack Connector will not be available.\"\n ),\n error,\n )\n except Exception:\n await self.disconnect()\n raise\n\n async def disconnect(self):\n \"\"\"Disconnect from Slack.\"\"\"\n await self.slack_rtm.stop()\n self.listening = False\n\n async def listen(self):\n \"\"\"Listen for and parse new messages.\"\"\"\n\n async def process_message(self, **payload):\n \"\"\"Process a raw message and pass it to the parser.\"\"\"\n message = payload[\"data\"]\n\n # Ignore message edits\n if \"subtype\" in message and message[\"subtype\"] == \"message_changed\":\n return\n\n # Ignore own messages\n if (\n \"subtype\" in message\n and message[\"subtype\"] == \"bot_message\"\n and message[\"bot_id\"] == self.bot_id\n ):\n return\n\n # Lookup username\n _LOGGER.debug(_(\"Looking up sender username\"))\n try:\n user_info = await self.lookup_username(message[\"user\"])\n except ValueError:\n return\n\n # Replace usernames in the message\n _LOGGER.debug(_(\"Replacing userids in message with usernames\"))\n message[\"text\"] = await self.replace_usernames(message[\"text\"])\n\n await self.opsdroid.parse(\n Message(\n message[\"text\"],\n user_info[\"name\"],\n message[\"channel\"],\n self,\n raw_event=message,\n )\n )\n\n @register_event(Message)\n async def send_message(self, message):\n \"\"\"Respond with a message.\"\"\"\n _LOGGER.debug(\n _(\"Responding with: '%s' in room %s\"), message.text, message.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": message.target,\n \"text\": message.text,\n \"as_user\": False,\n \"username\": self.bot_name,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Blocks)\n async def send_blocks(self, blocks):\n \"\"\"Respond with structured blocks.\"\"\"\n _LOGGER.debug(\n _(\"Responding with interactive blocks in room %s\"), blocks.target\n )\n await self.slack.api_call(\n \"chat.postMessage\",\n data={\n \"channel\": blocks.target,\n \"username\": self.bot_name,\n \"blocks\": blocks.blocks,\n \"icon_emoji\": self.icon_emoji,\n },\n )\n\n @register_event(Reaction)\n async def send_reaction(self, reaction):\n \"\"\"React to a message.\"\"\"\n emoji = demojize(reaction.emoji).replace(\":\", \"\")\n _LOGGER.debug(_(\"Reacting with: %s\"), emoji)\n try:\n await self.slack.api_call(\n \"reactions.add\",\n data={\n \"name\": emoji,\n \"channel\": reaction.target,\n \"timestamp\": reaction.linked_event.raw_event[\"ts\"],\n },\n )\n except slack.errors.SlackApiError as error:\n if \"invalid_name\" in str(error):\n _LOGGER.warning(_(\"Slack does not support the emoji %s\"), emoji)\n else:\n raise\n\n async def lookup_username(self, userid):\n \"\"\"Lookup a username and cache it.\"\"\"\n if userid in self.known_users:\n user_info = self.known_users[userid]\n else:\n 
response = await self.slack.users_info(user=userid)\n user_info = response.data[\"user\"]\n if isinstance(user_info, dict):\n self.known_users[userid] = user_info\n else:\n raise ValueError(\"Returned user is not a dict.\")\n return user_info\n\n async def replace_usernames(self, message):\n \"\"\"Replace User ID with username in message text.\"\"\"\n userids = re.findall(r\"\\<\\@([A-Z0-9]+)(?:\\|.+)?\\>\", message)\n for userid in userids:\n user_info = await self.lookup_username(userid)\n message = message.replace(\n \"<@{userid}>\".format(userid=userid), user_info[\"name\"]\n )\n return message\n", "path": "opsdroid/connector/slack/__init__.py"}]} | 3,351 | 117 |
gh_patches_debug_10562 | rasdani/github-patches | git_diff | plotly__plotly.py-2132 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plotly.express import raises ModuleNotFound in environment without pandas.
Importing plotly.express when pandas is not available raises `ModuleNotFoundError: No module named 'pandas'`, instead of the intended `ImportError: Plotly express requires pandas to be installed.`
This happens on `from ._imshow import imshow`.
Perhaps this import should be moved below the code that will output a more helpful message?
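
A rough sketch of that reordering, keeping the existing `optional_imports` check as the gate (error message wording taken from what is already in the file):

```python
from plotly import optional_imports

pd = optional_imports.get_module("pandas")
if pd is None:
    raise ImportError("Plotly express requires pandas to be installed.")

# only pull in submodules that need pandas once the check has passed
from ._imshow import imshow
```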
</issue>
<code>
[start of packages/python/plotly/plotly/express/__init__.py]
1 """
2 `plotly.express` is a terse, consistent, high-level wrapper around `plotly.graph_objects`
3 for rapid data exploration and figure generation. Learn more at https://plotly.express/
4 """
5 from __future__ import absolute_import
6 from plotly import optional_imports
7 from ._imshow import imshow
8
9 pd = optional_imports.get_module("pandas")
10 if pd is None:
11 raise ImportError(
12 """\
13 Plotly express requires pandas to be installed."""
14 )
15
16 from ._chart_types import ( # noqa: F401
17 scatter,
18 scatter_3d,
19 scatter_polar,
20 scatter_ternary,
21 scatter_mapbox,
22 scatter_geo,
23 line,
24 line_3d,
25 line_polar,
26 line_ternary,
27 line_mapbox,
28 line_geo,
29 area,
30 bar,
31 bar_polar,
32 violin,
33 box,
34 strip,
35 histogram,
36 scatter_matrix,
37 parallel_coordinates,
38 parallel_categories,
39 choropleth,
40 density_contour,
41 density_heatmap,
42 pie,
43 sunburst,
44 treemap,
45 funnel,
46 funnel_area,
47 choropleth_mapbox,
48 density_mapbox,
49 )
50
51
52 from ._core import ( # noqa: F401
53 set_mapbox_access_token,
54 defaults,
55 get_trendline_results,
56 )
57
58 from . import data, colors # noqa: F401
59
60 __all__ = [
61 "scatter",
62 "scatter_3d",
63 "scatter_polar",
64 "scatter_ternary",
65 "scatter_mapbox",
66 "scatter_geo",
67 "scatter_matrix",
68 "density_contour",
69 "density_heatmap",
70 "density_mapbox",
71 "line",
72 "line_3d",
73 "line_polar",
74 "line_ternary",
75 "line_mapbox",
76 "line_geo",
77 "parallel_coordinates",
78 "parallel_categories",
79 "area",
80 "bar",
81 "bar_polar",
82 "violin",
83 "box",
84 "strip",
85 "histogram",
86 "choropleth",
87 "choropleth_mapbox",
88 "pie",
89 "sunburst",
90 "treemap",
91 "funnel",
92 "funnel_area",
93 "imshow",
94 "data",
95 "colors",
96 "set_mapbox_access_token",
97 "get_trendline_results",
98 ]
99
[end of packages/python/plotly/plotly/express/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packages/python/plotly/plotly/express/__init__.py b/packages/python/plotly/plotly/express/__init__.py
--- a/packages/python/plotly/plotly/express/__init__.py
+++ b/packages/python/plotly/plotly/express/__init__.py
@@ -4,7 +4,6 @@
"""
from __future__ import absolute_import
from plotly import optional_imports
-from ._imshow import imshow
pd = optional_imports.get_module("pandas")
if pd is None:
@@ -13,6 +12,7 @@
Plotly express requires pandas to be installed."""
)
+from ._imshow import imshow
from ._chart_types import ( # noqa: F401
scatter,
scatter_3d,
| {"golden_diff": "diff --git a/packages/python/plotly/plotly/express/__init__.py b/packages/python/plotly/plotly/express/__init__.py\n--- a/packages/python/plotly/plotly/express/__init__.py\n+++ b/packages/python/plotly/plotly/express/__init__.py\n@@ -4,7 +4,6 @@\n \"\"\"\n from __future__ import absolute_import\n from plotly import optional_imports\n-from ._imshow import imshow\n \n pd = optional_imports.get_module(\"pandas\")\n if pd is None:\n@@ -13,6 +12,7 @@\n Plotly express requires pandas to be installed.\"\"\"\n )\n \n+from ._imshow import imshow\n from ._chart_types import ( # noqa: F401\n scatter,\n scatter_3d,\n", "issue": "plotly.express import raises ModuleNotFound in environment without pandas.\nImporting plotly.express when pandas is not available raises `ModuleNotFoundError: No module named 'pandas'`, instead of the intended `ImportError: Plotly express requires pandas to be installed.`\r\nThis happens on `from ._imshow import imshow`.\r\nPerhaps this import should be moved below the code that will output a more helpful message?\n", "before_files": [{"content": "\"\"\"\n`plotly.express` is a terse, consistent, high-level wrapper around `plotly.graph_objects`\nfor rapid data exploration and figure generation. Learn more at https://plotly.express/\n\"\"\"\nfrom __future__ import absolute_import\nfrom plotly import optional_imports\nfrom ._imshow import imshow\n\npd = optional_imports.get_module(\"pandas\")\nif pd is None:\n raise ImportError(\n \"\"\"\\\nPlotly express requires pandas to be installed.\"\"\"\n )\n\nfrom ._chart_types import ( # noqa: F401\n scatter,\n scatter_3d,\n scatter_polar,\n scatter_ternary,\n scatter_mapbox,\n scatter_geo,\n line,\n line_3d,\n line_polar,\n line_ternary,\n line_mapbox,\n line_geo,\n area,\n bar,\n bar_polar,\n violin,\n box,\n strip,\n histogram,\n scatter_matrix,\n parallel_coordinates,\n parallel_categories,\n choropleth,\n density_contour,\n density_heatmap,\n pie,\n sunburst,\n treemap,\n funnel,\n funnel_area,\n choropleth_mapbox,\n density_mapbox,\n)\n\n\nfrom ._core import ( # noqa: F401\n set_mapbox_access_token,\n defaults,\n get_trendline_results,\n)\n\nfrom . import data, colors # noqa: F401\n\n__all__ = [\n \"scatter\",\n \"scatter_3d\",\n \"scatter_polar\",\n \"scatter_ternary\",\n \"scatter_mapbox\",\n \"scatter_geo\",\n \"scatter_matrix\",\n \"density_contour\",\n \"density_heatmap\",\n \"density_mapbox\",\n \"line\",\n \"line_3d\",\n \"line_polar\",\n \"line_ternary\",\n \"line_mapbox\",\n \"line_geo\",\n \"parallel_coordinates\",\n \"parallel_categories\",\n \"area\",\n \"bar\",\n \"bar_polar\",\n \"violin\",\n \"box\",\n \"strip\",\n \"histogram\",\n \"choropleth\",\n \"choropleth_mapbox\",\n \"pie\",\n \"sunburst\",\n \"treemap\",\n \"funnel\",\n \"funnel_area\",\n \"imshow\",\n \"data\",\n \"colors\",\n \"set_mapbox_access_token\",\n \"get_trendline_results\",\n]\n", "path": "packages/python/plotly/plotly/express/__init__.py"}]} | 1,338 | 172 |
gh_patches_debug_14247 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1025 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Importing hypothesis mutates global warnings state
`hypothesis.errors` mutates the global warnings state:
https://github.com/HypothesisWorks/hypothesis-python/blob/master/src/hypothesis/errors.py#L182
This causes hypothesis to override any warnings settings that have already been applied. For example, setting `PYTHONWARNINGS="error"` will not be respected, because hypothesis changes `HypothesisDeprecationWarning` to be printed instead of raised.
The filter there should presumably not do anything if the user has already modified any warnings defaults.
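
A minimal reproduction sketch (hypothetical interpreter session, relying on the module-level `simplefilter` call visible in `errors.py` below):

```python
import warnings

warnings.simplefilter("error")   # user asks for warnings to raise
import hypothesis                # pulls in hypothesis.errors, which prepends its own filter

# the 'once' filter for HypothesisDeprecationWarning now matches before the user's
# 'error' filter, so the deprecation warning is printed once instead of raising
```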
</issue>
<code>
[start of src/hypothesis/errors.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2017 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import warnings
21
22
23 class HypothesisException(Exception):
24
25 """Generic parent class for exceptions thrown by Hypothesis."""
26
27
28 class CleanupFailed(HypothesisException):
29
30 """At least one cleanup task failed and no other exception was raised."""
31
32
33 class UnsatisfiedAssumption(HypothesisException):
34
35 """An internal error raised by assume.
36
37 If you're seeing this error something has gone wrong.
38
39 """
40
41
42 class BadTemplateDraw(HypothesisException):
43
44 """An internal error raised when something unfortunate happened during
45 template generation and you should restart the draw, preferably with a new
46 parameter.
47
48 This is not an error condition internally, but if you ever see this
49 in your code it's probably a Hypothesis bug
50
51 """
52
53
54 class NoSuchExample(HypothesisException):
55
56 """The condition we have been asked to satisfy appears to be always false.
57
58 This does not guarantee that no example exists, only that we were
59 unable to find one.
60
61 """
62
63 def __init__(self, condition_string, extra=''):
64 super(NoSuchExample, self).__init__(
65 'No examples found of condition %s%s' % (
66 condition_string, extra)
67 )
68
69
70 class DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover
71 """Hypothesis used to be able to detect exhaustive coverage of a search
72 space and no longer can.
73
74 This exception remains for compatibility reasons for now but can
75 never actually be thrown.
76
77 """
78
79
80 class NoExamples(HypothesisException):
81
82 """Raised when example() is called on a strategy but we cannot find any
83 examples after enough tries that we really should have been able to if this
84 was ever going to work."""
85
86
87 class Unsatisfiable(HypothesisException):
88
89 """We ran out of time or examples before we could find enough examples
90 which satisfy the assumptions of this hypothesis.
91
92 This could be because the function is too slow. If so, try upping
93 the timeout. It could also be because the function is using assume
94 in a way that is too hard to satisfy. If so, try writing a custom
95 strategy or using a better starting point (e.g if you are requiring
96 a list has unique values you could instead filter out all duplicate
97 values from the list)
98
99 """
100
101
102 class Flaky(HypothesisException):
103
104 """This function appears to fail non-deterministically: We have seen it
105 fail when passed this example at least once, but a subsequent invocation
106 did not fail.
107
108 Common causes for this problem are:
109 1. The function depends on external state. e.g. it uses an external
110 random number generator. Try to make a version that passes all the
111 relevant state in from Hypothesis.
112 2. The function is suffering from too much recursion and its failure
113 depends sensitively on where it's been called from.
114 3. The function is timing sensitive and can fail or pass depending on
115 how long it takes. Try breaking it up into smaller functions which
116 don't do that and testing those instead.
117
118 """
119
120
121 class Timeout(Unsatisfiable):
122
123 """We were unable to find enough examples that satisfied the preconditions
124 of this hypothesis in the amount of time allotted to us."""
125
126
127 class WrongFormat(HypothesisException, ValueError):
128
129 """An exception indicating you have attempted to serialize a value that
130 does not match the type described by this format."""
131
132
133 class BadData(HypothesisException, ValueError):
134
135 """The data that we got out of the database does not seem to match the data
136 we could have put into the database given this schema."""
137
138
139 class InvalidArgument(HypothesisException, TypeError):
140
141 """Used to indicate that the arguments to a Hypothesis function were in
142 some manner incorrect."""
143
144
145 class ResolutionFailed(InvalidArgument):
146
147 """Hypothesis had to resolve a type to a strategy, but this failed.
148
149 Type inference is best-effort, so this only happens when an
150 annotation exists but could not be resolved for a required argument
151 to the target of ``builds()``, or where the user passed ``infer``.
152
153 """
154
155
156 class InvalidState(HypothesisException):
157
158 """The system is not in a state where you were allowed to do that."""
159
160
161 class InvalidDefinition(HypothesisException, TypeError):
162
163 """Used to indicate that a class definition was not well put together and
164 has something wrong with it."""
165
166
167 class AbnormalExit(HypothesisException):
168
169 """Raised when a test running in a child process exits without returning or
170 raising an exception."""
171
172
173 class FailedHealthCheck(HypothesisException, Warning):
174 """Raised when a test fails a preliminary healthcheck that occurs before
175 execution."""
176
177 def __init__(self, message, check):
178 super(FailedHealthCheck, self).__init__(message)
179 self.health_check = check
180
181
182 class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):
183 pass
184
185
186 warnings.simplefilter('once', HypothesisDeprecationWarning)
187
188
189 class Frozen(HypothesisException):
190
191 """Raised when a mutation method has been called on a ConjectureData object
192 after freeze() has been called."""
193
194
195 class MultipleFailures(HypothesisException):
196 """Indicates that Hypothesis found more than one distinct bug when testing
197 your code."""
198
199
200 class DeadlineExceeded(HypothesisException):
201 """Raised when an individual test body has taken too long to run."""
202
203 def __init__(self, runtime, deadline):
204 super(DeadlineExceeded, self).__init__((
205 'Test took %.2fms, which exceeds the deadline of '
206 '%.2fms') % (runtime, deadline))
207 self.runtime = runtime
208 self.deadline = deadline
209
210
211 class StopTest(BaseException):
212
213 def __init__(self, testcounter):
214 super(StopTest, self).__init__(repr(testcounter))
215 self.testcounter = testcounter
216
[end of src/hypothesis/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/hypothesis/errors.py b/src/hypothesis/errors.py
--- a/src/hypothesis/errors.py
+++ b/src/hypothesis/errors.py
@@ -17,8 +17,6 @@
from __future__ import division, print_function, absolute_import
-import warnings
-
class HypothesisException(Exception):
@@ -179,13 +177,10 @@
self.health_check = check
-class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):
+class HypothesisDeprecationWarning(HypothesisException, FutureWarning):
pass
-warnings.simplefilter('once', HypothesisDeprecationWarning)
-
-
class Frozen(HypothesisException):
"""Raised when a mutation method has been called on a ConjectureData object
| {"golden_diff": "diff --git a/src/hypothesis/errors.py b/src/hypothesis/errors.py\n--- a/src/hypothesis/errors.py\n+++ b/src/hypothesis/errors.py\n@@ -17,8 +17,6 @@\n \n from __future__ import division, print_function, absolute_import\n \n-import warnings\n-\n \n class HypothesisException(Exception):\n \n@@ -179,13 +177,10 @@\n self.health_check = check\n \n \n-class HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):\n+class HypothesisDeprecationWarning(HypothesisException, FutureWarning):\n pass\n \n \n-warnings.simplefilter('once', HypothesisDeprecationWarning)\n-\n-\n class Frozen(HypothesisException):\n \n \"\"\"Raised when a mutation method has been called on a ConjectureData object\n", "issue": "Importing hypothesis mutates global warnings state\n`hypothesis.errors` mutates the global warnings state:\r\n\r\nhttps://github.com/HypothesisWorks/hypothesis-python/blob/master/src/hypothesis/errors.py#L182\r\n\r\nThis causes hypothesis to override any warnings settings that have already been applied. E.g., setting PYTHONWARNINGS=\"error\" will not be respected, because hypothesis will change HypothesisDeprecationWarnings to be instead printed.\r\n\r\nThe filter there should presumably not do anything if the user has already modified any warnings defaults.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport warnings\n\n\nclass HypothesisException(Exception):\n\n \"\"\"Generic parent class for exceptions thrown by Hypothesis.\"\"\"\n\n\nclass CleanupFailed(HypothesisException):\n\n \"\"\"At least one cleanup task failed and no other exception was raised.\"\"\"\n\n\nclass UnsatisfiedAssumption(HypothesisException):\n\n \"\"\"An internal error raised by assume.\n\n If you're seeing this error something has gone wrong.\n\n \"\"\"\n\n\nclass BadTemplateDraw(HypothesisException):\n\n \"\"\"An internal error raised when something unfortunate happened during\n template generation and you should restart the draw, preferably with a new\n parameter.\n\n This is not an error condition internally, but if you ever see this\n in your code it's probably a Hypothesis bug\n\n \"\"\"\n\n\nclass NoSuchExample(HypothesisException):\n\n \"\"\"The condition we have been asked to satisfy appears to be always false.\n\n This does not guarantee that no example exists, only that we were\n unable to find one.\n\n \"\"\"\n\n def __init__(self, condition_string, extra=''):\n super(NoSuchExample, self).__init__(\n 'No examples found of condition %s%s' % (\n condition_string, extra)\n )\n\n\nclass DefinitelyNoSuchExample(NoSuchExample): # pragma: no cover\n \"\"\"Hypothesis used to be able to detect exhaustive coverage of a search\n space and no longer can.\n\n This exception remains for compatibility reasons for now but can\n never actually be thrown.\n\n \"\"\"\n\n\nclass NoExamples(HypothesisException):\n\n \"\"\"Raised when example() is called on a strategy but we cannot find any\n examples after enough tries that we really should have been able to if this\n was ever going to work.\"\"\"\n\n\nclass Unsatisfiable(HypothesisException):\n\n \"\"\"We ran out of time or examples before we could find enough examples\n which satisfy the assumptions of this hypothesis.\n\n This could be because the function is too slow. If so, try upping\n the timeout. It could also be because the function is using assume\n in a way that is too hard to satisfy. If so, try writing a custom\n strategy or using a better starting point (e.g if you are requiring\n a list has unique values you could instead filter out all duplicate\n values from the list)\n\n \"\"\"\n\n\nclass Flaky(HypothesisException):\n\n \"\"\"This function appears to fail non-deterministically: We have seen it\n fail when passed this example at least once, but a subsequent invocation\n did not fail.\n\n Common causes for this problem are:\n 1. The function depends on external state. e.g. it uses an external\n random number generator. Try to make a version that passes all the\n relevant state in from Hypothesis.\n 2. The function is suffering from too much recursion and its failure\n depends sensitively on where it's been called from.\n 3. The function is timing sensitive and can fail or pass depending on\n how long it takes. 
Try breaking it up into smaller functions which\n don't do that and testing those instead.\n\n \"\"\"\n\n\nclass Timeout(Unsatisfiable):\n\n \"\"\"We were unable to find enough examples that satisfied the preconditions\n of this hypothesis in the amount of time allotted to us.\"\"\"\n\n\nclass WrongFormat(HypothesisException, ValueError):\n\n \"\"\"An exception indicating you have attempted to serialize a value that\n does not match the type described by this format.\"\"\"\n\n\nclass BadData(HypothesisException, ValueError):\n\n \"\"\"The data that we got out of the database does not seem to match the data\n we could have put into the database given this schema.\"\"\"\n\n\nclass InvalidArgument(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that the arguments to a Hypothesis function were in\n some manner incorrect.\"\"\"\n\n\nclass ResolutionFailed(InvalidArgument):\n\n \"\"\"Hypothesis had to resolve a type to a strategy, but this failed.\n\n Type inference is best-effort, so this only happens when an\n annotation exists but could not be resolved for a required argument\n to the target of ``builds()``, or where the user passed ``infer``.\n\n \"\"\"\n\n\nclass InvalidState(HypothesisException):\n\n \"\"\"The system is not in a state where you were allowed to do that.\"\"\"\n\n\nclass InvalidDefinition(HypothesisException, TypeError):\n\n \"\"\"Used to indicate that a class definition was not well put together and\n has something wrong with it.\"\"\"\n\n\nclass AbnormalExit(HypothesisException):\n\n \"\"\"Raised when a test running in a child process exits without returning or\n raising an exception.\"\"\"\n\n\nclass FailedHealthCheck(HypothesisException, Warning):\n \"\"\"Raised when a test fails a preliminary healthcheck that occurs before\n execution.\"\"\"\n\n def __init__(self, message, check):\n super(FailedHealthCheck, self).__init__(message)\n self.health_check = check\n\n\nclass HypothesisDeprecationWarning(HypothesisException, DeprecationWarning):\n pass\n\n\nwarnings.simplefilter('once', HypothesisDeprecationWarning)\n\n\nclass Frozen(HypothesisException):\n\n \"\"\"Raised when a mutation method has been called on a ConjectureData object\n after freeze() has been called.\"\"\"\n\n\nclass MultipleFailures(HypothesisException):\n \"\"\"Indicates that Hypothesis found more than one distinct bug when testing\n your code.\"\"\"\n\n\nclass DeadlineExceeded(HypothesisException):\n \"\"\"Raised when an individual test body has taken too long to run.\"\"\"\n\n def __init__(self, runtime, deadline):\n super(DeadlineExceeded, self).__init__((\n 'Test took %.2fms, which exceeds the deadline of '\n '%.2fms') % (runtime, deadline))\n self.runtime = runtime\n self.deadline = deadline\n\n\nclass StopTest(BaseException):\n\n def __init__(self, testcounter):\n super(StopTest, self).__init__(repr(testcounter))\n self.testcounter = testcounter\n", "path": "src/hypothesis/errors.py"}]} | 2,693 | 181 |
gh_patches_debug_25506 | rasdani/github-patches | git_diff | angr__angr-2677 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SimSegfaultException due to collision of stack and heap when tracing a binary
**Describe the bug.**
When tracing a CGC binary using a PoV for it, a `SimSegfaultException` is raised due to the collision of stack and heap.
**Environment Information.**
Platform: linux-x86_64
Python version: 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0]
######## angr #########
Python found it in /home/dnivra/angr-dev/angr/angr
Pip version angr 9.0.gitrolling
Git info:
Current commit 762becbf9c66d4798b8c23cfa512a7f893e2bcf9 from branch master
Checked out from remote origin: https://github.com/angr/angr
######## ailment #########
Python found it in /home/dnivra/angr-dev/ailment/ailment
Pip version ailment 9.0.gitrolling
Git info:
Current commit 4e2bba6f0299d1eda6ae570ceabd91eb8a0c72be from branch master
Checked out from remote origin: https://github.com/angr/ailment
######## cle #########
Python found it in /home/dnivra/angr-dev/cle/cle
Pip version cle 9.0.gitrolling
Git info:
Current commit 80dcd50abfaa70cbd5b2e360fe41b71406acbfb4 from branch master
Checked out from remote origin: https://github.com/angr/cle
######## pyvex #########
Python found it in /home/dnivra/angr-dev/pyvex/pyvex
Pip version pyvex 9.0.gitrolling
Git info:
Current commit 969ec1f10d3e3b15407ee986052aa4b6f2e9df05 from branch master
Checked out from remote origin: https://github.com/angr/pyvex
######## claripy #########
Python found it in /home/dnivra/angr-dev/claripy/claripy
Pip version claripy 9.0.gitrolling
Git info:
Current commit 34f31c487f7453f4666cd6fd1d529f417ff6ca08 from branch master
Checked out from remote origin: https://github.com/angr/claripy
######## archinfo #########
Python found it in /home/dnivra/angr-dev/archinfo/archinfo
Pip version archinfo 9.0.gitrolling
Git info:
Current commit 437b194538ccb0bf118b4b674613b88832b0b342 from branch master
Checked out from remote origin: https://github.com/angr/archinfo
######## z3 #########
Python found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3
Pip version z3-solver 4.8.9.0
Couldn't find git info
######## unicorn #########
Python found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn
Pip version unicorn 1.0.2rc4
Couldn't find git info
######### Native Module Info ##########
angr: <CDLL '/home/dnivra/angr-dev/angr/angr/lib/angr_native.so', handle 3375570 at 0x7f938b7ee220>
unicorn: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn/lib/libunicorn.so', handle 2aa9f70 at 0x7f93913291c0>
pyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x7f9391f39130>
z3: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3/lib/libz3.so', handle 2d099d0 at 0x7f938ea3a340>
**To Reproduce.**
[CROMU_00004-stack-heap-collision-repro.zip](https://github.com/angr/angr/files/6481681/CROMU_00004-stack-heap-collision-repro.zip) has the script, input, and binary needed to reproduce the issue. `SimSegfaultException` is raised in the `receive` syscall after block 0x804b87b is executed for the first time in the VEX engine (13th execution overall). It takes about 10 minutes for execution to reach this location.
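
A possible (untested) workaround sketch while tracing is to pre-grow the stack when the state is created, mirroring the `allocate_stack_pages` call that `SimCGC.state_blank` already makes for its default 20 pages; the 0x100 page count here is an assumption, and the stack-end constant comes from the CGC defaults in `angr/simos/cgc.py`:

```python
state = project.factory.entry_state()
if hasattr(state.memory, "allocate_stack_pages"):
    # pre-map 0x100 pages ending at the CGC stack end instead of only 20
    state.memory.allocate_stack_pages(0xbaaab000 - 1, 0x100 * 0x1000)
```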
</issue>
<code>
[start of angr/simos/cgc.py]
1 import logging
2
3 import claripy
4 from cle import BackedCGC
5
6 from ..procedures import SIM_LIBRARIES as L
7 from ..state_plugins import SimActionData
8 from .. import sim_options as o
9 from .userland import SimUserland
10
11 _l = logging.getLogger(name=__name__)
12
13
14 class SimCGC(SimUserland):
15 """
16 Environment configuration for the CGC DECREE platform
17 """
18
19 def __init__(self, project, **kwargs):
20 super(SimCGC, self).__init__(project,
21 syscall_library=L['cgcabi'],
22 syscall_addr_alignment=1,
23 name="CGC",
24 **kwargs)
25
26 # pylint: disable=arguments-differ
27 def state_blank(self, flag_page=None, **kwargs):
28 """
29 :param flag_page: Flag page content, either a string or a list of BV8s
30 """
31 # default stack as specified in the cgc abi
32 if kwargs.get('stack_end', None) is None:
33 kwargs['stack_end'] = 0xbaaab000
34 if kwargs.get('stack_size', None) is None:
35 kwargs['stack_size'] = 1024*1024*8
36
37 s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name
38
39 # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug
40 if hasattr(s.memory, 'allocate_stack_pages'):
41 s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)
42
43 # Map the flag page
44 if o.ABSTRACT_MEMORY not in s.options:
45 s.memory.map_region(0x4347c000, 4096, 1)
46
47 # Create the CGC plugin
48 s.get_plugin('cgc')
49
50 # Set up the flag page
51 if flag_page is None:
52 flag_page = [s.solver.BVS("cgc-flag-byte-%d" % i, 8, key=('flag', i), eternal=True) for i in range(0x1000)]
53 elif type(flag_page) is bytes:
54 flag_page = [s.solver.BVV(c, 8) for c in flag_page]
55 elif type(flag_page) is list:
56 pass
57 else:
58 raise ValueError("Bad flag page: expected None, bytestring, or list, but got %s" % type(flag_page))
59
60 s.cgc.flag_bytes = flag_page
61 if s.mode != 'static':
62 s.memory.store(0x4347c000, claripy.Concat(*s.cgc.flag_bytes), priv=True)
63
64 # set up the address for concrete transmits
65 s.unicorn.transmit_addr = self.syscall_from_number(2).addr
66
67 s.libc.max_str_len = 1000000
68 s.libc.max_strtol_len = 10
69 s.libc.max_memcpy_size = 0x100000
70 s.libc.max_buffer_size = 0x100000
71
72 return s
73
74 def state_entry(self, add_options=None, **kwargs):
75 if isinstance(self.project.loader.main_object, BackedCGC):
76 kwargs['permissions_backer'] = (True, self.project.loader.main_object.permissions_map)
77 if add_options is None:
78 add_options = set()
79 add_options.add(o.ZERO_FILL_UNCONSTRAINED_MEMORY)
80
81 state = super(SimCGC, self).state_entry(add_options=add_options, **kwargs)
82
83 if isinstance(self.project.loader.main_object, BackedCGC):
84 # Update allocation base
85 state.cgc.allocation_base = self.project.loader.main_object.current_allocation_base
86
87 # Do all the writes
88 writes_backer = self.project.loader.main_object.writes_backer
89 stdout = state.posix.get_fd(1)
90 pos = 0
91 for size in writes_backer:
92 if size == 0:
93 continue
94 str_to_write = state.solver.BVS('file_write', size*8)
95 a = SimActionData(
96 state,
97 'file_1_0',
98 'write',
99 addr=claripy.BVV(pos, state.arch.bits),
100 data=str_to_write,
101 size=size)
102 stdout.write_data(str_to_write)
103 state.history.add_action(a)
104 pos += size
105
106 else:
107 # Set CGC-specific variables
108 state.regs.eax = 0
109 state.regs.ebx = 0
110 state.regs.ecx = 0x4347c000
111 state.regs.edx = 0
112 state.regs.edi = 0
113 state.regs.esi = 0
114 state.regs.esp = 0xbaaaaffc
115 state.regs.ebp = 0
116 state.regs.cc_dep1 = 0x202 # default eflags
117 state.regs.cc_op = 0 # OP_COPY
118 state.regs.cc_dep2 = 0 # doesn't matter
119 state.regs.cc_ndep = 0 # doesn't matter
120
121 # fpu values
122 state.regs.mm0 = 0
123 state.regs.mm1 = 0
124 state.regs.mm2 = 0
125 state.regs.mm3 = 0
126 state.regs.mm4 = 0
127 state.regs.mm5 = 0
128 state.regs.mm6 = 0
129 state.regs.mm7 = 0
130 state.regs.fpu_tags = 0
131 state.regs.fpround = 0
132 state.regs.fc3210 = 0x0300
133 state.regs.ftop = 0
134
135 # sse values
136 state.regs.sseround = 0
137 state.regs.xmm0 = 0
138 state.regs.xmm1 = 0
139 state.regs.xmm2 = 0
140 state.regs.xmm3 = 0
141 state.regs.xmm4 = 0
142 state.regs.xmm5 = 0
143 state.regs.xmm6 = 0
144 state.regs.xmm7 = 0
145
146 # segmentation registers
147 state.regs.ds = 0
148 state.regs.es = 0
149 state.regs.fs = 0
150 state.regs.gs = 0
151 state.regs.ss = 0
152 state.regs.cs = 0
153
154 return state
155
[end of angr/simos/cgc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/angr/simos/cgc.py b/angr/simos/cgc.py
--- a/angr/simos/cgc.py
+++ b/angr/simos/cgc.py
@@ -24,9 +24,10 @@
**kwargs)
# pylint: disable=arguments-differ
- def state_blank(self, flag_page=None, **kwargs):
+ def state_blank(self, flag_page=None, allocate_stack_page_count=0x100, **kwargs):
"""
- :param flag_page: Flag page content, either a string or a list of BV8s
+ :param flag_page: Flag page content, either a string or a list of BV8s
+ :param allocate_stack_page_count: Number of pages to pre-allocate for stack
"""
# default stack as specified in the cgc abi
if kwargs.get('stack_end', None) is None:
@@ -36,9 +37,9 @@
s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name
- # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug
+ # pre-grow the stack. unsure if this is strictly required or just a hack around a compiler bug
if hasattr(s.memory, 'allocate_stack_pages'):
- s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)
+ s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, allocate_stack_page_count * 0x1000)
# Map the flag page
if o.ABSTRACT_MEMORY not in s.options:
| {"golden_diff": "diff --git a/angr/simos/cgc.py b/angr/simos/cgc.py\n--- a/angr/simos/cgc.py\n+++ b/angr/simos/cgc.py\n@@ -24,9 +24,10 @@\n **kwargs)\n \n # pylint: disable=arguments-differ\n- def state_blank(self, flag_page=None, **kwargs):\n+ def state_blank(self, flag_page=None, allocate_stack_page_count=0x100, **kwargs):\n \"\"\"\n- :param flag_page: Flag page content, either a string or a list of BV8s\n+ :param flag_page: Flag page content, either a string or a list of BV8s\n+ :param allocate_stack_page_count: Number of pages to pre-allocate for stack\n \"\"\"\n # default stack as specified in the cgc abi\n if kwargs.get('stack_end', None) is None:\n@@ -36,9 +37,9 @@\n \n s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name\n \n- # pre-grow the stack by 20 pages. unsure if this is strictly required or just a hack around a compiler bug\n+ # pre-grow the stack. unsure if this is strictly required or just a hack around a compiler bug\n if hasattr(s.memory, 'allocate_stack_pages'):\n- s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)\n+ s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, allocate_stack_page_count * 0x1000)\n \n # Map the flag page\n if o.ABSTRACT_MEMORY not in s.options:\n", "issue": "SimSegfaultException due to collision of stack and heap when tracing a binary\n**Describe the bug.**\r\n\r\nWhen tracing a CGC binary using a PoV for it, a `SimSegfaultException` is raised due to the collision of stack and heap.\r\n\r\n**Environment Information.**\r\n\r\nPlatform: linux-x86_64\r\nPython version: 3.8.5 (default, Jan 27 2021, 15:41:15) \r\n[GCC 9.3.0]\r\n######## angr #########\r\nPython found it in /home/dnivra/angr-dev/angr/angr\r\nPip version angr 9.0.gitrolling\r\nGit info:\r\n Current commit 762becbf9c66d4798b8c23cfa512a7f893e2bcf9 from branch master\r\n Checked out from remote origin: https://github.com/angr/angr\r\n######## ailment #########\r\nPython found it in /home/dnivra/angr-dev/ailment/ailment\r\nPip version ailment 9.0.gitrolling\r\nGit info:\r\n Current commit 4e2bba6f0299d1eda6ae570ceabd91eb8a0c72be from branch master\r\n Checked out from remote origin: https://github.com/angr/ailment\r\n######## cle #########\r\nPython found it in /home/dnivra/angr-dev/cle/cle\r\nPip version cle 9.0.gitrolling\r\nGit info:\r\n Current commit 80dcd50abfaa70cbd5b2e360fe41b71406acbfb4 from branch master\r\n Checked out from remote origin: https://github.com/angr/cle\r\n######## pyvex #########\r\nPython found it in /home/dnivra/angr-dev/pyvex/pyvex\r\nPip version pyvex 9.0.gitrolling\r\nGit info:\r\n Current commit 969ec1f10d3e3b15407ee986052aa4b6f2e9df05 from branch master\r\n Checked out from remote origin: https://github.com/angr/pyvex\r\n######## claripy #########\r\nPython found it in /home/dnivra/angr-dev/claripy/claripy\r\nPip version claripy 9.0.gitrolling\r\nGit info:\r\n Current commit 34f31c487f7453f4666cd6fd1d529f417ff6ca08 from branch master\r\n Checked out from remote origin: https://github.com/angr/claripy\r\n######## archinfo #########\r\nPython found it in /home/dnivra/angr-dev/archinfo/archinfo\r\nPip version archinfo 9.0.gitrolling\r\nGit info:\r\n Current commit 437b194538ccb0bf118b4b674613b88832b0b342 from branch master\r\n Checked out from remote origin: https://github.com/angr/archinfo\r\n######## z3 #########\r\nPython found it in /home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3\r\nPip version z3-solver 4.8.9.0\r\nCouldn't find git info\r\n######## unicorn #########\r\nPython found it in 
/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn\r\nPip version unicorn 1.0.2rc4\r\nCouldn't find git info\r\n######### Native Module Info ##########\r\nangr: <CDLL '/home/dnivra/angr-dev/angr/angr/lib/angr_native.so', handle 3375570 at 0x7f938b7ee220>\r\nunicorn: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/unicorn/lib/libunicorn.so', handle 2aa9f70 at 0x7f93913291c0>\r\npyvex: <cffi.api._make_ffi_library.<locals>.FFILibrary object at 0x7f9391f39130>\r\nz3: <CDLL '/home/dnivra/.virtualenvs/angr-dev/lib/python3.8/site-packages/z3/lib/libz3.so', handle 2d099d0 at 0x7f938ea3a340>\r\n\r\n**To Reproduce.**\r\n\r\n[CROMU_00004-stack-heap-collision-repro.zip](https://github.com/angr/angr/files/6481681/CROMU_00004-stack-heap-collision-repro.zip) has script, input and binary to reproduce the issue. `SimSegfaultException` is raised in the `receive` syscall after block 0x804b87b is executed for first time in the VEX engine(13th overall execution). It takes about 10 minutes for execution to reach this location\r\n\r\n.\n", "before_files": [{"content": "import logging\n\nimport claripy\nfrom cle import BackedCGC\n\nfrom ..procedures import SIM_LIBRARIES as L\nfrom ..state_plugins import SimActionData\nfrom .. import sim_options as o\nfrom .userland import SimUserland\n\n_l = logging.getLogger(name=__name__)\n\n\nclass SimCGC(SimUserland):\n \"\"\"\n Environment configuration for the CGC DECREE platform\n \"\"\"\n\n def __init__(self, project, **kwargs):\n super(SimCGC, self).__init__(project,\n syscall_library=L['cgcabi'],\n syscall_addr_alignment=1,\n name=\"CGC\",\n **kwargs)\n\n # pylint: disable=arguments-differ\n def state_blank(self, flag_page=None, **kwargs):\n \"\"\"\n :param flag_page: Flag page content, either a string or a list of BV8s\n \"\"\"\n # default stack as specified in the cgc abi\n if kwargs.get('stack_end', None) is None:\n kwargs['stack_end'] = 0xbaaab000\n if kwargs.get('stack_size', None) is None:\n kwargs['stack_size'] = 1024*1024*8\n\n s = super(SimCGC, self).state_blank(**kwargs) # pylint:disable=invalid-name\n\n # pre-grow the stack by 20 pages. 
unsure if this is strictly required or just a hack around a compiler bug\n if hasattr(s.memory, 'allocate_stack_pages'):\n s.memory.allocate_stack_pages(kwargs['stack_end'] - 1, 20 * 0x1000)\n\n # Map the flag page\n if o.ABSTRACT_MEMORY not in s.options:\n s.memory.map_region(0x4347c000, 4096, 1)\n\n # Create the CGC plugin\n s.get_plugin('cgc')\n\n # Set up the flag page\n if flag_page is None:\n flag_page = [s.solver.BVS(\"cgc-flag-byte-%d\" % i, 8, key=('flag', i), eternal=True) for i in range(0x1000)]\n elif type(flag_page) is bytes:\n flag_page = [s.solver.BVV(c, 8) for c in flag_page]\n elif type(flag_page) is list:\n pass\n else:\n raise ValueError(\"Bad flag page: expected None, bytestring, or list, but got %s\" % type(flag_page))\n\n s.cgc.flag_bytes = flag_page\n if s.mode != 'static':\n s.memory.store(0x4347c000, claripy.Concat(*s.cgc.flag_bytes), priv=True)\n\n # set up the address for concrete transmits\n s.unicorn.transmit_addr = self.syscall_from_number(2).addr\n\n s.libc.max_str_len = 1000000\n s.libc.max_strtol_len = 10\n s.libc.max_memcpy_size = 0x100000\n s.libc.max_buffer_size = 0x100000\n\n return s\n\n def state_entry(self, add_options=None, **kwargs):\n if isinstance(self.project.loader.main_object, BackedCGC):\n kwargs['permissions_backer'] = (True, self.project.loader.main_object.permissions_map)\n if add_options is None:\n add_options = set()\n add_options.add(o.ZERO_FILL_UNCONSTRAINED_MEMORY)\n\n state = super(SimCGC, self).state_entry(add_options=add_options, **kwargs)\n\n if isinstance(self.project.loader.main_object, BackedCGC):\n # Update allocation base\n state.cgc.allocation_base = self.project.loader.main_object.current_allocation_base\n\n # Do all the writes\n writes_backer = self.project.loader.main_object.writes_backer\n stdout = state.posix.get_fd(1)\n pos = 0\n for size in writes_backer:\n if size == 0:\n continue\n str_to_write = state.solver.BVS('file_write', size*8)\n a = SimActionData(\n state,\n 'file_1_0',\n 'write',\n addr=claripy.BVV(pos, state.arch.bits),\n data=str_to_write,\n size=size)\n stdout.write_data(str_to_write)\n state.history.add_action(a)\n pos += size\n\n else:\n # Set CGC-specific variables\n state.regs.eax = 0\n state.regs.ebx = 0\n state.regs.ecx = 0x4347c000\n state.regs.edx = 0\n state.regs.edi = 0\n state.regs.esi = 0\n state.regs.esp = 0xbaaaaffc\n state.regs.ebp = 0\n state.regs.cc_dep1 = 0x202 # default eflags\n state.regs.cc_op = 0 # OP_COPY\n state.regs.cc_dep2 = 0 # doesn't matter\n state.regs.cc_ndep = 0 # doesn't matter\n\n # fpu values\n state.regs.mm0 = 0\n state.regs.mm1 = 0\n state.regs.mm2 = 0\n state.regs.mm3 = 0\n state.regs.mm4 = 0\n state.regs.mm5 = 0\n state.regs.mm6 = 0\n state.regs.mm7 = 0\n state.regs.fpu_tags = 0\n state.regs.fpround = 0\n state.regs.fc3210 = 0x0300\n state.regs.ftop = 0\n\n # sse values\n state.regs.sseround = 0\n state.regs.xmm0 = 0\n state.regs.xmm1 = 0\n state.regs.xmm2 = 0\n state.regs.xmm3 = 0\n state.regs.xmm4 = 0\n state.regs.xmm5 = 0\n state.regs.xmm6 = 0\n state.regs.xmm7 = 0\n\n # segmentation registers\n state.regs.ds = 0\n state.regs.es = 0\n state.regs.fs = 0\n state.regs.gs = 0\n state.regs.ss = 0\n state.regs.cs = 0\n\n return state\n", "path": "angr/simos/cgc.py"}]} | 3,519 | 382 |
gh_patches_debug_41739 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1044 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When uploading a profile image fails, the user needs to go back and click on Edit Profile again to upload a new image
When a user uploads a profile picture and the upload fails with the error message "Please use an image that 1280 x 960 pixels or smaller", they need to go back to the profile and click on Edit Profile again in order to upload a new picture. Just clicking on Choose Picture doesn't do it.
</issue>
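Editor's sketch, not part of the original report: besides the form flow described above, the golden diff below also switches the resize code from `io.StringIO` to `io.BytesIO`. A minimal, self-contained illustration of why that matters; the generated JPEG stands in for the uploaded file, and nothing here is CiviWiki-specific.

```python
import io
from PIL import Image

# Stand-in for an uploaded profile picture: raw JPEG bytes, as Django hands them over.
upload = io.BytesIO()
Image.new("RGB", (1280, 960), (255, 255, 255)).save(upload, "JPEG")
raw = upload.getvalue()

# Image bytes must be wrapped in io.BytesIO before PIL can open them;
# io.StringIO(raw) raises TypeError because it only accepts str.
profile_image = Image.open(io.BytesIO(raw))
profile_image.thumbnail((40, 40), Image.ANTIALIAS)
```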
<code>
[start of project/accounts/models.py]
1 from django.contrib.auth.models import AbstractUser
2 import os
3 import io
4 from django.core.files.storage import default_storage
5 from django.conf import settings
6 from django.db import models
7 from PIL import Image, ImageOps
8 from django.core.files.uploadedfile import InMemoryUploadedFile
9
10 from taggit.managers import TaggableManager
11
12 from api.models.category import Category
13 from common.utils import PathAndRename
14
15
16 class User(AbstractUser):
17 """
18 A new custom User model for any functionality needed in the future. Extending AbstractUser
19 allows for adding new fields to the user model as needed.
20 """
21
22 class Meta:
23 db_table = "users"
24
25
26 # Image manipulation constants
27 PROFILE_IMG_SIZE = (171, 171)
28 PROFILE_IMG_THUMB_SIZE = (40, 40)
29 WHITE_BG = (255, 255, 255)
30
31
32 class ProfileManager(models.Manager):
33 def summarize(self, profile):
34 from api.models.civi import Civi
35
36 data = {
37 "username": profile.user.username,
38 "first_name": profile.first_name,
39 "last_name": profile.last_name,
40 "about_me": profile.about_me,
41 "history": [
42 Civi.objects.serialize(c)
43 for c in Civi.objects.filter(author_id=profile.id).order_by("-created")
44 ],
45 "profile_image": profile.profile_image_url,
46 "followers": self.followers(profile),
47 "following": self.following(profile),
48 }
49 return data
50
51 def chip_summarize(self, profile):
52 data = {
53 "username": profile.user.username,
54 "first_name": profile.first_name,
55 "last_name": profile.last_name,
56 "profile_image": profile.profile_image_url,
57 }
58 return data
59
60 def card_summarize(self, profile, request_profile):
61 # Length at which to truncate 'about me' text
62 about_me_truncate_length = 150
63
64 # If 'about me' text is longer than 150 characters... add elipsis (truncate)
65 ellipsis_if_too_long = (
66 "" if len(profile.about_me) <= about_me_truncate_length else "..."
67 )
68
69 data = {
70 "id": profile.user.id,
71 "username": profile.user.username,
72 "first_name": profile.first_name,
73 "last_name": profile.last_name,
74 "about_me": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,
75 "profile_image": profile.profile_image_url,
76 "follow_state": True
77 if profile in request_profile.following.all()
78 else False,
79 "request_profile": request_profile.first_name,
80 }
81 return data
82
83 def followers(self, profile):
84 return [self.chip_summarize(follower) for follower in profile.followers.all()]
85
86 def following(self, profile):
87 return [self.chip_summarize(following) for following in profile.following.all()]
88
89
90 profile_upload_path = PathAndRename("")
91
92
93 class Profile(models.Model):
94 user = models.ForeignKey(User, on_delete=models.CASCADE)
95 first_name = models.CharField(max_length=63, blank=False)
96 last_name = models.CharField(max_length=63, blank=False)
97 about_me = models.CharField(max_length=511, blank=True)
98
99 categories = models.ManyToManyField(
100 Category, related_name="user_categories", symmetrical=False
101 )
102 tags = TaggableManager()
103
104 followers = models.ManyToManyField(
105 "self", related_name="follower", symmetrical=False
106 )
107 following = models.ManyToManyField(
108 "self", related_name="followings", symmetrical=False
109 )
110
111 is_verified = models.BooleanField(default=False)
112 full_profile = models.BooleanField(default=False)
113
114 objects = ProfileManager()
115 profile_image = models.ImageField(
116 upload_to=profile_upload_path, blank=True, null=True
117 )
118 profile_image_thumb = models.ImageField(
119 upload_to=profile_upload_path, blank=True, null=True
120 )
121
122 @property
123 def full_name(self):
124 """Returns the person's full name."""
125
126 return f"{self.first_name} {self.last_name}"
127
128 @property
129 def profile_image_url(self):
130 """Return placeholder profile image if user didn't upload one"""
131
132 if self.profile_image:
133 file_exists = default_storage.exists(
134 os.path.join(settings.MEDIA_ROOT, self.profile_image.name)
135 )
136 if file_exists:
137 return self.profile_image.url
138
139 return "/static/img/no_image_md.png"
140
141 @property
142 def profile_image_thumb_url(self):
143 """Return placeholder profile image if user didn't upload one"""
144
145 if self.profile_image_thumb:
146 file_exists = default_storage.exists(
147 os.path.join(settings.MEDIA_ROOT, self.profile_image_thumb.name)
148 )
149 if file_exists:
150 return self.profile_image_thumb.url
151
152 return "/static/img/no_image_md.png"
153
154 def __init__(self, *args, **kwargs):
155 super(Profile, self).__init__(*args, **kwargs)
156
157 def save(self, *args, **kwargs):
158 """ Image crop/resize and thumbnail creation """
159
160 # New Profile image --
161 if self.profile_image:
162 self.resize_profile_image()
163
164 self.full_profile = self.is_full_profile()
165
166 super(Profile, self).save(*args, **kwargs)
167
168 def resize_profile_image(self):
169 """
170 Resizes and crops the user uploaded image and creates a thumbnail version of it
171 """
172 profile_image_field = self.profile_image
173 image_file = io.StringIO(profile_image_field.read())
174 profile_image = Image.open(image_file)
175 profile_image.load()
176
177 # Resize image
178 profile_image = ImageOps.fit(
179 profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)
180 )
181
182 # Convert to JPG image format with white background
183 if profile_image.mode not in ("L", "RGB"):
184 white_bg_img = Image.new("RGB", PROFILE_IMG_SIZE, WHITE_BG)
185 white_bg_img.paste(profile_image, mask=profile_image.split()[3])
186 profile_image = white_bg_img
187
188 # Save new cropped image
189 tmp_image_file = io.StringIO()
190 profile_image.save(tmp_image_file, "JPEG", quality=90)
191 tmp_image_file.seek(0)
192 self.profile_image = InMemoryUploadedFile(
193 tmp_image_file,
194 "ImageField",
195 self.profile_image.name,
196 "image/jpeg",
197 tmp_image_file.len,
198 None,
199 )
200 # Make a Thumbnail Image for the new resized image
201 thumb_image = profile_image.copy()
202 thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)
203 tmp_image_file = io.StringIO()
204 thumb_image.save(tmp_image_file, "JPEG", quality=90)
205 tmp_image_file.seek(0)
206 self.profile_image_thumb = InMemoryUploadedFile(
207 tmp_image_file,
208 "ImageField",
209 self.profile_image.name,
210 "image/jpeg",
211 tmp_image_file.len,
212 None,
213 )
214
215 def is_full_profile(self):
216 if self.first_name and self.last_name:
217 return True
218 else:
219 return False
220
[end of project/accounts/models.py]
[start of project/accounts/admin.py]
1 from django.contrib import admin
2
3 # Register your models here.
4
[end of project/accounts/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/project/accounts/admin.py b/project/accounts/admin.py
--- a/project/accounts/admin.py
+++ b/project/accounts/admin.py
@@ -1,3 +1,6 @@
from django.contrib import admin
+from .models import User
+
# Register your models here.
+admin.site.register(User)
diff --git a/project/accounts/models.py b/project/accounts/models.py
--- a/project/accounts/models.py
+++ b/project/accounts/models.py
@@ -71,7 +71,8 @@
"username": profile.user.username,
"first_name": profile.first_name,
"last_name": profile.last_name,
- "about_me": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,
+ "about_me": profile.about_me[:about_me_truncate_length]
+ + ellipsis_if_too_long,
"profile_image": profile.profile_image_url,
"follow_state": True
if profile in request_profile.following.all()
@@ -155,7 +156,7 @@
super(Profile, self).__init__(*args, **kwargs)
def save(self, *args, **kwargs):
- """ Image crop/resize and thumbnail creation """
+ """Image crop/resize and thumbnail creation"""
# New Profile image --
if self.profile_image:
@@ -169,11 +170,7 @@
"""
Resizes and crops the user uploaded image and creates a thumbnail version of it
"""
- profile_image_field = self.profile_image
- image_file = io.StringIO(profile_image_field.read())
- profile_image = Image.open(image_file)
- profile_image.load()
-
+ profile_image = Image.open(self.profile_image)
# Resize image
profile_image = ImageOps.fit(
profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)
@@ -186,7 +183,7 @@
profile_image = white_bg_img
# Save new cropped image
- tmp_image_file = io.StringIO()
+ tmp_image_file = io.BytesIO()
profile_image.save(tmp_image_file, "JPEG", quality=90)
tmp_image_file.seek(0)
self.profile_image = InMemoryUploadedFile(
@@ -194,21 +191,23 @@
"ImageField",
self.profile_image.name,
"image/jpeg",
- tmp_image_file.len,
+ profile_image.tell(),
None,
)
# Make a Thumbnail Image for the new resized image
thumb_image = profile_image.copy()
+
thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)
- tmp_image_file = io.StringIO()
- thumb_image.save(tmp_image_file, "JPEG", quality=90)
- tmp_image_file.seek(0)
+ tmp_thumb_file = io.BytesIO()
+ thumb_image.save(tmp_thumb_file, "JPEG", quality=90)
+ tmp_thumb_file.seek(0)
+
self.profile_image_thumb = InMemoryUploadedFile(
- tmp_image_file,
+ tmp_thumb_file,
"ImageField",
self.profile_image.name,
"image/jpeg",
- tmp_image_file.len,
+ thumb_image.tell(),
None,
)
| {"golden_diff": "diff --git a/project/accounts/admin.py b/project/accounts/admin.py\n--- a/project/accounts/admin.py\n+++ b/project/accounts/admin.py\n@@ -1,3 +1,6 @@\n from django.contrib import admin\n+from .models import User\n+\n \n # Register your models here.\n+admin.site.register(User)\ndiff --git a/project/accounts/models.py b/project/accounts/models.py\n--- a/project/accounts/models.py\n+++ b/project/accounts/models.py\n@@ -71,7 +71,8 @@\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n- \"about_me\": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,\n+ \"about_me\": profile.about_me[:about_me_truncate_length]\n+ + ellipsis_if_too_long,\n \"profile_image\": profile.profile_image_url,\n \"follow_state\": True\n if profile in request_profile.following.all()\n@@ -155,7 +156,7 @@\n super(Profile, self).__init__(*args, **kwargs)\n \n def save(self, *args, **kwargs):\n- \"\"\" Image crop/resize and thumbnail creation \"\"\"\n+ \"\"\"Image crop/resize and thumbnail creation\"\"\"\n \n # New Profile image --\n if self.profile_image:\n@@ -169,11 +170,7 @@\n \"\"\"\n Resizes and crops the user uploaded image and creates a thumbnail version of it\n \"\"\"\n- profile_image_field = self.profile_image\n- image_file = io.StringIO(profile_image_field.read())\n- profile_image = Image.open(image_file)\n- profile_image.load()\n-\n+ profile_image = Image.open(self.profile_image)\n # Resize image\n profile_image = ImageOps.fit(\n profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)\n@@ -186,7 +183,7 @@\n profile_image = white_bg_img\n \n # Save new cropped image\n- tmp_image_file = io.StringIO()\n+ tmp_image_file = io.BytesIO()\n profile_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image = InMemoryUploadedFile(\n@@ -194,21 +191,23 @@\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n- tmp_image_file.len,\n+ profile_image.tell(),\n None,\n )\n # Make a Thumbnail Image for the new resized image\n thumb_image = profile_image.copy()\n+\n thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)\n- tmp_image_file = io.StringIO()\n- thumb_image.save(tmp_image_file, \"JPEG\", quality=90)\n- tmp_image_file.seek(0)\n+ tmp_thumb_file = io.BytesIO()\n+ thumb_image.save(tmp_thumb_file, \"JPEG\", quality=90)\n+ tmp_thumb_file.seek(0)\n+\n self.profile_image_thumb = InMemoryUploadedFile(\n- tmp_image_file,\n+ tmp_thumb_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n- tmp_image_file.len,\n+ thumb_image.tell(),\n None,\n )\n", "issue": "When uploading a profile image failed then user needs to go back and click on Edit Profile again to upload new image\nWhen users uploading a profile picture and this failed with the error message \"Please use an image that 1280 x 960 pixels or smaller\" then users need to go back to the profile and click on Edit Profile again in order to upload a new picture. 
Just clicking on Choose Picture doesn't do it.\n", "before_files": [{"content": "from django.contrib.auth.models import AbstractUser\nimport os\nimport io\nfrom django.core.files.storage import default_storage\nfrom django.conf import settings\nfrom django.db import models\nfrom PIL import Image, ImageOps\nfrom django.core.files.uploadedfile import InMemoryUploadedFile\n\nfrom taggit.managers import TaggableManager\n\nfrom api.models.category import Category\nfrom common.utils import PathAndRename\n\n\nclass User(AbstractUser):\n \"\"\"\n A new custom User model for any functionality needed in the future. Extending AbstractUser\n allows for adding new fields to the user model as needed.\n \"\"\"\n\n class Meta:\n db_table = \"users\"\n\n\n# Image manipulation constants\nPROFILE_IMG_SIZE = (171, 171)\nPROFILE_IMG_THUMB_SIZE = (40, 40)\nWHITE_BG = (255, 255, 255)\n\n\nclass ProfileManager(models.Manager):\n def summarize(self, profile):\n from api.models.civi import Civi\n\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me,\n \"history\": [\n Civi.objects.serialize(c)\n for c in Civi.objects.filter(author_id=profile.id).order_by(\"-created\")\n ],\n \"profile_image\": profile.profile_image_url,\n \"followers\": self.followers(profile),\n \"following\": self.following(profile),\n }\n return data\n\n def chip_summarize(self, profile):\n data = {\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"profile_image\": profile.profile_image_url,\n }\n return data\n\n def card_summarize(self, profile, request_profile):\n # Length at which to truncate 'about me' text\n about_me_truncate_length = 150\n\n # If 'about me' text is longer than 150 characters... 
add elipsis (truncate)\n ellipsis_if_too_long = (\n \"\" if len(profile.about_me) <= about_me_truncate_length else \"...\"\n )\n\n data = {\n \"id\": profile.user.id,\n \"username\": profile.user.username,\n \"first_name\": profile.first_name,\n \"last_name\": profile.last_name,\n \"about_me\": profile.about_me[:about_me_truncate_length] + ellipsis_if_too_long,\n \"profile_image\": profile.profile_image_url,\n \"follow_state\": True\n if profile in request_profile.following.all()\n else False,\n \"request_profile\": request_profile.first_name,\n }\n return data\n\n def followers(self, profile):\n return [self.chip_summarize(follower) for follower in profile.followers.all()]\n\n def following(self, profile):\n return [self.chip_summarize(following) for following in profile.following.all()]\n\n\nprofile_upload_path = PathAndRename(\"\")\n\n\nclass Profile(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n first_name = models.CharField(max_length=63, blank=False)\n last_name = models.CharField(max_length=63, blank=False)\n about_me = models.CharField(max_length=511, blank=True)\n\n categories = models.ManyToManyField(\n Category, related_name=\"user_categories\", symmetrical=False\n )\n tags = TaggableManager()\n\n followers = models.ManyToManyField(\n \"self\", related_name=\"follower\", symmetrical=False\n )\n following = models.ManyToManyField(\n \"self\", related_name=\"followings\", symmetrical=False\n )\n\n is_verified = models.BooleanField(default=False)\n full_profile = models.BooleanField(default=False)\n\n objects = ProfileManager()\n profile_image = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n profile_image_thumb = models.ImageField(\n upload_to=profile_upload_path, blank=True, null=True\n )\n\n @property\n def full_name(self):\n \"\"\"Returns the person's full name.\"\"\"\n\n return f\"{self.first_name} {self.last_name}\"\n\n @property\n def profile_image_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image.name)\n )\n if file_exists:\n return self.profile_image.url\n\n return \"/static/img/no_image_md.png\"\n\n @property\n def profile_image_thumb_url(self):\n \"\"\"Return placeholder profile image if user didn't upload one\"\"\"\n\n if self.profile_image_thumb:\n file_exists = default_storage.exists(\n os.path.join(settings.MEDIA_ROOT, self.profile_image_thumb.name)\n )\n if file_exists:\n return self.profile_image_thumb.url\n\n return \"/static/img/no_image_md.png\"\n\n def __init__(self, *args, **kwargs):\n super(Profile, self).__init__(*args, **kwargs)\n\n def save(self, *args, **kwargs):\n \"\"\" Image crop/resize and thumbnail creation \"\"\"\n\n # New Profile image --\n if self.profile_image:\n self.resize_profile_image()\n\n self.full_profile = self.is_full_profile()\n\n super(Profile, self).save(*args, **kwargs)\n\n def resize_profile_image(self):\n \"\"\"\n Resizes and crops the user uploaded image and creates a thumbnail version of it\n \"\"\"\n profile_image_field = self.profile_image\n image_file = io.StringIO(profile_image_field.read())\n profile_image = Image.open(image_file)\n profile_image.load()\n\n # Resize image\n profile_image = ImageOps.fit(\n profile_image, PROFILE_IMG_SIZE, Image.ANTIALIAS, centering=(0.5, 0.5)\n )\n\n # Convert to JPG image format with white background\n if profile_image.mode not in (\"L\", \"RGB\"):\n white_bg_img = 
Image.new(\"RGB\", PROFILE_IMG_SIZE, WHITE_BG)\n white_bg_img.paste(profile_image, mask=profile_image.split()[3])\n profile_image = white_bg_img\n\n # Save new cropped image\n tmp_image_file = io.StringIO()\n profile_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image = InMemoryUploadedFile(\n tmp_image_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n tmp_image_file.len,\n None,\n )\n # Make a Thumbnail Image for the new resized image\n thumb_image = profile_image.copy()\n thumb_image.thumbnail(PROFILE_IMG_THUMB_SIZE, resample=Image.ANTIALIAS)\n tmp_image_file = io.StringIO()\n thumb_image.save(tmp_image_file, \"JPEG\", quality=90)\n tmp_image_file.seek(0)\n self.profile_image_thumb = InMemoryUploadedFile(\n tmp_image_file,\n \"ImageField\",\n self.profile_image.name,\n \"image/jpeg\",\n tmp_image_file.len,\n None,\n )\n\n def is_full_profile(self):\n if self.first_name and self.last_name:\n return True\n else:\n return False\n", "path": "project/accounts/models.py"}, {"content": "from django.contrib import admin\n\n# Register your models here.\n", "path": "project/accounts/admin.py"}]} | 2,753 | 712 |
gh_patches_debug_35944 | rasdani/github-patches | git_diff | pytorch__text-146 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Escape csv header lines
I haven't been able to see how to skip the first CSV line when loading from a file with a header. I could of course preprocess the file, but it'd be nice if there were an option on TabularDataset to tell it to skip the first line.
</issue>
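Editor's sketch, not in the original request: this is how the option would be used once the `skip_header` keyword from the golden diff below exists; the field objects and the `train.csv` file name are placeholders for a CSV whose first line is a header.

```python
from torchtext import data

TEXT = data.Field()
LABEL = data.Field(sequential=False)

# skip_header=True drops the first line of the file instead of parsing the
# column names as if they were a real example.
train = data.TabularDataset(
    path="train.csv",
    format="csv",
    fields=[("text", TEXT), ("label", LABEL)],
    skip_header=True,
)
```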
<code>
[start of torchtext/data/dataset.py]
1 import io
2 import os
3 import zipfile
4 import tarfile
5
6 import torch.utils.data
7 from six.moves import urllib
8
9 from .example import Example
10
11
12 class Dataset(torch.utils.data.Dataset):
13 """Defines a dataset composed of Examples along with its Fields.
14
15 Attributes:
16 sort_key (callable): A key to use for sorting dataset examples for batching
17 together examples with similar lengths to minimize padding.
18 examples (list(Example)): The examples in this dataset.
19 fields: A dictionary containing the name of each column together with
20 its corresponding Field object. Two columns with the same Field
21 object will share a vocabulary.
22 fields (dict[str, Field]): Contains the name of each column or field, together
23 with the corresponding Field object. Two fields with the same Field object
24 will have a shared vocabulary.
25 """
26 sort_key = None
27
28 def __init__(self, examples, fields, filter_pred=None):
29 """Create a dataset from a list of Examples and Fields.
30
31 Arguments:
32 examples: List of Examples.
33 fields (List(tuple(str, Field))): The Fields to use in this tuple. The
34 string is a field name, and the Field is the associated field.
35 filter_pred (callable or None): Use only examples for which
36 filter_pred(example) is True, or use all examples if None.
37 Default is None.
38 """
39 if filter_pred is not None:
40 examples = list(filter(filter_pred, examples))
41 self.examples = examples
42 self.fields = dict(fields)
43
44 @classmethod
45 def splits(cls, path, train=None, validation=None, test=None, **kwargs):
46 """Create Dataset objects for multiple splits of a dataset.
47
48 Arguments:
49 path (str): Common prefix of the splits' file paths.
50 train (str): Suffix to add to path for the train set, or None for no
51 train set. Default is None.
52 validation (str): Suffix to add to path for the validation set, or None
53 for no validation set. Default is None.
54 test (str): Suffix to add to path for the test set, or None for no test
55 set. Default is None.
56 Remaining keyword arguments: Passed to the constructor of the
57 Dataset (sub)class being used.
58
59 Returns:
60 split_datasets (tuple(Dataset)): Datasets for train, validation, and
61 test splits in that order, if provided.
62 """
63 train_data = None if train is None else cls(path + train, **kwargs)
64 val_data = None if validation is None else cls(path + validation,
65 **kwargs)
66 test_data = None if test is None else cls(path + test, **kwargs)
67 return tuple(d for d in (train_data, val_data, test_data)
68 if d is not None)
69
70 def __getitem__(self, i):
71 return self.examples[i]
72
73 def __len__(self):
74 try:
75 return len(self.examples)
76 except TypeError:
77 return 2**32
78
79 def __iter__(self):
80 for x in self.examples:
81 yield x
82
83 def __getattr__(self, attr):
84 if attr in self.fields:
85 for x in self.examples:
86 yield getattr(x, attr)
87
88 @classmethod
89 def download(cls, root, check=None):
90 """Download and unzip an online archive (.zip, .gz, or .tgz).
91
92 Arguments:
93 root (str): Folder to download data to.
94 check (str or None): Folder whose existence indicates
95 that the dataset has already been downloaded, or
96 None to check the existence of root.
97
98 Returns:
99 dataset_path (str): Path to extracted dataset.
100 """
101 path = os.path.join(root, cls.name)
102 check = path if check is None else check
103 if not os.path.isdir(check):
104 for url in cls.urls:
105 filename = os.path.basename(url)
106 zpath = os.path.join(path, filename)
107 if not os.path.isfile(zpath):
108 if not os.path.exists(os.path.dirname(zpath)):
109 os.makedirs(os.path.dirname(zpath))
110 print('downloading {}'.format(filename))
111 urllib.request.urlretrieve(url, zpath)
112 ext = os.path.splitext(filename)[-1]
113 if ext == '.zip':
114 with zipfile.ZipFile(zpath, 'r') as zfile:
115 print('extracting')
116 zfile.extractall(path)
117 elif ext in ['.gz', '.tgz']:
118 with tarfile.open(zpath, 'r:gz') as tar:
119 dirs = [member for member in tar.getmembers()]
120 tar.extractall(path=path, members=dirs)
121 return os.path.join(path, cls.dirname)
122
123
124 class TabularDataset(Dataset):
125 """Defines a Dataset of columns stored in CSV, TSV, or JSON format."""
126
127 def __init__(self, path, format, fields, **kwargs):
128 """Create a TabularDataset given a path, file format, and field list.
129
130 Arguments:
131 path (str): Path to the data file.
132 format (str): The format of the data file. One of "CSV", "TSV", or
133 "JSON" (case-insensitive).
134 fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and
135 TSV formats, list of tuples of (name, field). The list should be in
136 the same order as the columns in the CSV or TSV file, while tuples of
137 (name, None) represent columns that will be ignored. For JSON format,
138 dictionary whose keys are the JSON keys and whose values are tuples of
139 (name, field). This allows the user to rename columns from their JSON key
140 names and also enables selecting a subset of columns to load
141 (since JSON keys not present in the input dictionary are ignored).
142 """
143 make_example = {
144 'json': Example.fromJSON, 'dict': Example.fromdict,
145 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]
146
147 with io.open(os.path.expanduser(path), encoding="utf8") as f:
148 examples = [make_example(line, fields) for line in f]
149
150 if make_example in (Example.fromdict, Example.fromJSON):
151 fields, field_dict = [], fields
152 for field in field_dict.values():
153 if isinstance(field, list):
154 fields.extend(field)
155 else:
156 fields.append(field)
157
158 super(TabularDataset, self).__init__(examples, fields, **kwargs)
159
[end of torchtext/data/dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py
--- a/torchtext/data/dataset.py
+++ b/torchtext/data/dataset.py
@@ -124,14 +124,14 @@
class TabularDataset(Dataset):
"""Defines a Dataset of columns stored in CSV, TSV, or JSON format."""
- def __init__(self, path, format, fields, **kwargs):
+ def __init__(self, path, format, fields, skip_header=False, **kwargs):
"""Create a TabularDataset given a path, file format, and field list.
Arguments:
path (str): Path to the data file.
format (str): The format of the data file. One of "CSV", "TSV", or
"JSON" (case-insensitive).
- fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and
+ fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]: For CSV and
TSV formats, list of tuples of (name, field). The list should be in
the same order as the columns in the CSV or TSV file, while tuples of
(name, None) represent columns that will be ignored. For JSON format,
@@ -139,12 +139,15 @@
(name, field). This allows the user to rename columns from their JSON key
names and also enables selecting a subset of columns to load
(since JSON keys not present in the input dictionary are ignored).
+ skip_header (bool): Whether to skip the first line of the input file.
"""
make_example = {
'json': Example.fromJSON, 'dict': Example.fromdict,
'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]
with io.open(os.path.expanduser(path), encoding="utf8") as f:
+ if skip_header:
+ next(f)
examples = [make_example(line, fields) for line in f]
if make_example in (Example.fromdict, Example.fromJSON):
| {"golden_diff": "diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py\n--- a/torchtext/data/dataset.py\n+++ b/torchtext/data/dataset.py\n@@ -124,14 +124,14 @@\n class TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n \n- def __init__(self, path, format, fields, **kwargs):\n+ def __init__(self, path, format, fields, skip_header=False, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n \n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n- fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and\n+ fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]: For CSV and\n TSV formats, list of tuples of (name, field). The list should be in\n the same order as the columns in the CSV or TSV file, while tuples of\n (name, None) represent columns that will be ignored. For JSON format,\n@@ -139,12 +139,15 @@\n (name, field). This allows the user to rename columns from their JSON key\n names and also enables selecting a subset of columns to load\n (since JSON keys not present in the input dictionary are ignored).\n+ skip_header (bool): Whether to skip the first line of the input file.\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]\n \n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n+ if skip_header:\n+ next(f)\n examples = [make_example(line, fields) for line in f]\n \n if make_example in (Example.fromdict, Example.fromJSON):\n", "issue": "Escape csv header lines\nI haven't been able to see how to skip first csv line in case of loading from a file with header. I could of course preprocess the file, but it'd be nice if there was an option to TabularDataset to tell it to skip the first line.\n", "before_files": [{"content": "import io\nimport os\nimport zipfile\nimport tarfile\n\nimport torch.utils.data\nfrom six.moves import urllib\n\nfrom .example import Example\n\n\nclass Dataset(torch.utils.data.Dataset):\n \"\"\"Defines a dataset composed of Examples along with its Fields.\n\n Attributes:\n sort_key (callable): A key to use for sorting dataset examples for batching\n together examples with similar lengths to minimize padding.\n examples (list(Example)): The examples in this dataset.\n fields: A dictionary containing the name of each column together with\n its corresponding Field object. Two columns with the same Field\n object will share a vocabulary.\n fields (dict[str, Field]): Contains the name of each column or field, together\n with the corresponding Field object. Two fields with the same Field object\n will have a shared vocabulary.\n \"\"\"\n sort_key = None\n\n def __init__(self, examples, fields, filter_pred=None):\n \"\"\"Create a dataset from a list of Examples and Fields.\n\n Arguments:\n examples: List of Examples.\n fields (List(tuple(str, Field))): The Fields to use in this tuple. 
The\n string is a field name, and the Field is the associated field.\n filter_pred (callable or None): Use only examples for which\n filter_pred(example) is True, or use all examples if None.\n Default is None.\n \"\"\"\n if filter_pred is not None:\n examples = list(filter(filter_pred, examples))\n self.examples = examples\n self.fields = dict(fields)\n\n @classmethod\n def splits(cls, path, train=None, validation=None, test=None, **kwargs):\n \"\"\"Create Dataset objects for multiple splits of a dataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths.\n train (str): Suffix to add to path for the train set, or None for no\n train set. Default is None.\n validation (str): Suffix to add to path for the validation set, or None\n for no validation set. Default is None.\n test (str): Suffix to add to path for the test set, or None for no test\n set. Default is None.\n Remaining keyword arguments: Passed to the constructor of the\n Dataset (sub)class being used.\n\n Returns:\n split_datasets (tuple(Dataset)): Datasets for train, validation, and\n test splits in that order, if provided.\n \"\"\"\n train_data = None if train is None else cls(path + train, **kwargs)\n val_data = None if validation is None else cls(path + validation,\n **kwargs)\n test_data = None if test is None else cls(path + test, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n def __getitem__(self, i):\n return self.examples[i]\n\n def __len__(self):\n try:\n return len(self.examples)\n except TypeError:\n return 2**32\n\n def __iter__(self):\n for x in self.examples:\n yield x\n\n def __getattr__(self, attr):\n if attr in self.fields:\n for x in self.examples:\n yield getattr(x, attr)\n\n @classmethod\n def download(cls, root, check=None):\n \"\"\"Download and unzip an online archive (.zip, .gz, or .tgz).\n\n Arguments:\n root (str): Folder to download data to.\n check (str or None): Folder whose existence indicates\n that the dataset has already been downloaded, or\n None to check the existence of root.\n\n Returns:\n dataset_path (str): Path to extracted dataset.\n \"\"\"\n path = os.path.join(root, cls.name)\n check = path if check is None else check\n if not os.path.isdir(check):\n for url in cls.urls:\n filename = os.path.basename(url)\n zpath = os.path.join(path, filename)\n if not os.path.isfile(zpath):\n if not os.path.exists(os.path.dirname(zpath)):\n os.makedirs(os.path.dirname(zpath))\n print('downloading {}'.format(filename))\n urllib.request.urlretrieve(url, zpath)\n ext = os.path.splitext(filename)[-1]\n if ext == '.zip':\n with zipfile.ZipFile(zpath, 'r') as zfile:\n print('extracting')\n zfile.extractall(path)\n elif ext in ['.gz', '.tgz']:\n with tarfile.open(zpath, 'r:gz') as tar:\n dirs = [member for member in tar.getmembers()]\n tar.extractall(path=path, members=dirs)\n return os.path.join(path, cls.dirname)\n\n\nclass TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n\n def __init__(self, path, format, fields, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n\n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n fields (list(tuple(str, Field)) or dict[str, (name, Field)]: For CSV and\n TSV formats, list of tuples of (name, field). 
The list should be in\n the same order as the columns in the CSV or TSV file, while tuples of\n (name, None) represent columns that will be ignored. For JSON format,\n dictionary whose keys are the JSON keys and whose values are tuples of\n (name, field). This allows the user to rename columns from their JSON key\n names and also enables selecting a subset of columns to load\n (since JSON keys not present in the input dictionary are ignored).\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]\n\n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n examples = [make_example(line, fields) for line in f]\n\n if make_example in (Example.fromdict, Example.fromJSON):\n fields, field_dict = [], fields\n for field in field_dict.values():\n if isinstance(field, list):\n fields.extend(field)\n else:\n fields.append(field)\n\n super(TabularDataset, self).__init__(examples, fields, **kwargs)\n", "path": "torchtext/data/dataset.py"}]} | 2,360 | 470 |
gh_patches_debug_39382 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1208 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Participate in iminuit v2.0 beta?
Dear pyhf team,
I am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython as the tool to wrap C++ Minuit2 with pybind11, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed on top of this PR:
scikit-hep/iminuit#502
Switching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests of iminuit v1.x. However, I would like to use this opportunity to finally remove interfaces that have been successively marked as deprecated in versions 1.3 to 1.5.
Therefore, my two questions to you:
* Did you take note of the deprecation warnings in iminuit and did you keep up with the interface changes so far?
* Are you interested in trying out a Beta release of v2.0 to work out any possible bugs in the new version before the release?
Best regards,
Hans, iminuit maintainer
</issue>
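Editor's sketch, not part of Hans' message: the interface change behind this request is visible in the golden diff below, which drops `Minuit.from_array_func` in favor of the iminuit 2.x constructor plus property assignment. A toy illustration, in which the objective, starting values, limits and step sizes are placeholders rather than pyhf code:

```python
import iminuit

def objective(pars):
    # toy least-squares function of a single array-like parameter vector
    return (pars[0] - 1.0) ** 2 + (pars[1] + 2.0) ** 2

# iminuit 2.x style: configure through properties instead of constructor keywords.
minuit = iminuit.Minuit(objective, [0.0, 0.0])
minuit.errordef = 1.0                   # least-squares convention
minuit.errors = [0.1, 0.1]              # initial step sizes
minuit.limits = [(-10, 10), (-10, 10)]
minuit.fixed = [False, False]
minuit.migrad()
print(minuit.values, minuit.valid)
```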
<code>
[start of src/pyhf/optimize/opt_minuit.py]
1 """Minuit Optimizer Class."""
2 from .. import default_backend, exceptions
3 from .mixins import OptimizerMixin
4 import scipy
5 import iminuit
6
7
8 class minuit_optimizer(OptimizerMixin):
9 """
10 Optimizer that uses iminuit.Minuit.migrad.
11 """
12
13 __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']
14
15 def __init__(self, *args, **kwargs):
16 """
17 Create MINUIT Optimizer.
18
19 .. note::
20
21 ``errordef`` should be 1.0 for a least-squares cost function and 0.5
22 for negative log-likelihood function. See page 37 of
23 http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes
24 called ``UP`` in the ``MINUIT`` docs.
25
26
27 Args:
28 errordef (:obj:`float`): See minuit docs. Default is 1.0.
29 steps (:obj:`int`): Number of steps for the bounds. Default is 1000.
30 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.
31 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
32 """
33 self.name = 'minuit'
34 self.errordef = kwargs.pop('errordef', 1)
35 self.steps = kwargs.pop('steps', 1000)
36 self.strategy = kwargs.pop('strategy', None)
37 self.tolerance = kwargs.pop('tolerance', 0.1)
38 super().__init__(*args, **kwargs)
39
40 def _get_minimizer(
41 self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False
42 ):
43
44 step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]
45 fixed_vals = fixed_vals or []
46 # Minuit wants True/False for each parameter
47 fixed_bools = [False] * len(init_pars)
48 for index, val in fixed_vals:
49 fixed_bools[index] = True
50 init_pars[index] = val
51 step_sizes[index] = 0.0
52
53 # Minuit requires jac=callable
54 if do_grad:
55 wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731
56 jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731
57 else:
58 wrapped_objective = objective_and_grad
59 jac = None
60
61 kwargs = dict(
62 fcn=wrapped_objective,
63 grad=jac,
64 start=init_pars,
65 error=step_sizes,
66 limit=init_bounds,
67 fix=fixed_bools,
68 print_level=self.verbose,
69 errordef=self.errordef,
70 )
71 return iminuit.Minuit.from_array_func(**kwargs)
72
73 def _minimize(
74 self,
75 minimizer,
76 func,
77 x0,
78 do_grad=False,
79 bounds=None,
80 fixed_vals=None,
81 return_uncertainties=False,
82 options={},
83 ):
84
85 """
86 Same signature as :func:`scipy.optimize.minimize`.
87
88 Note: an additional `minuit` is injected into the fitresult to get the
89 underlying minimizer.
90
91 Minimizer Options:
92 maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
93 return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.
94 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.
95
96 Returns:
97 fitresult (scipy.optimize.OptimizeResult): the fit result
98 """
99 maxiter = options.pop('maxiter', self.maxiter)
100 return_uncertainties = options.pop('return_uncertainties', False)
101 # 0: Fast, user-provided gradient
102 # 1: Default, no user-provided gradient
103 strategy = options.pop(
104 'strategy', self.strategy if self.strategy else not do_grad
105 )
106 tolerance = options.pop('tolerance', self.tolerance)
107 if options:
108 raise exceptions.Unsupported(
109 f"Unsupported options were passed in: {list(options.keys())}."
110 )
111
112 minimizer.strategy = strategy
113 minimizer.tol = tolerance
114 minimizer.migrad(ncall=maxiter)
115 # Following lines below come from:
116 # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121
117 message = "Optimization terminated successfully."
118 if not minimizer.valid:
119 message = "Optimization failed."
120 fmin = minimizer.fmin
121 if fmin.has_reached_call_limit:
122 message += " Call limit was reached."
123 if fmin.is_above_max_edm:
124 message += " Estimated distance to minimum too large."
125
126 n = len(x0)
127 hess_inv = default_backend.ones((n, n))
128 if minimizer.valid:
129 # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.
130 minimizer.hesse()
131 hess_inv = minimizer.np_covariance()
132
133 unc = None
134 if return_uncertainties:
135 unc = minimizer.np_errors()
136
137 return scipy.optimize.OptimizeResult(
138 x=minimizer.np_values(),
139 unc=unc,
140 success=minimizer.valid,
141 fun=minimizer.fval,
142 hess_inv=hess_inv,
143 message=message,
144 nfev=minimizer.ncalls_total,
145 njev=minimizer.ngrads_total,
146 minuit=minimizer,
147 )
148
[end of src/pyhf/optimize/opt_minuit.py]
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
11 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.5.3'],
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['lint'] = sorted({'flake8', 'black'})
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=6.0',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'graphviz',
44 'jsonpatch',
45 ]
46 )
47 )
48 extras_require['docs'] = sorted(
49 {
50 'sphinx>=3.1.2',
51 'sphinxcontrib-bibtex~=1.0',
52 'sphinx-click',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'ipywidgets',
56 'sphinx-issues',
57 'sphinx-copybutton>0.2.9',
58 }
59 )
60 extras_require['develop'] = sorted(
61 set(
62 extras_require['docs']
63 + extras_require['lint']
64 + extras_require['test']
65 + [
66 'nbdime',
67 'bump2version',
68 'ipython',
69 'pre-commit',
70 'check-manifest',
71 'codemetapy>=0.3.4',
72 'twine',
73 ]
74 )
75 )
76 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
77
78
79 setup(
80 extras_require=extras_require,
81 use_scm_version=lambda: {'local_scheme': lambda version: ''},
82 )
83
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
'torch': ['torch~=1.2'],
'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes
- 'minuit': ['iminuit~=1.5.3'],
+ 'minuit': ['iminuit~=2.1'],
}
extras_require['backends'] = sorted(
set(
diff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py
--- a/src/pyhf/optimize/opt_minuit.py
+++ b/src/pyhf/optimize/opt_minuit.py
@@ -58,17 +58,13 @@
wrapped_objective = objective_and_grad
jac = None
- kwargs = dict(
- fcn=wrapped_objective,
- grad=jac,
- start=init_pars,
- error=step_sizes,
- limit=init_bounds,
- fix=fixed_bools,
- print_level=self.verbose,
- errordef=self.errordef,
- )
- return iminuit.Minuit.from_array_func(**kwargs)
+ minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)
+ minuit.errors = step_sizes
+ minuit.limits = init_bounds
+ minuit.fixed = fixed_bools
+ minuit.print_level = self.verbose
+ minuit.errordef = self.errordef
+ return minuit
def _minimize(
self,
@@ -113,7 +109,7 @@
minimizer.tol = tolerance
minimizer.migrad(ncall=maxiter)
# Following lines below come from:
- # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121
+ # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130
message = "Optimization terminated successfully."
if not minimizer.valid:
message = "Optimization failed."
@@ -128,20 +124,20 @@
if minimizer.valid:
# Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.
minimizer.hesse()
- hess_inv = minimizer.np_covariance()
+ hess_inv = minimizer.covariance
unc = None
if return_uncertainties:
- unc = minimizer.np_errors()
+ unc = minimizer.errors
return scipy.optimize.OptimizeResult(
- x=minimizer.np_values(),
+ x=minimizer.values,
unc=unc,
success=minimizer.valid,
fun=minimizer.fval,
hess_inv=hess_inv,
message=message,
- nfev=minimizer.ncalls_total,
- njev=minimizer.ngrads_total,
+ nfev=minimizer.nfcn,
+ njev=minimizer.ngrad,
minuit=minimizer,
)
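The patch above replaces the iminuit 1.x `Minuit.from_array_func(**kwargs)` call with direct construction plus attribute assignment, which is the iminuit 2.x style. A minimal standalone sketch of that 2.x pattern (my illustration, assuming `iminuit>=2` is installed; the toy objective below is not pyhf code):

```python
import iminuit

def objective(pars):
    # Toy least-squares objective used only for illustration.
    return (pars[0] - 1.0) ** 2 + (pars[1] - 2.0) ** 2

minuit = iminuit.Minuit(objective, [0.0, 0.0])  # start values are passed positionally
minuit.errors = [0.1, 0.1]          # replaces error=... in from_array_func
minuit.limits = [(-5, 5), (-5, 5)]  # replaces limit=...
minuit.fixed = [False, False]       # replaces fix=...
minuit.errordef = 1                 # least-squares convention, as in the patch
minuit.migrad()
print(minuit.values, minuit.valid)
```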
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n- 'minuit': ['iminuit~=1.5.3'],\n+ 'minuit': ['iminuit~=2.1'],\n }\n extras_require['backends'] = sorted(\n set(\ndiff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py\n--- a/src/pyhf/optimize/opt_minuit.py\n+++ b/src/pyhf/optimize/opt_minuit.py\n@@ -58,17 +58,13 @@\n wrapped_objective = objective_and_grad\n jac = None\n \n- kwargs = dict(\n- fcn=wrapped_objective,\n- grad=jac,\n- start=init_pars,\n- error=step_sizes,\n- limit=init_bounds,\n- fix=fixed_bools,\n- print_level=self.verbose,\n- errordef=self.errordef,\n- )\n- return iminuit.Minuit.from_array_func(**kwargs)\n+ minuit = iminuit.Minuit(wrapped_objective, init_pars, grad=jac)\n+ minuit.errors = step_sizes\n+ minuit.limits = init_bounds\n+ minuit.fixed = fixed_bools\n+ minuit.print_level = self.verbose\n+ minuit.errordef = self.errordef\n+ return minuit\n \n def _minimize(\n self,\n@@ -113,7 +109,7 @@\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n- # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n+ # https://github.com/scikit-hep/iminuit/blob/23bad7697e39d363f259ca8349684df939b1b2e6/src/iminuit/_minimize.py#L111-L130\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n@@ -128,20 +124,20 @@\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n- hess_inv = minimizer.np_covariance()\n+ hess_inv = minimizer.covariance\n \n unc = None\n if return_uncertainties:\n- unc = minimizer.np_errors()\n+ unc = minimizer.errors\n \n return scipy.optimize.OptimizeResult(\n- x=minimizer.np_values(),\n+ x=minimizer.values,\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n- nfev=minimizer.ncalls_total,\n- njev=minimizer.ngrads_total,\n+ nfev=minimizer.nfcn,\n+ njev=minimizer.ngrad,\n minuit=minimizer,\n )\n", "issue": " Participate in iminuit v2.0 beta?\nDear pyhf team,\r\n\r\nI am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython as the tool to wrap C++ Minuit2 with pybind11, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed on top of this PR:\r\nscikit-hep/iminuit#502\r\n\r\nSwitching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests of iminuit-v1.x. However, I would like to use this opportunity to finally remove interface that has been successively marked as deprecated in versions 1.3 to 1.5.\r\n\r\nTherefore my two question to you:\r\n\r\n * Did you take note of the deprecation warnings in iminuit and did you keep up with the interface changes so far?\r\n * Are you interested in trying out a Beta release of v2.0 to work out any possible bugs in the new version before the release?\r\n\r\nBest regards,\r\nHans, iminuit maintainer\n", "before_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. 
import default_backend, exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses iminuit.Minuit.migrad.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create MINUIT Optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n kwargs = dict(\n fcn=wrapped_objective,\n grad=jac,\n start=init_pars,\n error=step_sizes,\n limit=init_bounds,\n fix=fixed_bools,\n print_level=self.verbose,\n errordef=self.errordef,\n )\n return iminuit.Minuit.from_array_func(**kwargs)\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n return_uncertainties=False,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. 
Default is to configure in response to `do_grad`.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n return_uncertainties = options.pop('return_uncertainties', False)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n n = len(x0)\n hess_inv = default_backend.ones((n, n))\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.np_covariance()\n\n unc = None\n if return_uncertainties:\n unc = minimizer.np_errors()\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.np_values(),\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.ncalls_total,\n njev=minimizer.ngrads_total,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.5.3'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n 
extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 3,161 | 806 |
gh_patches_debug_9121 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1053 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Org Admin: Dataset management page is broken
Log in as a sysadmin user.
Go to:
http://data.hdx.rwlabs.org/organization/bulk_process/ocha-fiss-geneva
</issue>
<code>
[start of ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py]
1 import logging
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6 class HDXOrgFormPlugin(plugins.SingletonPlugin, lib_plugins.DefaultOrganizationForm):
7 plugins.implements(plugins.IConfigurer, inherit=False)
8 plugins.implements(plugins.IRoutes, inherit=True)
9 plugins.implements(plugins.IGroupForm, inherit=False)
10 plugins.implements(plugins.ITemplateHelpers, inherit=False)
11
12 num_times_new_template_called = 0
13 num_times_read_template_called = 0
14 num_times_edit_template_called = 0
15 num_times_search_template_called = 0
16 num_times_history_template_called = 0
17 num_times_package_form_called = 0
18 num_times_check_data_dict_called = 0
19 num_times_setup_template_variables_called = 0
20
21 def update_config(self, config):
22 tk.add_template_directory(config, 'templates')
23
24 def get_helpers(self):
25 return {}
26
27 def is_fallback(self):
28 return False
29
30 def group_types(self):
31 return ['organization']
32
33 def _modify_group_schema(self, schema):
34 schema.update({
35 'description':[tk.get_validator('not_empty')],
36 'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')],
37 })
38 return schema
39
40 def form_to_db_schema(self):
41 schema = super(HDXOrgFormPlugin, self).form_to_db_schema()
42 schema = self._modify_group_schema(schema)
43 return schema
44
45 # def check_data_dict(self, data_dict):
46 # return super(HDXOrgFormPlugin, self).check_data_dict(self, data_dict)
47
48 def db_to_form_schema(self):
49 # There's a bug in dictionary validation when form isn't present
50 if tk.request.urlvars['action'] == 'index' or tk.request.urlvars['action'] == 'edit' or tk.request.urlvars['action'] == 'new':
51 schema = super(HDXOrgFormPlugin, self).form_to_db_schema()
52 schema.update({'description':[tk.get_validator('not_empty')] })
53 schema.update({'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')]})
54 return schema
55 else:
56 return None
57
58 def before_map(self, map):
59 map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',
60 ckan_icon='list')
61 return map
62
[end of ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py
--- a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py
+++ b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py
@@ -58,4 +58,6 @@
def before_map(self, map):
map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',
ckan_icon='list')
+ map.connect('organization_bulk_process', '/organization/bulk_process/{org_id}', controller='organization', action='index')
+ map.connect('organization_bulk_process_no_id', '/organization/bulk_process', controller='organization', action='index')
return map
| {"golden_diff": "diff --git a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n--- a/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n+++ b/ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py\n@@ -58,4 +58,6 @@\n def before_map(self, map):\n map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',\n ckan_icon='list')\n+ map.connect('organization_bulk_process', '/organization/bulk_process/{org_id}', controller='organization', action='index')\n+ map.connect('organization_bulk_process_no_id', '/organization/bulk_process', controller='organization', action='index')\n return map\n", "issue": "Org Admin: Dataset management page is broken\nLog in as a sysadmin user.\nGo to:\nhttp://data.hdx.rwlabs.org/organization/bulk_process/ocha-fiss-geneva\n\n", "before_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\nclass HDXOrgFormPlugin(plugins.SingletonPlugin, lib_plugins.DefaultOrganizationForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IGroupForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n\n num_times_new_template_called = 0\n num_times_read_template_called = 0\n num_times_edit_template_called = 0\n num_times_search_template_called = 0\n num_times_history_template_called = 0\n num_times_package_form_called = 0\n num_times_check_data_dict_called = 0\n num_times_setup_template_variables_called = 0\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def is_fallback(self):\n return False\n\n def group_types(self):\n return ['organization']\n\n def _modify_group_schema(self, schema):\n schema.update({\n 'description':[tk.get_validator('not_empty')],\n 'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')],\n })\n return schema\n\n def form_to_db_schema(self):\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema = self._modify_group_schema(schema)\n return schema\n \n# def check_data_dict(self, data_dict):\n# return super(HDXOrgFormPlugin, self).check_data_dict(self, data_dict)\n \n def db_to_form_schema(self):\n # There's a bug in dictionary validation when form isn't present\n if tk.request.urlvars['action'] == 'index' or tk.request.urlvars['action'] == 'edit' or tk.request.urlvars['action'] == 'new':\n schema = super(HDXOrgFormPlugin, self).form_to_db_schema()\n schema.update({'description':[tk.get_validator('not_empty')] })\n schema.update({'org_url':[tk.get_validator('not_missing'), tk.get_converter('convert_to_extras')]})\n return schema\n else:\n return None\n\n def before_map(self, map):\n map.connect('user_dashboard', '/dashboard', controller='ckanext.hdx_orgs.dashboard:DashboardController', action='dashboard',\n ckan_icon='list')\n return map\n", "path": "ckanext-hdx_orgs/ckanext/hdx_orgs/plugin.py"}]} | 1,250 | 194 |
gh_patches_debug_5383 | rasdani/github-patches | git_diff | quantumlib__Cirq-606 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Engine measurements are bytes but simulator measurements are bools
This causes code that works with the simulator to fail when given engine results. We should make these consistent.
Example code that works with simulator results but not engine results:
```python
a = np.zeros([repetition_count], dtype=np.bool)
a ^= results.measurements['x'][:, 0]
a ^= results.measurements['y'][:, 0]
```
</issue>
<code>
[start of cirq/google/programs.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Dict, Iterable, Sequence, Tuple, TYPE_CHECKING
15
16 import numpy as np
17
18 from cirq.api.google.v1 import operations_pb2
19 from cirq.google import xmon_gates, xmon_gate_ext
20 from cirq.google.xmon_device import XmonDevice
21 from cirq.schedules import Schedule, ScheduledOperation
22 from cirq.value import Timestamp
23
24 if TYPE_CHECKING:
25 from typing import Optional # pylint: disable=unused-import
26
27
28 def schedule_to_proto(schedule: Schedule) -> Iterable[operations_pb2.Operation]:
29 """Convert a schedule into protobufs.
30
31 Args:
32 schedule: The schedule to convert to protobufs. Must contain only gates
33 that can be cast to xmon gates.
34
35 Yields:
36 operations_pb2.Operation
37 """
38 last_time_picos = None # type: Optional[int]
39 for so in schedule.scheduled_operations:
40 gate = xmon_gate_ext.cast(xmon_gates.XmonGate, so.operation.gate)
41 op = gate.to_proto(*so.operation.qubits)
42 time_picos = so.time.raw_picos()
43 if last_time_picos is None:
44 op.incremental_delay_picoseconds = time_picos
45 else:
46 op.incremental_delay_picoseconds = time_picos - last_time_picos
47 last_time_picos = time_picos
48 yield op
49
50
51 def schedule_from_proto(
52 device: XmonDevice,
53 ops: Iterable[operations_pb2.Operation],
54 ) -> Schedule:
55 """Convert protobufs into a Schedule for the given device."""
56 scheduled_ops = []
57 last_time_picos = 0
58 for op in ops:
59 time_picos = last_time_picos + op.incremental_delay_picoseconds
60 last_time_picos = time_picos
61 xmon_op = xmon_gates.XmonGate.from_proto(op)
62 scheduled_ops.append(ScheduledOperation.op_at_on(
63 operation=xmon_op,
64 time=Timestamp(picos=time_picos),
65 device=device,
66 ))
67 return Schedule(device, scheduled_ops)
68
69
70 def pack_results(measurements: Sequence[Tuple[str, np.ndarray]]) -> bytes:
71 """Pack measurement results into a byte string.
72
73 Args:
74 measurements: A sequence of tuples, one for each measurement, consisting
75 of a string key and an array of boolean data. The data should be
76 a 2-D array indexed by (repetition, qubit_index). All data for all
77 measurements must have the same number of repetitions.
78
79 Returns:
80 Packed bytes, as described in the unpack_results docstring below.
81
82 Raises:
83 ValueError if the measurement data do not have the compatible shapes.
84 """
85 if not measurements:
86 return b''
87
88 shapes = [(key, np.shape(data)) for key, data in measurements]
89 if not all(len(shape) == 2 for _, shape in shapes):
90 raise ValueError("Expected 2-D data: shapes={}".format(shapes))
91
92 reps = shapes[0][1][0]
93 if not all(shape[0] == reps for _, shape in shapes):
94 raise ValueError(
95 "Expected same reps for all keys: shapes={}".format(shapes))
96
97 bits = np.hstack(np.asarray(data, dtype=bool) for _, data in measurements)
98 bits = bits.reshape(-1)
99
100 # Pad length to multiple of 8 if needed.
101 remainder = len(bits) % 8
102 if remainder:
103 bits = np.pad(bits, (0, 8 - remainder), 'constant')
104
105 # Pack in little-endian bit order.
106 bits = bits.reshape((-1, 8))[:, ::-1]
107 byte_arr = np.packbits(bits, axis=1).reshape(-1)
108
109 return byte_arr.tobytes()
110
111
112 def unpack_results(
113 data: bytes,
114 repetitions: int,
115 key_sizes: Sequence[Tuple[str, int]]
116 ) -> Dict[str, np.ndarray]:
117 """Unpack data from a bitstring into individual measurement results.
118
119 Args:
120 data: Packed measurement results, in the form <rep0><rep1>...
121 where each repetition is <key0_0>..<key0_{size0-1}><key1_0>...
122 with bits packed in little-endian order in each byte.
123 repetitions: number of repetitions.
124 key_sizes: Keys and sizes of the measurements in the data.
125
126 Returns:
127 Dict mapping measurement key to a 2D array of boolean results. Each
128 array has shape (repetitions, size) with size for that measurement.
129 """
130 bits_per_rep = sum(size for _, size in key_sizes)
131 total_bits = repetitions * bits_per_rep
132
133 byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))
134 bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)
135 bits = bits[:total_bits].reshape((repetitions, bits_per_rep))
136
137 results = {}
138 ofs = 0
139 for key, size in key_sizes:
140 results[key] = bits[:, ofs:ofs + size]
141 ofs += size
142
143 return results
144
[end of cirq/google/programs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/google/programs.py b/cirq/google/programs.py
--- a/cirq/google/programs.py
+++ b/cirq/google/programs.py
@@ -131,7 +131,7 @@
total_bits = repetitions * bits_per_rep
byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))
- bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)
+ bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1).astype(bool)
bits = bits[:total_bits].reshape((repetitions, bits_per_rep))
results = {}
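An editorial aside on the one-line change above: casting the unpacked bits to `bool` makes `unpack_results` return the same dtype the simulator produces. A small self-contained sketch of why that matters for user code like the snippet in the issue (assuming the measurement columns hold 0/1 values, whether stored as bool or as byte-valued uint8):

```python
import numpy as np

def xor_first_columns(measurements, repetition_count):
    """XOR the first columns of keys 'x' and 'y', tolerating bool or byte-valued arrays."""
    a = np.zeros([repetition_count], dtype=bool)
    a ^= measurements['x'][:, 0].astype(bool)  # no-op for bool input, converts 0/1 bytes otherwise
    a ^= measurements['y'][:, 0].astype(bool)
    return a

# Hypothetical engine-style data (uint8), shaped like results.measurements in the issue snippet.
fake = {'x': np.array([[1], [0], [1]], dtype=np.uint8),
        'y': np.array([[0], [0], [1]], dtype=np.uint8)}
print(xor_first_columns(fake, 3))  # [ True False False]
```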
| {"golden_diff": "diff --git a/cirq/google/programs.py b/cirq/google/programs.py\n--- a/cirq/google/programs.py\n+++ b/cirq/google/programs.py\n@@ -131,7 +131,7 @@\n total_bits = repetitions * bits_per_rep\n \n byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))\n- bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)\n+ bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1).astype(bool)\n bits = bits[:total_bits].reshape((repetitions, bits_per_rep))\n \n results = {}\n", "issue": "Engine measurements are bytes but simulator measurements are bools\nThis causes code that works with the simulator to fail when given engine results. We should make these consistent.\r\n\r\nExample code that works with simulator results but not engine results:\r\n\r\n```python\r\na = np.zeros([repetition_count], dtype=np.bool)\r\na ^= results.measurements['x'][:, 0]\r\na ^= results.measurements['y'][:, 0]\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, Iterable, Sequence, Tuple, TYPE_CHECKING\n\nimport numpy as np\n\nfrom cirq.api.google.v1 import operations_pb2\nfrom cirq.google import xmon_gates, xmon_gate_ext\nfrom cirq.google.xmon_device import XmonDevice\nfrom cirq.schedules import Schedule, ScheduledOperation\nfrom cirq.value import Timestamp\n\nif TYPE_CHECKING:\n from typing import Optional # pylint: disable=unused-import\n\n\ndef schedule_to_proto(schedule: Schedule) -> Iterable[operations_pb2.Operation]:\n \"\"\"Convert a schedule into protobufs.\n\n Args:\n schedule: The schedule to convert to protobufs. Must contain only gates\n that can be cast to xmon gates.\n\n Yields:\n operations_pb2.Operation\n \"\"\"\n last_time_picos = None # type: Optional[int]\n for so in schedule.scheduled_operations:\n gate = xmon_gate_ext.cast(xmon_gates.XmonGate, so.operation.gate)\n op = gate.to_proto(*so.operation.qubits)\n time_picos = so.time.raw_picos()\n if last_time_picos is None:\n op.incremental_delay_picoseconds = time_picos\n else:\n op.incremental_delay_picoseconds = time_picos - last_time_picos\n last_time_picos = time_picos\n yield op\n\n\ndef schedule_from_proto(\n device: XmonDevice,\n ops: Iterable[operations_pb2.Operation],\n) -> Schedule:\n \"\"\"Convert protobufs into a Schedule for the given device.\"\"\"\n scheduled_ops = []\n last_time_picos = 0\n for op in ops:\n time_picos = last_time_picos + op.incremental_delay_picoseconds\n last_time_picos = time_picos\n xmon_op = xmon_gates.XmonGate.from_proto(op)\n scheduled_ops.append(ScheduledOperation.op_at_on(\n operation=xmon_op,\n time=Timestamp(picos=time_picos),\n device=device,\n ))\n return Schedule(device, scheduled_ops)\n\n\ndef pack_results(measurements: Sequence[Tuple[str, np.ndarray]]) -> bytes:\n \"\"\"Pack measurement results into a byte string.\n\n Args:\n measurements: A sequence of tuples, one for each measurement, consisting\n of a string key and an array of boolean data. 
The data should be\n a 2-D array indexed by (repetition, qubit_index). All data for all\n measurements must have the same number of repetitions.\n\n Returns:\n Packed bytes, as described in the unpack_results docstring below.\n\n Raises:\n ValueError if the measurement data do not have the compatible shapes.\n \"\"\"\n if not measurements:\n return b''\n\n shapes = [(key, np.shape(data)) for key, data in measurements]\n if not all(len(shape) == 2 for _, shape in shapes):\n raise ValueError(\"Expected 2-D data: shapes={}\".format(shapes))\n\n reps = shapes[0][1][0]\n if not all(shape[0] == reps for _, shape in shapes):\n raise ValueError(\n \"Expected same reps for all keys: shapes={}\".format(shapes))\n\n bits = np.hstack(np.asarray(data, dtype=bool) for _, data in measurements)\n bits = bits.reshape(-1)\n\n # Pad length to multiple of 8 if needed.\n remainder = len(bits) % 8\n if remainder:\n bits = np.pad(bits, (0, 8 - remainder), 'constant')\n\n # Pack in little-endian bit order.\n bits = bits.reshape((-1, 8))[:, ::-1]\n byte_arr = np.packbits(bits, axis=1).reshape(-1)\n\n return byte_arr.tobytes()\n\n\ndef unpack_results(\n data: bytes,\n repetitions: int,\n key_sizes: Sequence[Tuple[str, int]]\n) -> Dict[str, np.ndarray]:\n \"\"\"Unpack data from a bitstring into individual measurement results.\n\n Args:\n data: Packed measurement results, in the form <rep0><rep1>...\n where each repetition is <key0_0>..<key0_{size0-1}><key1_0>...\n with bits packed in little-endian order in each byte.\n repetitions: number of repetitions.\n key_sizes: Keys and sizes of the measurements in the data.\n\n Returns:\n Dict mapping measurement key to a 2D array of boolean results. Each\n array has shape (repetitions, size) with size for that measurement.\n \"\"\"\n bits_per_rep = sum(size for _, size in key_sizes)\n total_bits = repetitions * bits_per_rep\n\n byte_arr = np.frombuffer(data, dtype='uint8').reshape((len(data), 1))\n bits = np.unpackbits(byte_arr, axis=1)[:, ::-1].reshape(-1)\n bits = bits[:total_bits].reshape((repetitions, bits_per_rep))\n\n results = {}\n ofs = 0\n for key, size in key_sizes:\n results[key] = bits[:, ofs:ofs + size]\n ofs += size\n\n return results\n", "path": "cirq/google/programs.py"}]} | 2,185 | 154 |
gh_patches_debug_30606 | rasdani/github-patches | git_diff | streamlink__streamlink-5444 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.lrt: stream is reported Forbidden (though plays if opened manually)
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
When trying to open https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija or https://www.lrt.lt/mediateka/tiesiogiai/lrt-plius, an error is reported (see the log below).
However, if I try to manually pass the m3u8 URL mentioned in the error to `mpv`, like this (the URL taken from the log below, note the absence of the `%0A` at the end of it):
mpv https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre
...then, after a few ffmpeg errors and warnings, it does open.
The error started to appear a few days ago, worked perfectly before that (so, probably, they changed something at their side).
Thanks.
### Debug log
```text
[cli][debug] OS: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
[cli][debug] Python: 3.11.3
[cli][debug] Streamlink: 5.5.1
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.5.7
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.18.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] urllib3: 2.0.2
[cli][debug] websocket-client: 1.5.2
[cli][debug] Arguments:
[cli][debug] url=https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin lrt for URL https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija
[utils.l10n][debug] Language code: en_US
error: Unable to open URL: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre
(403 Client Error: Forbidden for url: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre%0A)
```
</issue>
<code>
[start of src/streamlink/plugins/lrt.py]
1 """
2 $description Live TV channels from LRT, a Lithuanian public, state-owned broadcaster.
3 $url lrt.lt
4 $type live
5 """
6
7 import logging
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.stream.hls import HLSStream
12
13
14 log = logging.getLogger(__name__)
15
16
17 @pluginmatcher(re.compile(
18 r"https?://(?:www\.)?lrt\.lt/mediateka/tiesiogiai/",
19 ))
20 class LRT(Plugin):
21 _video_id_re = re.compile(r"""var\svideo_id\s*=\s*["'](?P<video_id>\w+)["']""")
22 API_URL = "https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}"
23
24 def _get_streams(self):
25 page = self.session.http.get(self.url)
26 m = self._video_id_re.search(page.text)
27 if m:
28 video_id = m.group("video_id")
29 data = self.session.http.get(self.API_URL.format(video_id)).json()
30 hls_url = data["response"]["data"]["content"]
31
32 yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()
33 else:
34 log.debug("No match for video_id regex")
35
36
37 __plugin__ = LRT
38
[end of src/streamlink/plugins/lrt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/lrt.py b/src/streamlink/plugins/lrt.py
--- a/src/streamlink/plugins/lrt.py
+++ b/src/streamlink/plugins/lrt.py
@@ -4,34 +4,42 @@
$type live
"""
-import logging
import re
from streamlink.plugin import Plugin, pluginmatcher
+from streamlink.plugin.api import validate
from streamlink.stream.hls import HLSStream
-log = logging.getLogger(__name__)
-
-
@pluginmatcher(re.compile(
r"https?://(?:www\.)?lrt\.lt/mediateka/tiesiogiai/",
))
class LRT(Plugin):
- _video_id_re = re.compile(r"""var\svideo_id\s*=\s*["'](?P<video_id>\w+)["']""")
- API_URL = "https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}"
-
def _get_streams(self):
- page = self.session.http.get(self.url)
- m = self._video_id_re.search(page.text)
- if m:
- video_id = m.group("video_id")
- data = self.session.http.get(self.API_URL.format(video_id)).json()
- hls_url = data["response"]["data"]["content"]
-
- yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()
- else:
- log.debug("No match for video_id regex")
+ token_url = self.session.http.get(self.url, schema=validate.Schema(
+ re.compile(r"""var\s+tokenURL\s*=\s*(?P<q>["'])(?P<url>https://\S+)(?P=q)"""),
+ validate.none_or_all(validate.get("url")),
+ ))
+ if not token_url:
+ return
+
+ hls_url = self.session.http.get(token_url, schema=validate.Schema(
+ validate.parse_json(),
+ {
+ "response": {
+ "data": {
+ "content": validate.all(
+ str,
+ validate.transform(lambda url: url.strip()),
+ validate.url(path=validate.endswith(".m3u8")),
+ ),
+ },
+ },
+ },
+ validate.get(("response", "data", "content")),
+ ))
+
+ return HLSStream.parse_variant_playlist(self.session, hls_url)
__plugin__ = LRT
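A note on the `%0A` seen in the 403 error above (my reading of the patch, not text from the bug report): it is a percent-encoded trailing newline on the token URL returned by the site, which the added `validate.transform(lambda url: url.strip())` step removes before the playlist is requested. A tiny illustration with a made-up URL:

```python
from urllib.parse import quote

raw = "https://cdn.example/lrt_hd/master.m3u8?token123\n"  # hypothetical value with a trailing newline
print(quote(raw, safe=":/?=&"))  # ...master.m3u8?token123%0A  <- the %0A from the failing request
print(repr(raw.strip()))         # newline stripped, as url.strip() does in the patch above
```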
| {"golden_diff": "diff --git a/src/streamlink/plugins/lrt.py b/src/streamlink/plugins/lrt.py\n--- a/src/streamlink/plugins/lrt.py\n+++ b/src/streamlink/plugins/lrt.py\n@@ -4,34 +4,42 @@\n $type live\n \"\"\"\n \n-import logging\n import re\n \n from streamlink.plugin import Plugin, pluginmatcher\n+from streamlink.plugin.api import validate\n from streamlink.stream.hls import HLSStream\n \n \n-log = logging.getLogger(__name__)\n-\n-\n @pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?lrt\\.lt/mediateka/tiesiogiai/\",\n ))\n class LRT(Plugin):\n- _video_id_re = re.compile(r\"\"\"var\\svideo_id\\s*=\\s*[\"'](?P<video_id>\\w+)[\"']\"\"\")\n- API_URL = \"https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}\"\n-\n def _get_streams(self):\n- page = self.session.http.get(self.url)\n- m = self._video_id_re.search(page.text)\n- if m:\n- video_id = m.group(\"video_id\")\n- data = self.session.http.get(self.API_URL.format(video_id)).json()\n- hls_url = data[\"response\"][\"data\"][\"content\"]\n-\n- yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n- else:\n- log.debug(\"No match for video_id regex\")\n+ token_url = self.session.http.get(self.url, schema=validate.Schema(\n+ re.compile(r\"\"\"var\\s+tokenURL\\s*=\\s*(?P<q>[\"'])(?P<url>https://\\S+)(?P=q)\"\"\"),\n+ validate.none_or_all(validate.get(\"url\")),\n+ ))\n+ if not token_url:\n+ return\n+\n+ hls_url = self.session.http.get(token_url, schema=validate.Schema(\n+ validate.parse_json(),\n+ {\n+ \"response\": {\n+ \"data\": {\n+ \"content\": validate.all(\n+ str,\n+ validate.transform(lambda url: url.strip()),\n+ validate.url(path=validate.endswith(\".m3u8\")),\n+ ),\n+ },\n+ },\n+ },\n+ validate.get((\"response\", \"data\", \"content\")),\n+ ))\n+\n+ return HLSStream.parse_variant_playlist(self.session, hls_url)\n \n \n __plugin__ = LRT\n", "issue": "plugins.lrt: stream is reported Forbidden (though plays if opened manually)\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nWhen trying to open https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija or https://www.lrt.lt/mediateka/tiesiogiai/lrt-plius, an error is reported (see the log below).\r\n\r\nHowever, if I try to manually pass the m3u8 URL mentioned in the error to `mpv`, like this (the URL taken from the log below, note the absence of the `%0A` at the end of it):\r\n\r\n mpv https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre\r\n\r\n...then, after a few ffmpeg errors and warnings, it does open.\r\n\r\nThe error started to appear a few days ago, worked perfectly before that (so, probably, they changed something at their side).\r\n\r\nThanks.\n\n### Debug log\n\n```text\n[cli][debug] OS: Linux-5.15.0-76-generic-x86_64-with-glibc2.35\r\n[cli][debug] Python: 3.11.3\r\n[cli][debug] Streamlink: 5.5.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] 
isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.18.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] urllib3: 2.0.2\r\n[cli][debug] websocket-client: 1.5.2\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin lrt for URL https://www.lrt.lt/mediateka/tiesiogiai/lrt-televizija\r\n[utils.l10n][debug] Language code: en_US\r\nerror: Unable to open URL: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre\r\n (403 Client Error: Forbidden for url: https://af5dcb595ac445ab94d7da3af2ebb360.dlvr1.net/lrt_hd/master.m3u8?RxKc3mPWTMxjM1SuDkHZeW1Fw3jEx0oqyryrSQODiHo-Bs31UZVEBEPkLtrdbPKVKrlorJgTLUnSwqks_5Y1QrSQRYfbtlWddOuLrpnY9-kuyM_3QE_yBbqwzhre%0A)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels from LRT, a Lithuanian public, state-owned broadcaster.\n$url lrt.lt\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?lrt\\.lt/mediateka/tiesiogiai/\",\n))\nclass LRT(Plugin):\n _video_id_re = re.compile(r\"\"\"var\\svideo_id\\s*=\\s*[\"'](?P<video_id>\\w+)[\"']\"\"\")\n API_URL = \"https://www.lrt.lt/servisai/stream_url/live/get_live_url.php?channel={0}\"\n\n def _get_streams(self):\n page = self.session.http.get(self.url)\n m = self._video_id_re.search(page.text)\n if m:\n video_id = m.group(\"video_id\")\n data = self.session.http.get(self.API_URL.format(video_id)).json()\n hls_url = data[\"response\"][\"data\"][\"content\"]\n\n yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n else:\n log.debug(\"No match for video_id regex\")\n\n\n__plugin__ = LRT\n", "path": "src/streamlink/plugins/lrt.py"}]} | 1,940 | 525 |
gh_patches_debug_16743 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-20646 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[YourPorn] Domain changed to sxyprn.com
The YourPorn.sexy domain changed to sxyprn.com, which breaks the extractor; youtube-dl then falls back to the generic downloader, which gets the thumbnail instead of the video.
```
youtube-dl --verbose 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest'
[debug] System config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.04.07
[debug] Python version 3.7.3 (CPython) - Linux-5.0.7-arch1-1-ARCH-x86_64-with-arch-Arch-Linux
[debug] Proxy map: {}
[generic] 5cad9f8a26d51: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 5cad9f8a26d51: Downloading webpage
[generic] 5cad9f8a26d51: Extracting information
[generic] playlist Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn: Collected 12 video ids (downloading 12 of them)
[download] Downloading video 1 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s10.trafficdeposit.com//blog/vid/5a6534e03bbb4/5cab7d0bd2a31/vidthumb.mp4'
1-1.mp4
[download] 100% of 249.05KiB in 00:00
[download] Downloading video 2 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/5c7aa58c8c195/5caa34cd5927b/vidthumb.mp4'
1-2.mp4
[download] Downloading video 3 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s14.trafficdeposit.com//blog/vid/5ba53b584947a/5cacd0a1ad653/vidthumb.mp4'
1-3.mp4
[download] 100% of 123.50KiB in 00:00
[download] Downloading video 4 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ab2a85d635c4/5ca9a3d3675fb/vidthumb.mp4'
1-4.mp4
[download] 100% of 172.44KiB in 00:00
[download] Downloading video 5 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ba53b584947a/5cab801a05f9b/vidthumb.mp4'
1-5.mp4
[download] 100% of 118.19KiB in 00:00
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5b68a7a3c3d95/5cad071036ebd/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (6)-5cad9f8a26d51-6.mp4
[download] 100% of 135.05KiB in 00:00
[download] Downloading video 7 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s20.trafficdeposit.com//blog/vid/59b613729e694/5cada7480fcec/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (7)-5cad9f8a26d51-7.mp4
[download] 100% of 176.89KiB in 00:00
[download] Downloading video 8 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s8.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad9dce1958a/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (8)-5cad9f8a26d51-8.mp4
[download] 100% of 155.98KiB in 00:00
[download] Downloading video 9 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s15.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad966a5596a/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (9)-5cad9f8a26d51-9.mp4
[download] 100% of 380.13KiB in 00:01
[download] Downloading video 10 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s13.trafficdeposit.com//blog/vid/5ba53b584947a/5cad35c5ce1e4/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (10)-5cad9f8a26d51-10.mp4
[download] 100% of 254.63KiB in 00:00
[download] Downloading video 11 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5ba53b584947a/5cad2dd3ee706/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (11)-5cad9f8a26d51-11.mp4
[download] 100% of 272.29KiB in 00:00
[download] Downloading video 12 of 12
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/899334366634979328/5cad288db659a/vidthumb.mp4'
[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (12)-5cad9f8a26d51-12.mp4
[download] 100% of 279.43KiB in 00:00
[download] Finished downloading playlist: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn
```
</issue>
<code>
[start of youtube_dl/extractor/yourporn.py]
1 from __future__ import unicode_literals
2
3 from .common import InfoExtractor
4 from ..utils import (
5 parse_duration,
6 urljoin,
7 )
8
9
10 class YourPornIE(InfoExtractor):
11 _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)'
12 _TEST = {
13 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',
14 'md5': '6f8682b6464033d87acaa7a8ff0c092e',
15 'info_dict': {
16 'id': '57ffcb2e1179b',
17 'ext': 'mp4',
18 'title': 'md5:c9f43630bd968267672651ba905a7d35',
19 'thumbnail': r're:^https?://.*\.jpg$',
20 'duration': 165,
21 'age_limit': 18,
22 },
23 'params': {
24 'skip_download': True,
25 },
26 }
27
28 def _real_extract(self, url):
29 video_id = self._match_id(url)
30
31 webpage = self._download_webpage(url, video_id)
32
33 video_url = urljoin(url, self._parse_json(
34 self._search_regex(
35 r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
36 group='data'),
37 video_id)[video_id]).replace('/cdn/', '/cdn4/')
38
39 title = (self._search_regex(
40 r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
41 default=None) or self._og_search_description(webpage)).strip()
42 thumbnail = self._og_search_thumbnail(webpage)
43 duration = parse_duration(self._search_regex(
44 r'duration\s*:\s*<[^>]+>([\d:]+)', webpage, 'duration',
45 default=None))
46
47 return {
48 'id': video_id,
49 'url': video_url,
50 'title': title,
51 'thumbnail': thumbnail,
52 'duration': duration,
53 'age_limit': 18,
54 }
55
[end of youtube_dl/extractor/yourporn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py
--- a/youtube_dl/extractor/yourporn.py
+++ b/youtube_dl/extractor/yourporn.py
@@ -8,8 +8,8 @@
class YourPornIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)'
- _TEST = {
+ _VALID_URL = r'https?://(?:www\.)?(?:yourporn\.sexy|sxyprn\.com)/post/(?P<id>[^/?#&.]+)'
+ _TESTS = [{
'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',
'md5': '6f8682b6464033d87acaa7a8ff0c092e',
'info_dict': {
@@ -23,7 +23,10 @@
'params': {
'skip_download': True,
},
- }
+ }, {
+ 'url': 'https://sxyprn.com/post/57ffcb2e1179b.html',
+ 'only_matching': True,
+ }]
def _real_extract(self, url):
video_id = self._match_id(url)
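A quick self-contained check of the widened `_VALID_URL` pattern introduced above (illustrative only; the first URL comes from the existing test, the second from the issue's command line):

```python
import re

pattern = r'https?://(?:www\.)?(?:yourporn\.sexy|sxyprn\.com)/post/(?P<id>[^/?#&.]+)'
for url in ('https://yourporn.sexy/post/57ffcb2e1179b.html',
            'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest'):
    m = re.match(pattern, url)
    print(m.group('id') if m else 'no match')
# -> 57ffcb2e1179b
# -> 5cad9f8a26d51
```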
| {"golden_diff": "diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py\n--- a/youtube_dl/extractor/yourporn.py\n+++ b/youtube_dl/extractor/yourporn.py\n@@ -8,8 +8,8 @@\n \n \n class YourPornIE(InfoExtractor):\n- _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n- _TEST = {\n+ _VALID_URL = r'https?://(?:www\\.)?(?:yourporn\\.sexy|sxyprn\\.com)/post/(?P<id>[^/?#&.]+)'\n+ _TESTS = [{\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n@@ -23,7 +23,10 @@\n 'params': {\n 'skip_download': True,\n },\n- }\n+ }, {\n+ 'url': 'https://sxyprn.com/post/57ffcb2e1179b.html',\n+ 'only_matching': True,\n+ }]\n \n def _real_extract(self, url):\n video_id = self._match_id(url)\n", "issue": "[YourPorn] Domain changed to sxyprn.com\nYourPorn.sexy Domain changed to sxyprn.com which breaks the extractor and then falls back to the generic downloader which gets the thumbnail instead of video \r\n```\r\nyoutube-dl --verbose 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest' \r\n[debug] System config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--verbose', 'https://sxyprn.com/post/5cad9f8a26d51.html?sk=Many-Vids&so=0&ss=latest']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2019.04.07\r\n[debug] Python version 3.7.3 (CPython) - Linux-5.0.7-arch1-1-ARCH-x86_64-with-arch-Arch-Linux\r\n[debug] Proxy map: {}\r\n[generic] 5cad9f8a26d51: Requesting header\r\nWARNING: Falling back on generic information extractor.\r\n[generic] 5cad9f8a26d51: Downloading webpage\r\n[generic] 5cad9f8a26d51: Extracting information\r\n[generic] playlist Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn: Collected 12 video ids (downloading 12 of them)\r\n[download] Downloading video 1 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s10.trafficdeposit.com//blog/vid/5a6534e03bbb4/5cab7d0bd2a31/vidthumb.mp4'\r\n1-1.mp4\r\n[download] 100% of 249.05KiB in 00:00\r\n[download] Downloading video 2 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/5c7aa58c8c195/5caa34cd5927b/vidthumb.mp4'\r\n1-2.mp4\r\n[download] Downloading video 3 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s14.trafficdeposit.com//blog/vid/5ba53b584947a/5cacd0a1ad653/vidthumb.mp4'\r\n1-3.mp4\r\n[download] 100% of 123.50KiB in 00:00\r\n[download] Downloading video 4 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ab2a85d635c4/5ca9a3d3675fb/vidthumb.mp4'\r\n1-4.mp4\r\n[download] 100% of 172.44KiB in 00:00\r\n[download] Downloading video 5 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s19.trafficdeposit.com//blog/vid/5ba53b584947a/5cab801a05f9b/vidthumb.mp4'\r\n1-5.mp4\r\n[download] 100% of 118.19KiB in 00:00\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5b68a7a3c3d95/5cad071036ebd/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (6)-5cad9f8a26d51-6.mp4\r\n[download] 100% of 135.05KiB in 
00:00\r\n[download] Downloading video 7 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s20.trafficdeposit.com//blog/vid/59b613729e694/5cada7480fcec/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (7)-5cad9f8a26d51-7.mp4\r\n[download] 100% of 176.89KiB in 00:00\r\n[download] Downloading video 8 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s8.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad9dce1958a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (8)-5cad9f8a26d51-8.mp4\r\n[download] 100% of 155.98KiB in 00:00\r\n[download] Downloading video 9 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s15.trafficdeposit.com//blog/vid/5ab2a85d635c4/5cad966a5596a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (9)-5cad9f8a26d51-9.mp4\r\n[download] 100% of 380.13KiB in 00:01\r\n[download] Downloading video 10 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s13.trafficdeposit.com//blog/vid/5ba53b584947a/5cad35c5ce1e4/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (10)-5cad9f8a26d51-10.mp4\r\n[download] 100% of 254.63KiB in 00:00\r\n[download] Downloading video 11 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s4.trafficdeposit.com//blog/vid/5ba53b584947a/5cad2dd3ee706/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (11)-5cad9f8a26d51-11.mp4\r\n[download] 100% of 272.29KiB in 00:00\r\n[download] Downloading video 12 of 12\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on 'http://s18.trafficdeposit.com//blog/vid/899334366634979328/5cad288db659a/vidthumb.mp4'\r\n[download] Destination: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn (12)-5cad9f8a26d51-12.mp4\r\n[download] 100% of 279.43KiB in 00:00\r\n[download] Finished downloading playlist: Lunaxjames - Fucking My Asian Sex Doll - Manyvids #asian #bubblebutt #pale #roleplay #pigtails on SexyPorn\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n parse_duration,\n urljoin,\n)\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'duration': 165,\n 'age_limit': 18,\n },\n 'params': {\n 'skip_download': True,\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n 
group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn4/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n duration = parse_duration(self._search_regex(\n r'duration\\s*:\\s*<[^>]+>([\\d:]+)', webpage, 'duration',\n default=None))\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n 'duration': duration,\n 'age_limit': 18,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}]} | 3,230 | 318 |
gh_patches_debug_56707 | rasdani/github-patches | git_diff | openshift__openshift-ansible-2630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
maximum recursion depth exceeded -- related to callback/default.py
Running the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml`
I am getting:
```
statically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml
statically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml
ERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object
the full traceback was:
Traceback (most recent call last):
File "/bin/ansible-playbook", line 103, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 159, in run
results = pbex.run()
File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 89, in run
self._tqm.load_callbacks()
File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 172, in load_callbacks
self._stdout_callback = callback_loader.get(self._stdout_callback)
File "/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py", line 358, in get
obj = obj(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
...
super(CallbackModule, self).__init__()
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
File "/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py", line 41, in __init__
super(CallbackModule, self).__init__()
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
##### Version
```
atomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch
openshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch
```
The playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2
The 3.4 has the same problem; 3.2 doesn't.
```
openshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
openshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle
ansible-playbook 2.2.0.0
config file = /root/ansible.cfg
configured module search path = Default w/o overrides
```
##### Steps To Reproduce
In description
##### Current Result
Infinite recursion with ansible 2.2.0.0
No problem with ansible 2.1.2.0
The difference seems to be that 2.1.2.0 does not have the `__init__` in the
```
/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py
```
```
class CallbackModule(CallbackBase):
...
def __init__(self):
self._play = None
self._last_task_banner = None
super(CallbackModule, self).__init__()
```
If I remove it from the same file on the old ansible, deployment seems
to work, though I have no idea why it gets into the infinite recursion.
It doesn't make sense to me.
##### Expected Result
No problems with the infinite recursion
##### Additional Information
Red Hat Enterprise Linux Server release 7.2 (Maipo)
The inventory file
```
[OSEv3:children]
masters
nodes
[OSEv3:vars]
deployment_type=openshift-enterprise
ansible_ssh_user=cloud-user
ansible_sudo=true
ansible_sudo_user=root
openshift_use_manageiq=True
#use_cluster_metrics=true
openshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}]
openshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
openshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
[masters]
ose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85
[nodes]
ose3-master-08w85 openshift_node_labels="{'region':'infra','zone':'default'}" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85
ose3-node0-08w85 openshift_node_labels="{'region':'primary','zone':'east'}" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85
ose3-node1-08w85 openshift_node_labels="{'region':'primary','zone':'west'}" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85
```
</issue>
<code>
[start of callback_plugins/default.py]
1 '''Plugin to override the default output logic.'''
2
3 # upstream: https://gist.github.com/cliffano/9868180
4
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program. If not, see <http://www.gnu.org/licenses/>.
17
18
19 # For some reason this has to be done
20 import imp
21 import os
22
23 ANSIBLE_PATH = imp.find_module('ansible')[1]
24 DEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')
25 DEFAULT_MODULE = imp.load_source(
26 'ansible.plugins.callback.default',
27 DEFAULT_PATH
28 )
29
30 try:
31 from ansible.plugins.callback import CallbackBase
32 BASECLASS = CallbackBase
33 except ImportError: # < ansible 2.1
34 BASECLASS = DEFAULT_MODULE.CallbackModule
35
36
37 class CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init
38 '''
39 Override for the default callback module.
40
41 Render std err/out outside of the rest of the result which it prints with
42 indentation.
43 '''
44 CALLBACK_VERSION = 2.0
45 CALLBACK_TYPE = 'stdout'
46 CALLBACK_NAME = 'default'
47
48 def _dump_results(self, result):
49 '''Return the text to output for a result.'''
50 result['_ansible_verbose_always'] = True
51
52 save = {}
53 for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:
54 if key in result:
55 save[key] = result.pop(key)
56
57 output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access
58
59 for key in ['stdout', 'stderr', 'msg']:
60 if key in save and save[key]:
61 output += '\n\n%s:\n\n%s\n' % (key.upper(), save[key])
62
63 for key, value in save.items():
64 result[key] = value
65
66 return output
67
[end of callback_plugins/default.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/callback_plugins/default.py b/callback_plugins/default.py
--- a/callback_plugins/default.py
+++ b/callback_plugins/default.py
@@ -45,6 +45,9 @@
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
+ def __init__(self, *args, **kwargs):
+ BASECLASS.__init__(self, *args, **kwargs)
+
def _dump_results(self, result):
'''Return the text to output for a result.'''
result['_ansible_verbose_always'] = True
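
The fix above side-steps the loop by calling `BASECLASS.__init__` directly instead of going through `super()`. For background on how that kind of loop can arise at all, here is a minimal, self-contained sketch (not the Ansible code): when the class name used inside the parent's `__init__` is resolved at call time and has been rebound to the subclass, the parent keeps re-dispatching to itself, which matches the repeating `super(CallbackModule, self).__init__()` frames in the reported traceback.

```python
# Illustrative only: `Name` being rebound to the subclass is the assumption
# that turns super() into a self-call loop.
class Parent(object):
    def __init__(self):
        super(Name, self).__init__()   # `Name` is looked up when __init__ runs


class Child(Parent):
    pass


Name = Child   # now super(Name, self) inside Parent.__init__ means super(Child, self)
Child()        # Parent.__init__ -> Parent.__init__ -> ... -> maximum recursion depth exceeded
```

In the plugin in question such a rebinding is plausible because the override file is also named `default.py` and also defines a `CallbackModule`, so it can end up loaded on top of the stock module; the patch avoids the question entirely by not relying on `super()`.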
| {"golden_diff": "diff --git a/callback_plugins/default.py b/callback_plugins/default.py\n--- a/callback_plugins/default.py\n+++ b/callback_plugins/default.py\n@@ -45,6 +45,9 @@\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n \n+ def __init__(self, *args, **kwargs):\n+ BASECLASS.__init__(self, *args, **kwargs)\n+\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n", "issue": "maximum recursion depth exceeded -- related to callback/default.py\nRunning the `ansible-playbook -b --become-user root -i ansible-ose-inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml`\n\ni am getting\n\n```\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/registry/registry.yml\nstatically included: /usr/share/ansible/openshift-ansible/roles/openshift_metrics/tasks/install.yml\nERROR! Unexpected Exception: maximum recursion depth exceeded while calling a Python object\nthe full traceback was:\n\nTraceback (most recent call last):\n File \"/bin/ansible-playbook\", line 103, in <module>\n exit_code = cli.run()\n File \"/usr/lib/python2.7/site-packages/ansible/cli/playbook.py\", line 159, in run\n results = pbex.run()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py\", line 89, in run\n self._tqm.load_callbacks()\n File \"/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py\", line 172, in load_callbacks\n self._stdout_callback = callback_loader.get(self._stdout_callback)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/__init__.py\", line 358, in get\n obj = obj(*args, **kwargs)\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n...\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\n File \"/usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\", line 41, in __init__\n super(CallbackModule, self).__init__()\nRuntimeError: maximum recursion depth exceeded while calling a Python object\n```\n##### Version\n\n```\natomic-openshift-utils-3.3.37-1.git.0.10ff25b.el7.noarch\nopenshift-ansible-3.3.37-1.git.0.10ff25b.el7.noarch\n```\n\nThe playbooks are installed from AtomicOpenShift/3.3/2016-10-18.2\nThe 3.4 has same problem. 
3.2 Doesn't\n\n```\nopenshift-ansible.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-callback-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-docs.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-filter-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-lookup-plugins.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-playbooks.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\nopenshift-ansible-roles.noarch 3.3.37-1.git.0.10ff25b.el7 @AtomicOpenShift-3.3-Puddle\n\nansible-playbook 2.2.0.0\n config file = /root/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Steps To Reproduce\n\nIn description\n##### Current Result\n\nInfinite recursion with ansible 2.2.0.0\nNo problem with ansible 2.1.2.0\n\n The difference seems to be that the 2.1.2.0 do not have the `__init__` in the\n\n```\n /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.py\n```\n\n```\nclass CallbackModule(CallbackBase):\n...\n def __init__(self):\n\n self._play = None\n self._last_task_banner = None\n super(CallbackModule, self).__init__()\n```\n\nIf I remove it from the same file on the old ansible, deployment seems\nto work. Though I have no idea why it get's to the infinite recursion.\nIt doesn't make sense to me.\n##### Expected Result\n\nNo problems with the infinite recursion\n##### Additional Information\n\nRed Hat Enterprise Linux Server release 7.2 (Maipo)\n\nThe inventory file\n\n```\n[OSEv3:children]\nmasters\nnodes\n\n[OSEv3:vars]\ndeployment_type=openshift-enterprise\nansible_ssh_user=cloud-user\nansible_sudo=true\nansible_sudo_user=root\nopenshift_use_manageiq=True\n#use_cluster_metrics=true\n\nopenshift_additional_repos=[{'id': 'ose-devel', 'name': 'ose-devel', 'baseurl': 'http://download.xxx.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-errata/3.3/latest/RH7-RHAOS-3.3/x86_64/os/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}, {'id':'rhel-extras-candidate','name':'rhel-extras-candidate','baseurl':'http://download.xxx..redhat.com/brewroot/repos/extras-rhel-7.2-candidate/latest/x86_64/', 'enabled': 1, 'gpgcheck': 0, 'skip_if_unavailable': 1}]\nopenshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\nopenshift_docker_insecure_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888\n\n[masters]\nose3-master-08w85 openshift_scheduleable=True openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\n[nodes]\nose3-master-08w85 openshift_node_labels=\"{'region':'infra','zone':'default'}\" openshift_hostname=ose3-master-08w85 openshift_public_hostname=ose3-master-08w85\n\nose3-node0-08w85 openshift_node_labels=\"{'region':'primary','zone':'east'}\" openshift_hostname=ose3-node0-08w85 openshift_public_hostname=ose3-node0-08w85\nose3-node1-08w85 openshift_node_labels=\"{'region':'primary','zone':'west'}\" openshift_hostname=ose3-node1-08w85 openshift_public_hostname=ose3-node1-08w85\n```\n\n", "before_files": [{"content": "'''Plugin to override the default output logic.'''\n\n# upstream: https://gist.github.com/cliffano/9868180\n\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed 
in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\n\n# For some reason this has to be done\nimport imp\nimport os\n\nANSIBLE_PATH = imp.find_module('ansible')[1]\nDEFAULT_PATH = os.path.join(ANSIBLE_PATH, 'plugins/callback/default.py')\nDEFAULT_MODULE = imp.load_source(\n 'ansible.plugins.callback.default',\n DEFAULT_PATH\n)\n\ntry:\n from ansible.plugins.callback import CallbackBase\n BASECLASS = CallbackBase\nexcept ImportError: # < ansible 2.1\n BASECLASS = DEFAULT_MODULE.CallbackModule\n\n\nclass CallbackModule(DEFAULT_MODULE.CallbackModule): # pylint: disable=too-few-public-methods,no-init\n '''\n Override for the default callback module.\n\n Render std err/out outside of the rest of the result which it prints with\n indentation.\n '''\n CALLBACK_VERSION = 2.0\n CALLBACK_TYPE = 'stdout'\n CALLBACK_NAME = 'default'\n\n def _dump_results(self, result):\n '''Return the text to output for a result.'''\n result['_ansible_verbose_always'] = True\n\n save = {}\n for key in ['stdout', 'stdout_lines', 'stderr', 'stderr_lines', 'msg']:\n if key in result:\n save[key] = result.pop(key)\n\n output = BASECLASS._dump_results(self, result) # pylint: disable=protected-access\n\n for key in ['stdout', 'stderr', 'msg']:\n if key in save and save[key]:\n output += '\\n\\n%s:\\n\\n%s\\n' % (key.upper(), save[key])\n\n for key, value in save.items():\n result[key] = value\n\n return output\n", "path": "callback_plugins/default.py"}]} | 2,769 | 116 |
gh_patches_debug_39191 | rasdani/github-patches | git_diff | wandb__wandb-516 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use six for configparser for py2 compat
</issue>
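For context on what this one-line request implies: `six.moves` exposes the module under a single name on both interpreters (py2's `ConfigParser`, py3's `configparser`), so the import at the top of the file below is the main thing that has to change. A minimal sketch of the pattern:

```python
# Sketch of the py2/py3-compatible import the issue asks for.
from six.moves import configparser

config = configparser.ConfigParser()
config.add_section("client")
config.set("client", "key", "value")
print(config.get("client", "key"))
```

One wrinkle worth flagging: the current code also leans on the private `configparser._UNSET` default (see the `get` method below), which only exists in the Python 3 module, so a py2-compatible change needs its own sentinel value as well.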
<code>
[start of wandb/settings.py]
1 import os
2 import configparser
3
4 import wandb.util as util
5 from wandb import core, env, wandb_dir
6
7
8 class Settings(object):
9 """Global W&B settings stored under $WANDB_CONFIG_DIR/settings.
10 """
11
12 DEFAULT_SECTION = "default"
13
14 def __init__(self, load_settings=True):
15 config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser("~"), ".config", "wandb"))
16
17 # Ensure the config directory and settings file both exist.
18 util.mkdir_exists_ok(config_dir)
19 util.mkdir_exists_ok(wandb_dir())
20
21 self._global_settings_path = os.path.join(config_dir, 'settings')
22 self._global_settings = Settings._settings_wth_defaults({})
23
24 self._local_settings_path = os.path.join(wandb_dir(), 'settings')
25 self._local_settings = Settings._settings_wth_defaults({})
26
27 if load_settings:
28 self._global_settings.read([self._global_settings_path])
29 self._local_settings.read([self._local_settings_path])
30
31 def get(self, section, key, fallback=configparser._UNSET):
32 # Try the local settings first. If we can't find the key, then try the global settings.
33 # If a fallback is provided, return it if we can't find the key in either the local or global
34 # settings.
35 try:
36 return self._local_settings.get(section, key)
37 except configparser.NoOptionError:
38 return self._global_settings.get(section, key, fallback=fallback)
39
40 def set(self, section, key, value, globally=False):
41 def write_setting(settings, settings_path):
42 if not settings.has_section(section):
43 settings.add_section(section)
44 settings.set(section, key, str(value))
45 with open(settings_path, "w+") as f:
46 settings.write(f)
47
48 if globally:
49 write_setting(self._global_settings, self._global_settings_path)
50 else:
51 write_setting(self._local_settings, self._local_settings_path)
52
53 def clear(self, section, key, globally=False):
54 def clear_setting(settings, settings_path):
55 settings.remove_option(section, key)
56 with open(settings_path, "w+") as f:
57 settings.write(f)
58
59 if globally:
60 clear_setting(self._global_settings, self._global_settings_path)
61 else:
62 clear_setting(self._local_settings, self._local_settings_path)
63
64 def items(self, section=None):
65 section = section if section is not None else Settings.DEFAULT_SECTION
66
67 result = {'section': section}
68
69 try:
70 if section in self._global_settings.sections():
71 for option in self._global_settings.options(section):
72 result[option] = self._global_settings.get(section, option)
73 if section in self._local_settings.sections():
74 for option in self._local_settings.options(section):
75 result[option] = self._local_settings.get(section, option)
76 except configparser.InterpolationSyntaxError:
77 core.termwarn("Unable to parse settings file")
78
79 return result
80
81 @staticmethod
82 def _settings_wth_defaults(default_settings):
83 config = configparser.ConfigParser()
84 config.add_section(Settings.DEFAULT_SECTION)
85 for key, value in default_settings.items():
86 config.set(Settings.DEFAULT_SECTION, key, str(value))
87 return config
88
[end of wandb/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wandb/settings.py b/wandb/settings.py
--- a/wandb/settings.py
+++ b/wandb/settings.py
@@ -1,5 +1,6 @@
import os
-import configparser
+
+from six.moves import configparser
import wandb.util as util
from wandb import core, env, wandb_dir
@@ -9,7 +10,9 @@
"""Global W&B settings stored under $WANDB_CONFIG_DIR/settings.
"""
- DEFAULT_SECTION = "default"
+ DEFAULT_SECTION = "client"
+
+ _UNSET = object()
def __init__(self, load_settings=True):
config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser("~"), ".config", "wandb"))
@@ -19,23 +22,29 @@
util.mkdir_exists_ok(wandb_dir())
self._global_settings_path = os.path.join(config_dir, 'settings')
- self._global_settings = Settings._settings_wth_defaults({})
+ self._global_settings = Settings._settings()
self._local_settings_path = os.path.join(wandb_dir(), 'settings')
- self._local_settings = Settings._settings_wth_defaults({})
+ self._local_settings = Settings._settings()
if load_settings:
self._global_settings.read([self._global_settings_path])
self._local_settings.read([self._local_settings_path])
- def get(self, section, key, fallback=configparser._UNSET):
+ def get(self, section, key, fallback=_UNSET):
# Try the local settings first. If we can't find the key, then try the global settings.
# If a fallback is provided, return it if we can't find the key in either the local or global
# settings.
try:
return self._local_settings.get(section, key)
except configparser.NoOptionError:
- return self._global_settings.get(section, key, fallback=fallback)
+ try:
+ return self._global_settings.get(section, key)
+ except configparser.NoOptionError:
+ if fallback is not Settings._UNSET:
+ return fallback
+ else:
+ raise
def set(self, section, key, value, globally=False):
def write_setting(settings, settings_path):
@@ -79,7 +88,7 @@
return result
@staticmethod
- def _settings_wth_defaults(default_settings):
+ def _settings(default_settings={}):
config = configparser.ConfigParser()
config.add_section(Settings.DEFAULT_SECTION)
for key, value in default_settings.items():
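
A note on the `_UNSET = object()` sentinel introduced in the diff above: it reproduces, in a py2-safe way, what the py3-only `configparser._UNSET` provided, namely a way to tell "no fallback was passed" apart from "fallback=None was passed". A minimal self-contained sketch of the idiom (the `_store` dict is just a stand-in for the real settings lookup):

```python
_UNSET = object()                    # unique sentinel that no caller can pass by accident
_store = {"section.key": "value"}    # stand-in for the real settings backend

def get(key, fallback=_UNSET):
    try:
        return _store[key]
    except KeyError:
        if fallback is not _UNSET:
            return fallback          # an explicit fallback (possibly None) was given
        raise                        # no fallback given: let the error propagate

assert get("section.key") == "value"
assert get("missing", fallback=None) is None
```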
| {"golden_diff": "diff --git a/wandb/settings.py b/wandb/settings.py\n--- a/wandb/settings.py\n+++ b/wandb/settings.py\n@@ -1,5 +1,6 @@\n import os\n-import configparser\n+\n+from six.moves import configparser\n \n import wandb.util as util\n from wandb import core, env, wandb_dir\n@@ -9,7 +10,9 @@\n \"\"\"Global W&B settings stored under $WANDB_CONFIG_DIR/settings.\n \"\"\"\n \n- DEFAULT_SECTION = \"default\"\n+ DEFAULT_SECTION = \"client\"\n+\n+ _UNSET = object()\n \n def __init__(self, load_settings=True):\n config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser(\"~\"), \".config\", \"wandb\"))\n@@ -19,23 +22,29 @@\n util.mkdir_exists_ok(wandb_dir())\n \n self._global_settings_path = os.path.join(config_dir, 'settings')\n- self._global_settings = Settings._settings_wth_defaults({})\n+ self._global_settings = Settings._settings()\n \n self._local_settings_path = os.path.join(wandb_dir(), 'settings')\n- self._local_settings = Settings._settings_wth_defaults({})\n+ self._local_settings = Settings._settings()\n \n if load_settings:\n self._global_settings.read([self._global_settings_path])\n self._local_settings.read([self._local_settings_path])\n \n- def get(self, section, key, fallback=configparser._UNSET):\n+ def get(self, section, key, fallback=_UNSET):\n # Try the local settings first. If we can't find the key, then try the global settings.\n # If a fallback is provided, return it if we can't find the key in either the local or global\n # settings.\n try:\n return self._local_settings.get(section, key)\n except configparser.NoOptionError:\n- return self._global_settings.get(section, key, fallback=fallback)\n+ try:\n+ return self._global_settings.get(section, key)\n+ except configparser.NoOptionError:\n+ if fallback is not Settings._UNSET:\n+ return fallback\n+ else:\n+ raise\n \n def set(self, section, key, value, globally=False):\n def write_setting(settings, settings_path):\n@@ -79,7 +88,7 @@\n return result\n \n @staticmethod\n- def _settings_wth_defaults(default_settings):\n+ def _settings(default_settings={}):\n config = configparser.ConfigParser()\n config.add_section(Settings.DEFAULT_SECTION)\n for key, value in default_settings.items():\n", "issue": "use six for configparser for py2 compat\n\n", "before_files": [{"content": "import os\nimport configparser\n\nimport wandb.util as util\nfrom wandb import core, env, wandb_dir\n\n\nclass Settings(object):\n \"\"\"Global W&B settings stored under $WANDB_CONFIG_DIR/settings.\n \"\"\"\n\n DEFAULT_SECTION = \"default\"\n\n def __init__(self, load_settings=True):\n config_dir = os.environ.get(env.CONFIG_DIR, os.path.join(os.path.expanduser(\"~\"), \".config\", \"wandb\"))\n\n # Ensure the config directory and settings file both exist.\n util.mkdir_exists_ok(config_dir)\n util.mkdir_exists_ok(wandb_dir())\n\n self._global_settings_path = os.path.join(config_dir, 'settings')\n self._global_settings = Settings._settings_wth_defaults({})\n\n self._local_settings_path = os.path.join(wandb_dir(), 'settings')\n self._local_settings = Settings._settings_wth_defaults({})\n\n if load_settings:\n self._global_settings.read([self._global_settings_path])\n self._local_settings.read([self._local_settings_path])\n\n def get(self, section, key, fallback=configparser._UNSET):\n # Try the local settings first. 
If we can't find the key, then try the global settings.\n # If a fallback is provided, return it if we can't find the key in either the local or global\n # settings.\n try:\n return self._local_settings.get(section, key)\n except configparser.NoOptionError:\n return self._global_settings.get(section, key, fallback=fallback)\n\n def set(self, section, key, value, globally=False):\n def write_setting(settings, settings_path):\n if not settings.has_section(section):\n settings.add_section(section)\n settings.set(section, key, str(value))\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n write_setting(self._global_settings, self._global_settings_path)\n else:\n write_setting(self._local_settings, self._local_settings_path)\n\n def clear(self, section, key, globally=False):\n def clear_setting(settings, settings_path):\n settings.remove_option(section, key)\n with open(settings_path, \"w+\") as f:\n settings.write(f)\n\n if globally:\n clear_setting(self._global_settings, self._global_settings_path)\n else:\n clear_setting(self._local_settings, self._local_settings_path)\n\n def items(self, section=None):\n section = section if section is not None else Settings.DEFAULT_SECTION\n\n result = {'section': section}\n\n try:\n if section in self._global_settings.sections():\n for option in self._global_settings.options(section):\n result[option] = self._global_settings.get(section, option)\n if section in self._local_settings.sections():\n for option in self._local_settings.options(section):\n result[option] = self._local_settings.get(section, option)\n except configparser.InterpolationSyntaxError:\n core.termwarn(\"Unable to parse settings file\")\n\n return result\n\n @staticmethod\n def _settings_wth_defaults(default_settings):\n config = configparser.ConfigParser()\n config.add_section(Settings.DEFAULT_SECTION)\n for key, value in default_settings.items():\n config.set(Settings.DEFAULT_SECTION, key, str(value))\n return config\n", "path": "wandb/settings.py"}]} | 1,406 | 567 |
gh_patches_debug_24433 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-1652 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Only send `-d` to core components, instead of everything.
As of now, the build workflow will not send `-d` if the user does not specify `--distribution`.
However, if the user specifies something such as `--distribution rpm` and tries to build core+all plugins,
`-d` will be sent to the plugins as well.
The plugin build script does not know how to interpret `-d` and thus fails.
```
+ echo 'Invalid option: -?'
Invalid option: -?
+ exit 1
2022-02-17 23:58:36 ERROR Error building common-utils, retry with: ./build.sh manifests/1.3.0/opensearch-1.3.0.yml --component common-utils
Traceback (most recent call last):
File "./src/run_build.py", line 79, in <module>
sys.exit(main())
File "./src/run_build.py", line 67, in main
builder.build(build_recorder)
File "/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/build_workflow/builder_from_source.py", line 49, in build
self.git_repo.execute(build_command)
File "/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/git/git_repository.py", line 83, in execute
subprocess.check_call(command, cwd=cwd, shell=True)
File "/usr/lib64/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'bash /local/home/zhujiaxi/opensearch-build-peterzhuamazon/scripts/components/common-utils/build.sh -v 1.3.0 -p linux -a x64 -d rpm -s false -o builds' returned non-zero exit status 1.
```
We need to add a condition: if the component is not OpenSearch/OpenSearch-Dashboards, then `-d` will not be sent even when the distribution is not None.
</issue>
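A small sketch of the guard that last sentence asks for; the helper below is illustrative rather than the project's actual API, and the only assumption taken from the issue is that just the two core components understand a distribution flag:

```python
# Illustrative only: build the optional -d argument for a component.
DISTRIBUTION_AWARE_COMPONENTS = {"OpenSearch", "OpenSearch-Dashboards"}

def distribution_arg(component_name, distribution):
    if distribution and component_name in DISTRIBUTION_AWARE_COMPONENTS:
        return "-d {0}".format(distribution)
    return None

assert distribution_arg("common-utils", "rpm") is None   # plugins never see -d
assert distribution_arg("OpenSearch", "rpm") == "-d rpm"
```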
<code>
[start of src/build_workflow/builder_from_source.py]
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9 from build_workflow.build_recorder import BuildRecorder
10 from build_workflow.builder import Builder
11 from git.git_repository import GitRepository
12 from paths.script_finder import ScriptFinder
13
14 """
15 This class is responsible for executing the build for a component and passing the results to a build recorder.
16 It will notify the build recorder of build information such as repository and git ref, and any artifacts generated by the build.
17 Artifacts found in "<build root>/artifacts/<maven|plugins|libs|dist|core-plugins>" will be recognized and recorded.
18 """
19
20
21 class BuilderFromSource(Builder):
22 def checkout(self, work_dir: str) -> None:
23 self.git_repo = GitRepository(
24 self.component.repository,
25 self.component.ref,
26 os.path.join(work_dir, self.component.name),
27 self.component.working_directory,
28 )
29
30 def build(self, build_recorder: BuildRecorder) -> None:
31 build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)
32
33 build_command = " ".join(
34 filter(
35 None,
36 [
37 "bash",
38 build_script,
39 f"-v {self.target.version}",
40 f"-p {self.target.platform}",
41 f"-a {self.target.architecture}",
42 f"-d {self.target.distribution}" if self.target.distribution else None,
43 f"-s {str(self.target.snapshot).lower()}",
44 f"-o {self.output_path}",
45 ]
46 )
47 )
48
49 self.git_repo.execute(build_command)
50 build_recorder.record_component(self.component.name, self.git_repo)
51
52 def export_artifacts(self, build_recorder: BuildRecorder) -> None:
53 artifacts_path = os.path.join(self.git_repo.working_directory, self.output_path)
54 for artifact_type in ["maven", "dist", "plugins", "libs", "core-plugins"]:
55 for dir, _, files in os.walk(os.path.join(artifacts_path, artifact_type)):
56 for file_name in files:
57 absolute_path = os.path.join(dir, file_name)
58 relative_path = os.path.relpath(absolute_path, artifacts_path)
59 build_recorder.record_artifact(self.component.name, artifact_type, relative_path, absolute_path)
60
[end of src/build_workflow/builder_from_source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/build_workflow/builder_from_source.py b/src/build_workflow/builder_from_source.py
--- a/src/build_workflow/builder_from_source.py
+++ b/src/build_workflow/builder_from_source.py
@@ -28,6 +28,11 @@
)
def build(self, build_recorder: BuildRecorder) -> None:
+
+ # List of components whose build scripts support `-d` parameter
+ # Bundled plugins do not need `-d` as they are java based zips
+ DISTRIBUTION_SUPPORTED_COMPONENTS = ["OpenSearch", "OpenSearch-Dashboards"]
+
build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)
build_command = " ".join(
@@ -39,7 +44,7 @@
f"-v {self.target.version}",
f"-p {self.target.platform}",
f"-a {self.target.architecture}",
- f"-d {self.target.distribution}" if self.target.distribution else None,
+ f"-d {self.target.distribution}" if self.target.distribution and (self.component.name in DISTRIBUTION_SUPPORTED_COMPONENTS) else None,
f"-s {str(self.target.snapshot).lower()}",
f"-o {self.output_path}",
]
| {"golden_diff": "diff --git a/src/build_workflow/builder_from_source.py b/src/build_workflow/builder_from_source.py\n--- a/src/build_workflow/builder_from_source.py\n+++ b/src/build_workflow/builder_from_source.py\n@@ -28,6 +28,11 @@\n )\n \n def build(self, build_recorder: BuildRecorder) -> None:\n+\n+ # List of components whose build scripts support `-d` parameter\n+ # Bundled plugins do not need `-d` as they are java based zips\n+ DISTRIBUTION_SUPPORTED_COMPONENTS = [\"OpenSearch\", \"OpenSearch-Dashboards\"]\n+\n build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)\n \n build_command = \" \".join(\n@@ -39,7 +44,7 @@\n f\"-v {self.target.version}\",\n f\"-p {self.target.platform}\",\n f\"-a {self.target.architecture}\",\n- f\"-d {self.target.distribution}\" if self.target.distribution else None,\n+ f\"-d {self.target.distribution}\" if self.target.distribution and (self.component.name in DISTRIBUTION_SUPPORTED_COMPONENTS) else None,\n f\"-s {str(self.target.snapshot).lower()}\",\n f\"-o {self.output_path}\",\n ]\n", "issue": "[BUG] Only send `-d` to core components, instead of everything.\nAs of now, build workflow will not send `-d` if user does not specify `--distribution`.\r\nHowever, if user specify things such as `--distribution rpm` and try to build core+all plugins,\r\n`-d` will be sent to plugins as well.\r\n\r\nThe plugin build script does not know how to interpret `-d` thus fail.\r\n```\r\n+ echo 'Invalid option: -?'\r\nInvalid option: -?\r\n+ exit 1\r\n2022-02-17 23:58:36 ERROR Error building common-utils, retry with: ./build.sh manifests/1.3.0/opensearch-1.3.0.yml --component common-utils\r\nTraceback (most recent call last):\r\n File \"./src/run_build.py\", line 79, in <module>\r\n sys.exit(main())\r\n File \"./src/run_build.py\", line 67, in main\r\n builder.build(build_recorder)\r\n File \"/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/build_workflow/builder_from_source.py\", line 49, in build\r\n self.git_repo.execute(build_command)\r\n File \"/local/home/zhujiaxi/opensearch-build-peterzhuamazon/src/git/git_repository.py\", line 83, in execute\r\n subprocess.check_call(command, cwd=cwd, shell=True)\r\n File \"/usr/lib64/python3.7/subprocess.py\", line 363, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command 'bash /local/home/zhujiaxi/opensearch-build-peterzhuamazon/scripts/components/common-utils/build.sh -v 1.3.0 -p linux -a x64 -d rpm -s false -o builds' returned non-zero exit status 1.\r\n```\r\n\r\nNeed to add a condition where if component != OpenSearch/OpenSearch-Dashboards, then `-d` will not be sent even if not None.\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom build_workflow.build_recorder import BuildRecorder\nfrom build_workflow.builder import Builder\nfrom git.git_repository import GitRepository\nfrom paths.script_finder import ScriptFinder\n\n\"\"\"\nThis class is responsible for executing the build for a component and passing the results to a build recorder.\nIt will notify the build recorder of build information such as repository and git ref, and any artifacts generated by the build.\nArtifacts found in \"<build root>/artifacts/<maven|plugins|libs|dist|core-plugins>\" will be recognized and recorded.\n\"\"\"\n\n\nclass 
BuilderFromSource(Builder):\n def checkout(self, work_dir: str) -> None:\n self.git_repo = GitRepository(\n self.component.repository,\n self.component.ref,\n os.path.join(work_dir, self.component.name),\n self.component.working_directory,\n )\n\n def build(self, build_recorder: BuildRecorder) -> None:\n build_script = ScriptFinder.find_build_script(self.target.name, self.component.name, self.git_repo.working_directory)\n\n build_command = \" \".join(\n filter(\n None,\n [\n \"bash\",\n build_script,\n f\"-v {self.target.version}\",\n f\"-p {self.target.platform}\",\n f\"-a {self.target.architecture}\",\n f\"-d {self.target.distribution}\" if self.target.distribution else None,\n f\"-s {str(self.target.snapshot).lower()}\",\n f\"-o {self.output_path}\",\n ]\n )\n )\n\n self.git_repo.execute(build_command)\n build_recorder.record_component(self.component.name, self.git_repo)\n\n def export_artifacts(self, build_recorder: BuildRecorder) -> None:\n artifacts_path = os.path.join(self.git_repo.working_directory, self.output_path)\n for artifact_type in [\"maven\", \"dist\", \"plugins\", \"libs\", \"core-plugins\"]:\n for dir, _, files in os.walk(os.path.join(artifacts_path, artifact_type)):\n for file_name in files:\n absolute_path = os.path.join(dir, file_name)\n relative_path = os.path.relpath(absolute_path, artifacts_path)\n build_recorder.record_artifact(self.component.name, artifact_type, relative_path, absolute_path)\n", "path": "src/build_workflow/builder_from_source.py"}]} | 1,604 | 278 |
gh_patches_debug_18245 | rasdani/github-patches | git_diff | streamlink__streamlink-338 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TVCatchup addon not working anymore
root@ovh2:/data# streamlink http://tvcatchup.com/watch/channel4
[cli][info] streamlink is running as root! Be careful!
[cli][info] Found matching plugin tvcatchup for URL http://tvcatchup.com/watch/channel4
error: No streams found on this URL: http://tvcatchup.com/watch/channel4
root@ovh2:/data# streamlink --plugins
[cli][info] streamlink is running as root! Be careful!
Loaded plugins: adultswim, afreeca, afreecatv, aftonbladet, alieztv, antenna, ard_live, ard_mediathek, artetv, atresplayer, azubutv, bambuser, beam, beattv, bigo, bilibili, bliptv, chaturbate, cinergroup, connectcast, crunchyroll, cybergame, dailymotion, dingittv, disney_de, dmcloud, dmcloud_embed, dogan, dogus, dommune, douyutv, dplay, drdk, euronews, expressen, filmon, filmon_us, foxtr, furstream, gaminglive, gomexp, goodgame, hitbox, itvplayer, kanal7, letontv, livecodingtv, livestation, livestream, media_ccc_de, mediaklikk, meerkat, mips, mlgtv, nhkworld, nineanime, nos, npo, nrk, oldlivestream, openrectv, orf_tvthek, pandatv, periscope, picarto, piczel, powerapp, rtlxl, rtve, ruv, seemeplay, servustv, speedrunslive, sportschau, ssh101, stream, streamboat, streamingvideoprovider, streamlive, streamme, streamupcom, svtplay, tga, tigerdile, trt, turkuvaz, tv360, tv3cat, tv4play, tv8, tvcatchup, tvplayer, twitch, ustreamtv, vaughnlive, veetle, vgtv, viagame, viasat, viasat_embed, vidio, wattv, webtv, weeb, younow, youtube, zdf_mediathek
</issue>
<code>
[start of src/streamlink/plugins/tvcatchup.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import http
5 from streamlink.stream import HLSStream
6
7 USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
8 _url_re = re.compile("http://(?:www\.)?tvcatchup.com/watch/\w+")
9 _stream_re = re.compile(r"\"(?P<stream_url>https?://.*m3u8\?.*clientKey=[^\"]*)\";")
10
11
12 class TVCatchup(Plugin):
13 @classmethod
14 def can_handle_url(cls, url):
15 return _url_re.match(url)
16
17 def _get_streams(self):
18 """
19 Finds the streams from tvcatchup.com.
20 """
21 http.headers.update({"User-Agent": USER_AGENT})
22 res = http.get(self.url)
23
24 match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)
25
26 if match:
27 stream_url = match.groupdict()["stream_url"]
28
29 if stream_url:
30 if "_adp" in stream_url:
31 return HLSStream.parse_variant_playlist(self.session, stream_url)
32 else:
33 return {'576p': HLSStream(self.session, stream_url)}
34
35
36 __plugin__ = TVCatchup
37
[end of src/streamlink/plugins/tvcatchup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py
--- a/src/streamlink/plugins/tvcatchup.py
+++ b/src/streamlink/plugins/tvcatchup.py
@@ -6,7 +6,7 @@
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
_url_re = re.compile("http://(?:www\.)?tvcatchup.com/watch/\w+")
-_stream_re = re.compile(r"\"(?P<stream_url>https?://.*m3u8\?.*clientKey=[^\"]*)\";")
+_stream_re = re.compile(r'''(?P<q>["'])(?P<stream_url>https?://.*m3u8\?.*clientKey=.*?)(?P=q)''')
class TVCatchup(Plugin):
@@ -24,7 +24,7 @@
match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)
if match:
- stream_url = match.groupdict()["stream_url"]
+ stream_url = match.group("stream_url")
if stream_url:
if "_adp" in stream_url:
| {"golden_diff": "diff --git a/src/streamlink/plugins/tvcatchup.py b/src/streamlink/plugins/tvcatchup.py\n--- a/src/streamlink/plugins/tvcatchup.py\n+++ b/src/streamlink/plugins/tvcatchup.py\n@@ -6,7 +6,7 @@\n \n USER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n _url_re = re.compile(\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n-_stream_re = re.compile(r\"\\\"(?P<stream_url>https?://.*m3u8\\?.*clientKey=[^\\\"]*)\\\";\")\n+_stream_re = re.compile(r'''(?P<q>[\"'])(?P<stream_url>https?://.*m3u8\\?.*clientKey=.*?)(?P=q)''')\n \n \n class TVCatchup(Plugin):\n@@ -24,7 +24,7 @@\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n \n if match:\n- stream_url = match.groupdict()[\"stream_url\"]\n+ stream_url = match.group(\"stream_url\")\n \n if stream_url:\n if \"_adp\" in stream_url:\n", "issue": "TVCatchup addon not working anymore\nroot@ovh2:/data# streamlink http://tvcatchup.com/watch/channel4\r\n[cli][info] streamlink is running as root! Be careful!\r\n[cli][info] Found matching plugin tvcatchup for URL http://tvcatchup.com/watch/channel4\r\nerror: No streams found on this URL: http://tvcatchup.com/watch/channel4\r\nroot@ovh2:/data# streamlink --plugins\r\n[cli][info] streamlink is running as root! Be careful!\r\nLoaded plugins: adultswim, afreeca, afreecatv, aftonbladet, alieztv, antenna, ard_live, ard_mediathek, artetv, atresplayer, azubutv, bambuser, beam, beattv, bigo, bilibili, bliptv, chaturbate, cinergroup, connectcast, crunchyroll, cybergame, dailymotion, dingittv, disney_de, dmcloud, dmcloud_embed, dogan, dogus, dommune, douyutv, dplay, drdk, euronews, expressen, filmon, filmon_us, foxtr, furstream, gaminglive, gomexp, goodgame, hitbox, itvplayer, kanal7, letontv, livecodingtv, livestation, livestream, media_ccc_de, mediaklikk, meerkat, mips, mlgtv, nhkworld, nineanime, nos, npo, nrk, oldlivestream, openrectv, orf_tvthek, pandatv, periscope, picarto, piczel, powerapp, rtlxl, rtve, ruv, seemeplay, servustv, speedrunslive, sportschau, ssh101, stream, streamboat, streamingvideoprovider, streamlive, streamme, streamupcom, svtplay, tga, tigerdile, trt, turkuvaz, tv360, tv3cat, tv4play, tv8, tvcatchup, tvplayer, twitch, ustreamtv, vaughnlive, veetle, vgtv, viagame, viasat, viasat_embed, vidio, wattv, webtv, weeb, younow, youtube, zdf_mediathek\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http\nfrom streamlink.stream import HLSStream\n\nUSER_AGENT = \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\"\n_url_re = re.compile(\"http://(?:www\\.)?tvcatchup.com/watch/\\w+\")\n_stream_re = re.compile(r\"\\\"(?P<stream_url>https?://.*m3u8\\?.*clientKey=[^\\\"]*)\\\";\")\n\n\nclass TVCatchup(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n \"\"\"\n Finds the streams from tvcatchup.com.\n \"\"\"\n http.headers.update({\"User-Agent\": USER_AGENT})\n res = http.get(self.url)\n\n match = _stream_re.search(res.text, re.IGNORECASE | re.MULTILINE)\n\n if match:\n stream_url = match.groupdict()[\"stream_url\"]\n\n if stream_url:\n if \"_adp\" in stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n return {'576p': HLSStream(self.session, stream_url)}\n\n\n__plugin__ = TVCatchup\n", "path": "src/streamlink/plugins/tvcatchup.py"}]} | 1,424 | 296 |
gh_patches_debug_4935 | rasdani/github-patches | git_diff | quantumlib__Cirq-4249 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Push to PyPi failing
```
error in cirq setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
```
See https://github.com/quantumlib/Cirq/runs/2851981344
</issue>
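For background on the error quoted above: setuptools only accepts plain name/version requirement specifiers inside `extras_require`, so a direct-URL line in the dev requirements file (a `git+https://...` entry, per the pip issue referenced in the patch shown later in this entry) makes the whole setup call fail. A minimal sketch of the filtering idea, using hypothetical requirement lines:

```python
# Hypothetical example lines; only the plain specifier survives the filter.
raw_lines = [
    "pytest==6.2.4",
    "git+https://github.com/example/tool@main#egg=tool",
]
dev_requirements = [r.strip() for r in raw_lines if "git+http" not in r]
assert dev_requirements == ["pytest==6.2.4"]
```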
<code>
[start of setup.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17 from setuptools import setup
18
19 # This reads the __version__ variable from cirq/_version.py
20 __version__ = ''
21
22 from dev_tools import modules
23 from dev_tools.requirements import explode
24
25 exec(open('cirq-core/cirq/_version.py').read())
26
27 name = 'cirq'
28
29 description = (
30 'A framework for creating, editing, and invoking '
31 'Noisy Intermediate Scale Quantum (NISQ) circuits.'
32 )
33
34 # README file as long_description.
35 long_description = io.open('README.rst', encoding='utf-8').read()
36
37 # If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.
38 # It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence
39 # it will be a pre-release version on PyPi. See
40 # https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning
41 # for more details.
42 if 'CIRQ_PRE_RELEASE_VERSION' in os.environ:
43 __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']
44 long_description = (
45 "**This is a development version of Cirq and may be "
46 "unstable.**\n\n**For the latest stable release of Cirq "
47 "see**\n`here <https://pypi.org/project/cirq>`__.\n\n" + long_description
48 )
49
50 # Sanity check
51 assert __version__, 'Version string cannot be empty'
52
53 # This is a pure metapackage that installs all our packages
54 requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]
55
56 dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')
57 dev_requirements = [r.strip() for r in dev_requirements]
58
59 setup(
60 name=name,
61 version=__version__,
62 url='http://github.com/quantumlib/cirq',
63 author='The Cirq Developers',
64 author_email='[email protected]',
65 python_requires='>=3.6.0',
66 install_requires=requirements,
67 extras_require={
68 'dev_env': dev_requirements,
69 },
70 license='Apache 2',
71 description=description,
72 long_description=long_description,
73 )
74
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -54,7 +54,9 @@
requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]
dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')
-dev_requirements = [r.strip() for r in dev_requirements]
+
+# filter out direct urls (https://github.com/pypa/pip/issues/6301)
+dev_requirements = [r.strip() for r in dev_requirements if "git+http" not in r]
setup(
name=name,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,7 +54,9 @@\n requirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]\n \n dev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')\n-dev_requirements = [r.strip() for r in dev_requirements]\n+\n+# filter out direct urls (https://github.com/pypa/pip/issues/6301)\n+dev_requirements = [r.strip() for r in dev_requirements if \"git+http\" not in r]\n \n setup(\n name=name,\n", "issue": "Push to PyPi failing\n```\r\nerror in cirq setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.\r\n```\r\n\r\nSee https://github.com/quantumlib/Cirq/runs/2851981344\r\n\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\n\nfrom dev_tools import modules\nfrom dev_tools.requirements import explode\n\nexec(open('cirq-core/cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = (\n 'A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.'\n)\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.\n# It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence\n# it will be a pre-release version on PyPi. See\n# https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning\n# for more details.\nif 'CIRQ_PRE_RELEASE_VERSION' in os.environ:\n __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" + long_description\n )\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\n# This is a pure metapackage that installs all our packages\nrequirements = [f'{p.name}=={p.version}' for p in modules.list_modules()]\n\ndev_requirements = explode('dev_tools/requirements/deps/dev-tools.txt')\ndev_requirements = [r.strip() for r in dev_requirements]\n\nsetup(\n name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires='>=3.6.0',\n install_requires=requirements,\n extras_require={\n 'dev_env': dev_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n)\n", "path": "setup.py"}]} | 1,354 | 134 |
gh_patches_debug_6783 | rasdani/github-patches | git_diff | pytorch__ignite-1048 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug adding handler in case of decoration + class function + filtered event
## 🐛 Bug description
I would like to report a bug when using a handler defined by a decorated function in a class with a filtered event.
The following code reproduces all possible situations for adding a handler defined w/wo decoration, in a class or not, w/wo an engine argument (here called `args`), using an event w/wo a filter:
```python
engine = Engine(lambda e, b: b)
# decorator
def decorated(fun):
@functools.wraps(fun)
def wrapper(*args, **kwargs):
return fun(*args, **kwargs)
return wrapper
# handler as a function
def foo():
print("foo")
# handler as a decorated function
@decorated
def decorated_foo():
print("decorated_foo")
# register handler as a function -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo)
# register handler as a function with filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo)
# register handler as a decorated function -- OK
engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo)
# register handler as a decorated function with filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo)
# handler as a function with engine (here args)
def foo_args(args):
print("foo_args", args)
# handler as a decorated function with engine
@decorated
def decorated_foo_args(args):
print("decorated_foo_args", args)
# register handler as a function with engine -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo_args)
# register handler as a function with engine and filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args)
# register handler as a decorated function with engine -- OK
engine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args)
# register handler as a decorated function with engine and filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args)
class Foo:
# handler as a class function (ie method)
def foo(self):
print("foo")
# handler as a decorated method
@decorated
def decorated_foo(self):
print("decorated_foo")
# handler as a method with engine
def foo_args(self, args):
print("foo_args", args)
# handler as a decorated method with engine
@decorated
def decorated_foo_args(self, args):
print("decorated_foo_args", args)
foo = Foo()
# register handler as a method -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo.foo)
# register handler as a method with filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo)
# register handler as a decorated method -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo)
# register handler as a decorated method with filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo)
# register handler as a method with engine -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args)
# register handler as a method with engine and filter -- OK
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args)
# register handler as a decorated method with engine -- OK
engine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args)
# register handler as a decorated method with engine and filter -- FAILED
engine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args)
engine.run([0])
```
The error is
```
Error adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self').
```
Why?
First, a handler registered for a filtered event is wrapped with a decorator. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used so the wrapper keeps the signature of the wrapped handler.
The failing case is a decorated method with an engine argument. So I guess `functools.wraps` works perfectly and captures `self` and `engine` as arguments, but the signature check (using `inspect.signature`) fails because `self` is reported as missing...
See signature checking
https://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5
I think this is related to the `follow_wrapped=True` argument of `inspect.signature`.
## Environment
- PyTorch Version (e.g., 1.4): 1.5
- Ignite Version (e.g., 0.3.0): 0.4
- OS (e.g., Linux): MacOS
- How you installed Ignite (`conda`, `pip`, source): Honda
- Python version: 3.7
- Any other relevant information:
</issue>
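A minimal sketch of the direction a fix could take, assuming the filtered-event wrapper keeps a reference back to the original handler (the `_parent` attribute that the patch at the end of this entry relies on); error reporting is omitted for brevity:

```python
import inspect

def _check_signature(fn, fn_description, *args, **kwargs):
    # When the handler was wrapped for a filtered event, inspect the original
    # handler instead of the functools.wraps-decorated wrapper, so that bound
    # methods keep their usual `self`-less signature.
    handler = fn._parent() if hasattr(fn, "_parent") else fn
    signature = inspect.signature(handler)
    signature.bind(*args, **kwargs)
```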
<code>
[start of ignite/engine/utils.py]
1 import inspect
2 from typing import Callable
3
4
5 def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:
6 signature = inspect.signature(fn)
7 try: # try without engine
8 signature.bind(*args, **kwargs)
9 except TypeError as exc:
10 fn_params = list(signature.parameters)
11 exception_msg = str(exc)
12 passed_params = list(args) + list(kwargs)
13 raise ValueError(
14 "Error adding {} '{}': "
15 "takes parameters {} but will be called with {}"
16 "({}).".format(fn, fn_description, fn_params, passed_params, exception_msg)
17 )
18
[end of ignite/engine/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/engine/utils.py b/ignite/engine/utils.py
--- a/ignite/engine/utils.py
+++ b/ignite/engine/utils.py
@@ -3,7 +3,11 @@
def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:
- signature = inspect.signature(fn)
+ # if handler with filter, check the handler rather than the decorator
+ if hasattr(fn, "_parent"):
+ signature = inspect.signature(fn._parent())
+ else:
+ signature = inspect.signature(fn)
try: # try without engine
signature.bind(*args, **kwargs)
except TypeError as exc:
| {"golden_diff": "diff --git a/ignite/engine/utils.py b/ignite/engine/utils.py\n--- a/ignite/engine/utils.py\n+++ b/ignite/engine/utils.py\n@@ -3,7 +3,11 @@\n \n \n def _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:\n- signature = inspect.signature(fn)\n+ # if handler with filter, check the handler rather than the decorator\n+ if hasattr(fn, \"_parent\"):\n+ signature = inspect.signature(fn._parent())\n+ else:\n+ signature = inspect.signature(fn)\n try: # try without engine\n signature.bind(*args, **kwargs)\n except TypeError as exc:\n", "issue": "Bug adding handler in case of decoration + class function + filtered event\n## \ud83d\udc1b Bug description\r\n\r\nI would like to report a bug using handler defined by decorated function in a class with filtered event.\r\n\r\nThe following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter\r\n\r\n```python\r\nengine = Engine(lambda e, b: b)\r\n\r\n# decorator\r\ndef decorated(fun):\r\n @functools.wraps(fun)\r\n def wrapper(*args, **kwargs):\r\n return fun(*args, **kwargs)\r\n return wrapper\r\n\r\n# handler as a function\r\ndef foo():\r\n print(\"foo\")\r\n\r\n# handler as a decorated function\r\n@decorated\r\ndef decorated_foo():\r\n print(\"decorated_foo\")\r\n\r\n# register handler as a function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo)\r\n# register handler as a function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo)\r\n# register handler as a decorated function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo)\r\n# register handler as a decorated function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo)\r\n\r\n\r\n# handler as a function with engine (here args)\r\ndef foo_args(args):\r\n print(\"foo_args\", args)\r\n\r\n\r\n# handler as a decorated function with engine \r\n@decorated\r\ndef decorated_foo_args(args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n# register handler as a function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo_args)\r\n# register handler as a function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args)\r\n# register handler as a decorated function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args)\r\n# register handler as a decorated function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args)\r\n\r\nclass Foo:\r\n # handler as a class function (ie method)\r\n def foo(self):\r\n print(\"foo\")\r\n\r\n # handler as a decorated method\r\n @decorated\r\n def decorated_foo(self):\r\n print(\"decorated_foo\")\r\n\r\n # handler as a method with engine\r\n def foo_args(self, args):\r\n print(\"foo_args\", args)\r\n\r\n # handler as a decorated method with engine\r\n @decorated\r\n def decorated_foo_args(self, args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n\r\nfoo = Foo()\r\n\r\n# register handler as a method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo)\r\n# register handler as a method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo)\r\n# register handler as a decorated method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo)\r\n# register handler as a decorated method with filter -- 
OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo)\r\n# register handler as a method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args)\r\n# register handler as a method with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args)\r\n# register handler as a decorated method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args)\r\n\r\n# register handler as a decorated method with engine and filter -- FAILED\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args)\r\n\r\nengine.run([0])\r\n```\r\n\r\nThe error is\r\n```\r\nError adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self').\r\n```\r\n\r\nWhy ? \r\n\r\nFirst, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler.\r\n\r\nThe failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... \r\n\r\nSee signature checking\r\nhttps://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5\r\n\r\nI think this is related to `follow_wrapped=True` argument of `inspect.signature`.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.5\r\n - Ignite Version (e.g., 0.3.0): 0.4\r\n - OS (e.g., Linux): MacOS\r\n - How you installed Ignite (`conda`, `pip`, source): Honda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\nBug adding handler in case of decoration + class function + filtered event\n## \ud83d\udc1b Bug description\r\n\r\nI would like to report a bug using handler defined by decorated function in a class with filtered event.\r\n\r\nThe following code reproduces all possible situations to add an handler defined w/wo decoration in a class or not, w/wo engine (or args), using an event w/wo filter\r\n\r\n```python\r\nengine = Engine(lambda e, b: b)\r\n\r\n# decorator\r\ndef decorated(fun):\r\n @functools.wraps(fun)\r\n def wrapper(*args, **kwargs):\r\n return fun(*args, **kwargs)\r\n return wrapper\r\n\r\n# handler as a function\r\ndef foo():\r\n print(\"foo\")\r\n\r\n# handler as a decorated function\r\n@decorated\r\ndef decorated_foo():\r\n print(\"decorated_foo\")\r\n\r\n# register handler as a function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo)\r\n# register handler as a function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo)\r\n# register handler as a decorated function -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo)\r\n# register handler as a decorated function with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo)\r\n\r\n\r\n# handler as a function with engine (here args)\r\ndef foo_args(args):\r\n print(\"foo_args\", args)\r\n\r\n\r\n# handler as a decorated function with engine \r\n@decorated\r\ndef decorated_foo_args(args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n# register handler as a function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo_args)\r\n# register handler as a 
function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo_args)\r\n# register handler as a decorated function with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, decorated_foo_args)\r\n# register handler as a decorated function with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), decorated_foo_args)\r\n\r\nclass Foo:\r\n # handler as a class function (ie method)\r\n def foo(self):\r\n print(\"foo\")\r\n\r\n # handler as a decorated method\r\n @decorated\r\n def decorated_foo(self):\r\n print(\"decorated_foo\")\r\n\r\n # handler as a method with engine\r\n def foo_args(self, args):\r\n print(\"foo_args\", args)\r\n\r\n # handler as a decorated method with engine\r\n @decorated\r\n def decorated_foo_args(self, args):\r\n print(\"decorated_foo_args\", args)\r\n\r\n\r\nfoo = Foo()\r\n\r\n# register handler as a method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo)\r\n# register handler as a method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo)\r\n# register handler as a decorated method -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo)\r\n# register handler as a decorated method with filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo)\r\n# register handler as a method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.foo_args)\r\n# register handler as a method with engine and filter -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.foo_args)\r\n# register handler as a decorated method with engine -- OK\r\nengine.add_event_handler(Events.EPOCH_STARTED, foo.decorated_foo_args)\r\n\r\n# register handler as a decorated method with engine and filter -- FAILED\r\nengine.add_event_handler(Events.EPOCH_STARTED(every=2), foo.decorated_foo_args)\r\n\r\nengine.run([0])\r\n```\r\n\r\nThe error is\r\n```\r\nError adding <function Foo.decorated_foo_args at 0x1229b6af0> 'handler': takes parameters ['self', 'args'] but will be called with [](missing a required argument: 'self').\r\n```\r\n\r\nWhy ? \r\n\r\nFirst, a handler defined with a filtered event is wrapped with decoration. See https://github.com/sdesrozis/ignite/blob/93be57aa3f71ce601391d59096c3b430c4d9487b/ignite/engine/engine.py#L198. Note that `functools.wraps` is used to fit the signature of the related handler.\r\n\r\nThe failed case is decorated method with engine. So, I guess `functools.wraps` works perfectly and catch `self` and `engine` as arguments. But the signature checking search (using `inspect.signature`) fails because missing `self`... 
\r\n\r\nSee signature checking\r\nhttps://github.com/pytorch/ignite/blob/0de7156bb284bd01d788252469a3b386f10abbd7/ignite/engine/utils.py#L5\r\n\r\nI think this is related to `follow_wrapped=True` argument of `inspect.signature`.\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.5\r\n - Ignite Version (e.g., 0.3.0): 0.4\r\n - OS (e.g., Linux): MacOS\r\n - How you installed Ignite (`conda`, `pip`, source): Honda\r\n - Python version: 3.7\r\n - Any other relevant information:\r\n\r\n\n", "before_files": [{"content": "import inspect\nfrom typing import Callable\n\n\ndef _check_signature(fn: Callable, fn_description: str, *args, **kwargs) -> None:\n signature = inspect.signature(fn)\n try: # try without engine\n signature.bind(*args, **kwargs)\n except TypeError as exc:\n fn_params = list(signature.parameters)\n exception_msg = str(exc)\n passed_params = list(args) + list(kwargs)\n raise ValueError(\n \"Error adding {} '{}': \"\n \"takes parameters {} but will be called with {}\"\n \"({}).\".format(fn, fn_description, fn_params, passed_params, exception_msg)\n )\n", "path": "ignite/engine/utils.py"}]} | 3,022 | 145 |
gh_patches_debug_1919 | rasdani/github-patches | git_diff | hylang__hy-885 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exclamation mark ! is not mangled
I noticed that https://github.com/hylang/hyway/blob/master/conway.hy uses "!" in `set!` and `get!`, but Hy doesn't mangle "!" into something else. The variable is added to the module as-is. That means it'll be hard to reach it from normal Python code. Also, hy2py on Hy code with `set!` returns invalid syntax: `def set!(`.
</issue>
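For context, the symbol-mangling rules live in `t_identifier` in `hy/lex/parser.py` (shown below): `*earmuffs*`, dashes, and a trailing `?` are rewritten, but a trailing `!` passes through untouched. One possible rule, mirroring the existing `?` handling (the `_bang` suffix is just one naming choice), would be:

```python
def mangle(p):
    # existing rules: *foo* -> FOO, foo-bar -> foo_bar, foo? -> is_foo
    if p.startswith("*") and p.endswith("*") and p not in ("*", "**"):
        p = p[1:-1].upper()
    if "-" in p and p != "-":
        p = p.replace("-", "_")
    if p.endswith("?") and p != "?":
        p = "is_%s" % (p[:-1])
    # proposed: treat a trailing "!" like "?", e.g. set! -> set_bang
    if p.endswith("!") and p != "!":
        p = "%s_bang" % (p[:-1])
    return p
```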
<code>
[start of hy/lex/parser.py]
1 # Copyright (c) 2013 Nicolas Dandrimont <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import sys
22 from functools import wraps
23
24 from rply import ParserGenerator
25
26 from hy.models.complex import HyComplex
27 from hy.models.cons import HyCons
28 from hy.models.dict import HyDict
29 from hy.models.expression import HyExpression
30 from hy.models.float import HyFloat
31 from hy.models.integer import HyInteger
32 from hy.models.keyword import HyKeyword
33 from hy.models.list import HyList
34 from hy.models.set import HySet
35 from hy.models.string import HyString
36 from hy.models.symbol import HySymbol
37
38 from .lexer import lexer
39 from .exceptions import LexException, PrematureEndOfInput
40
41
42 pg = ParserGenerator(
43 [rule.name for rule in lexer.rules] + ['$end'],
44 cache_id="hy_parser"
45 )
46
47
48 def set_boundaries(fun):
49 @wraps(fun)
50 def wrapped(p):
51 start = p[0].source_pos
52 end = p[-1].source_pos
53 ret = fun(p)
54 ret.start_line = start.lineno
55 ret.start_column = start.colno
56 if start is not end:
57 ret.end_line = end.lineno
58 ret.end_column = end.colno
59 else:
60 ret.end_line = start.lineno
61 ret.end_column = start.colno + len(p[0].value)
62 return ret
63 return wrapped
64
65
66 def set_quote_boundaries(fun):
67 @wraps(fun)
68 def wrapped(p):
69 start = p[0].source_pos
70 ret = fun(p)
71 ret.start_line = start.lineno
72 ret.start_column = start.colno
73 ret.end_line = p[-1].end_line
74 ret.end_column = p[-1].end_column
75 return ret
76 return wrapped
77
78
79 @pg.production("main : HASHBANG real_main")
80 def main_hashbang(p):
81 return p[1]
82
83
84 @pg.production("main : real_main")
85 def main(p):
86 return p[0]
87
88
89 @pg.production("real_main : list_contents")
90 def real_main(p):
91 return p[0]
92
93
94 @pg.production("real_main : $end")
95 def real_main_empty(p):
96 return []
97
98
99 def reject_spurious_dots(*items):
100 "Reject the spurious dots from items"
101 for list in items:
102 for tok in list:
103 if tok == "." and type(tok) == HySymbol:
104 raise LexException("Malformed dotted list",
105 tok.start_line, tok.start_column)
106
107
108 @pg.production("paren : LPAREN list_contents RPAREN")
109 @set_boundaries
110 def paren(p):
111 cont = p[1]
112
113 # Dotted lists are expressions of the form
114 # (a b c . d)
115 # that evaluate to nested cons cells of the form
116 # (a . (b . (c . d)))
117 if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == ".":
118
119 reject_spurious_dots(cont[:-2], cont[-1:])
120
121 if len(cont) == 3:
122 # Two-item dotted list: return the cons cell directly
123 return HyCons(cont[0], cont[2])
124 else:
125 # Return a nested cons cell
126 return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))
127
128 # Warn preemptively on a malformed dotted list.
129 # Only check for dots after the first item to allow for a potential
130 # attribute accessor shorthand
131 reject_spurious_dots(cont[1:])
132
133 return HyExpression(p[1])
134
135
136 @pg.production("paren : LPAREN RPAREN")
137 @set_boundaries
138 def empty_paren(p):
139 return HyExpression([])
140
141
142 @pg.production("list_contents : term list_contents")
143 def list_contents(p):
144 return [p[0]] + p[1]
145
146
147 @pg.production("list_contents : term")
148 def list_contents_single(p):
149 return [p[0]]
150
151
152 @pg.production("term : identifier")
153 @pg.production("term : paren")
154 @pg.production("term : dict")
155 @pg.production("term : list")
156 @pg.production("term : set")
157 @pg.production("term : string")
158 def term(p):
159 return p[0]
160
161
162 @pg.production("term : QUOTE term")
163 @set_quote_boundaries
164 def term_quote(p):
165 return HyExpression([HySymbol("quote"), p[1]])
166
167
168 @pg.production("term : QUASIQUOTE term")
169 @set_quote_boundaries
170 def term_quasiquote(p):
171 return HyExpression([HySymbol("quasiquote"), p[1]])
172
173
174 @pg.production("term : UNQUOTE term")
175 @set_quote_boundaries
176 def term_unquote(p):
177 return HyExpression([HySymbol("unquote"), p[1]])
178
179
180 @pg.production("term : UNQUOTESPLICE term")
181 @set_quote_boundaries
182 def term_unquote_splice(p):
183 return HyExpression([HySymbol("unquote_splice"), p[1]])
184
185
186 @pg.production("term : HASHREADER term")
187 @set_quote_boundaries
188 def hash_reader(p):
189 st = p[0].getstr()[1]
190 str_object = HyString(st)
191 expr = p[1]
192 return HyExpression([HySymbol("dispatch_reader_macro"), str_object, expr])
193
194
195 @pg.production("set : HLCURLY list_contents RCURLY")
196 @set_boundaries
197 def t_set(p):
198 return HySet(p[1])
199
200
201 @pg.production("set : HLCURLY RCURLY")
202 @set_boundaries
203 def empty_set(p):
204 return HySet([])
205
206
207 @pg.production("dict : LCURLY list_contents RCURLY")
208 @set_boundaries
209 def t_dict(p):
210 return HyDict(p[1])
211
212
213 @pg.production("dict : LCURLY RCURLY")
214 @set_boundaries
215 def empty_dict(p):
216 return HyDict([])
217
218
219 @pg.production("list : LBRACKET list_contents RBRACKET")
220 @set_boundaries
221 def t_list(p):
222 return HyList(p[1])
223
224
225 @pg.production("list : LBRACKET RBRACKET")
226 @set_boundaries
227 def t_empty_list(p):
228 return HyList([])
229
230
231 if sys.version_info[0] >= 3:
232 def uni_hystring(s):
233 return HyString(eval(s))
234 else:
235 def uni_hystring(s):
236 return HyString(eval('u'+s))
237
238
239 @pg.production("string : STRING")
240 @set_boundaries
241 def t_string(p):
242 # remove trailing quote
243 s = p[0].value[:-1]
244 # get the header
245 header, s = s.split('"', 1)
246 # remove unicode marker
247 header = header.replace("u", "")
248 # build python string
249 s = header + '"""' + s + '"""'
250 return uni_hystring(s)
251
252
253 @pg.production("string : PARTIAL_STRING")
254 def t_partial_string(p):
255 # Any unterminated string requires more input
256 raise PrematureEndOfInput("Premature end of input")
257
258
259 @pg.production("identifier : IDENTIFIER")
260 @set_boundaries
261 def t_identifier(p):
262 obj = p[0].value
263
264 try:
265 return HyInteger(obj)
266 except ValueError:
267 pass
268
269 if '/' in obj:
270 try:
271 lhs, rhs = obj.split('/')
272 return HyExpression([HySymbol('fraction'), HyInteger(lhs),
273 HyInteger(rhs)])
274 except ValueError:
275 pass
276
277 try:
278 return HyFloat(obj)
279 except ValueError:
280 pass
281
282 if obj != 'j':
283 try:
284 return HyComplex(obj)
285 except ValueError:
286 pass
287
288 table = {
289 "true": "True",
290 "false": "False",
291 "nil": "None",
292 "null": "None",
293 }
294
295 if obj in table:
296 return HySymbol(table[obj])
297
298 if obj.startswith(":"):
299 return HyKeyword(obj)
300
301 def mangle(p):
302 if p.startswith("*") and p.endswith("*") and p not in ("*", "**"):
303 p = p[1:-1].upper()
304
305 if "-" in p and p != "-":
306 p = p.replace("-", "_")
307
308 if p.endswith("?") and p != "?":
309 p = "is_%s" % (p[:-1])
310
311 return p
312
313 obj = ".".join([mangle(part) for part in obj.split(".")])
314
315 return HySymbol(obj)
316
317
318 @pg.error
319 def error_handler(token):
320 tokentype = token.gettokentype()
321 if tokentype == '$end':
322 raise PrematureEndOfInput("Premature end of input")
323 else:
324 raise LexException(
325 "Ran into a %s where it wasn't expected." % tokentype,
326 token.source_pos.lineno, token.source_pos.colno)
327
328
329 parser = pg.build()
330
[end of hy/lex/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hy/lex/parser.py b/hy/lex/parser.py
--- a/hy/lex/parser.py
+++ b/hy/lex/parser.py
@@ -308,6 +308,9 @@
if p.endswith("?") and p != "?":
p = "is_%s" % (p[:-1])
+ if p.endswith("!") and p != "!":
+ p = "%s_bang" % (p[:-1])
+
return p
obj = ".".join([mangle(part) for part in obj.split(".")])
| {"golden_diff": "diff --git a/hy/lex/parser.py b/hy/lex/parser.py\n--- a/hy/lex/parser.py\n+++ b/hy/lex/parser.py\n@@ -308,6 +308,9 @@\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n \n+ if p.endswith(\"!\") and p != \"!\":\n+ p = \"%s_bang\" % (p[:-1])\n+\n return p\n \n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n", "issue": "Exclamation mark ! is not mangled\nI noticed that https://github.com/hylang/hyway/blob/master/conway.hy uses \"!\" in `set!` and `get!`, but Hy doesn't mangle \"!\" into something else. The variable is added to the module as-is. That means it'll be hard to reach it from normal Python code. Also, hy2py on Hy code with `set!` returns invalid syntax: `def set!(`.\n\n", "before_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.cons import HyCons\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.list import HyList\nfrom hy.models.set import HySet\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : 
$end\")\ndef real_main_empty(p):\n return []\n\n\ndef reject_spurious_dots(*items):\n \"Reject the spurious dots from items\"\n for list in items:\n for tok in list:\n if tok == \".\" and type(tok) == HySymbol:\n raise LexException(\"Malformed dotted list\",\n tok.start_line, tok.start_column)\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n cont = p[1]\n\n # Dotted lists are expressions of the form\n # (a b c . d)\n # that evaluate to nested cons cells of the form\n # (a . (b . (c . d)))\n if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == \".\":\n\n reject_spurious_dots(cont[:-2], cont[-1:])\n\n if len(cont) == 3:\n # Two-item dotted list: return the cons cell directly\n return HyCons(cont[0], cont[2])\n else:\n # Return a nested cons cell\n return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))\n\n # Warn preemptively on a malformed dotted list.\n # Only check for dots after the first item to allow for a potential\n # attribute accessor shorthand\n reject_spurious_dots(cont[1:])\n\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : set\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"term : HASHREADER term\")\n@set_quote_boundaries\ndef hash_reader(p):\n st = p[0].getstr()[1]\n str_object = HyString(st)\n expr = p[1]\n return HyExpression([HySymbol(\"dispatch_reader_macro\"), str_object, expr])\n\n\[email protected](\"set : HLCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_set(p):\n return HySet(p[1])\n\n\[email protected](\"set : HLCURLY RCURLY\")\n@set_boundaries\ndef empty_set(p):\n return HySet([])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", 
\"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"string : PARTIAL_STRING\")\ndef t_partial_string(p):\n # Any unterminated string requires more input\n raise PrematureEndOfInput(\"Premature end of input\")\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n if '/' in obj:\n try:\n lhs, rhs = obj.split('/')\n return HyExpression([HySymbol('fraction'), HyInteger(lhs),\n HyInteger(rhs)])\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"nil\": \"None\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n def mangle(p):\n if p.startswith(\"*\") and p.endswith(\"*\") and p not in (\"*\", \"**\"):\n p = p[1:-1].upper()\n\n if \"-\" in p and p != \"-\":\n p = p.replace(\"-\", \"_\")\n\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n\n return p\n\n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput(\"Premature end of input\")\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected.\" % tokentype,\n token.source_pos.lineno, token.source_pos.colno)\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py"}]} | 3,687 | 126 |
gh_patches_debug_3669 | rasdani/github-patches | git_diff | ocadotechnology__codeforlife-portal-783 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
portal API not working anymore
**Describe the bug**
When trying to access the following URLs:
/api/lastconnectedsince/YYYY/MM/DD
.../registered/YYYY/MM/DD
.../userspercountry/CC
On any of our servers, we get a 500 error.
Google console says:
> TemplateSyntaxError: 'url' is not a valid tag or filter in tag library 'future'
It happens even with the right to access them.
**To Reproduce**
If you have an authorised google account, go to https://www.codeforlife.education/api/lastconnectedsince/2018/07/20/ and you will see a 500 error
**Expected behaviour**
This page should display a number when your google account is authorised
**Desktop (please complete the following information):**
- OS:Ubuntu 16.04
- Browser:Chrome
**Additional context**
The urls.py file has been reworked for forward compatibility
</issue>
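The `'url' is not a valid tag or filter in tag library 'future'` message usually points at a template that still does `{% load url from future %}`, a compatibility library Django 1.9 no longer ships; pinning an older Django REST framework (3.1.x here, which predates Django 1.9 support) alongside Django 1.9 is a known way to hit it. A quick, illustrative check of what a server is actually running:

```python
# Illustrative check only -- run in a Django shell on the affected server.
import django
import rest_framework

print(django.get_version(), rest_framework.__version__)
# Django 1.9.x together with djangorestframework==3.1.3 matches the error above.
```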
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3 import versioneer
4 setup(name='codeforlife-portal',
5 cmdclass=versioneer.get_cmdclass(),
6 version=versioneer.get_version(),
7 packages=find_packages(),
8 include_package_data=True,
9 install_requires=[
10 'django==1.9.13',
11 'django-appconf==1.0.1',
12 'django-countries==3.4.1',
13 'djangorestframework==3.1.3',
14 'django-jquery==1.9.1',
15 'django-autoconfig==0.8.0',
16 'django-pipeline==1.5.4',
17 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11
18
19 'pyyaml==3.10',
20 'rapid-router >= 1.0.0.post.dev1',
21 'six==1.11.0',
22 'aimmo',
23 'docutils==0.12',
24 'reportlab==3.2.0',
25 'postcodes==0.1',
26 'django-formtools==1.0',
27 'django-two-factor-auth==1.5.0',
28 'urllib3==1.22',
29 'requests==2.18.4',
30
31 'django-classy-tags==0.6.1',
32 'django-treebeard==4.3',
33 'django-sekizai==0.10.0',
34
35 'django-online-status==0.1.0',
36
37 'Pillow==3.3.2',
38 'django-reversion==2.0.0',
39 'sqlparse',
40 'libsass',
41 'django-forms-bootstrap'
42 ],
43 tests_require=[
44 'django-setuptest==0.2.1',
45 'django-selenium-clean==0.3.0',
46 'responses==0.4.0',
47 'selenium==2.48.0',
48 ],
49 test_suite='setuptest.setuptest.SetupTestSuite',
50 zip_safe=False,
51 )
52
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
'django==1.9.13',
'django-appconf==1.0.1',
'django-countries==3.4.1',
- 'djangorestframework==3.1.3',
+ 'djangorestframework==3.2.3',
'django-jquery==1.9.1',
'django-autoconfig==0.8.0',
'django-pipeline==1.5.4',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n 'django==1.9.13',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n- 'djangorestframework==3.1.3',\n+ 'djangorestframework==3.2.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.8.0',\n 'django-pipeline==1.5.4',\n", "issue": "portal API not working anymore\n**Describe the bug**\r\nWhen trying to access the following URLs:\r\n/api/lastconnectedsince/YYYY/MM/DD\r\n.../registered/YYYY/MM/DD\r\n.../userspercountry/CC\r\nOn any of our servers, we get a 500 error.\r\nGoogle console says: \r\n\r\n> TemplateSyntaxError: 'url' is not a valid tag or filter in tag library 'future'\r\n\r\nIt happens even with the right to access them.\r\n\r\n**To Reproduce**\r\nIf you have an authorised google account, go to https://www.codeforlife.education/api/lastconnectedsince/2018/07/20/ and you will see a 500 error\r\n\r\n**Expected behaviour**\r\nThis page to display a number when your google account is autorised\r\n\r\n**Desktop (please complete the following information):**\r\n\r\n- OS:Ubuntu 16.04\r\n- Browser:Chrome\r\n \r\n**Additional context**\r\nThe urls.py file has been reworked for forward compatibility\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\nimport versioneer\nsetup(name='codeforlife-portal',\n cmdclass=versioneer.get_cmdclass(),\n version=versioneer.get_version(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django==1.9.13',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n 'djangorestframework==3.1.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.8.0',\n 'django-pipeline==1.5.4',\n 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11\n\n 'pyyaml==3.10',\n 'rapid-router >= 1.0.0.post.dev1',\n 'six==1.11.0',\n 'aimmo',\n 'docutils==0.12',\n 'reportlab==3.2.0',\n 'postcodes==0.1',\n 'django-formtools==1.0',\n 'django-two-factor-auth==1.5.0',\n 'urllib3==1.22',\n 'requests==2.18.4',\n\n 'django-classy-tags==0.6.1',\n 'django-treebeard==4.3',\n 'django-sekizai==0.10.0',\n\n 'django-online-status==0.1.0',\n\n 'Pillow==3.3.2',\n 'django-reversion==2.0.0',\n 'sqlparse',\n 'libsass',\n 'django-forms-bootstrap'\n ],\n tests_require=[\n 'django-setuptest==0.2.1',\n 'django-selenium-clean==0.3.0',\n 'responses==0.4.0',\n 'selenium==2.48.0',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n zip_safe=False,\n )\n", "path": "setup.py"}]} | 1,294 | 131 |
gh_patches_debug_29415 | rasdani/github-patches | git_diff | pre-commit__pre-commit-162 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnicodeEncodeError when writing to stdout in python2.6
```
$ pre-commit run fixmyjs
fixmyjs............................................................................................................................................................................................Failed
hookid: fixmyjs
Traceback (most recent call last):
File "virtualenv_run/bin/pre-commit", line 14, in <module>
sys.exit(main())
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/util.py", line 41, in wrapper
return func(argv)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/main.py", line 99, in main
return run(runner, args)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 144, in run
return _run_hook(runner, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 116, in _run_hook
return _run_single_hook(runner, repo, hook_id, args, write=write)
File "virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py", line 91, in _run_single_hook
write(output.strip() + '\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2713' in position 0: ordinal not in range(128)
```
</issue>
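The failure is a unicode string (the `u'\u2713'` check mark some hooks emit) being written to an ASCII-configured stdout under Python 2.6. The patch at the end of this entry routes output through a `sys_stdout_write_wrapper` helper in `pre_commit/output.py`; a rough, Python-2-oriented sketch of what such a wrapper can look like (not necessarily the project's exact implementation):

```python
import sys

def stdout_write_wrapper(s, stream=sys.stdout):
    """Write text, falling back to UTF-8 bytes when the stream only takes ASCII."""
    try:
        stream.write(s)
    except UnicodeEncodeError:
        # Python 2 file objects default to ASCII; hand them UTF-8 bytes instead.
        stream.write(s.encode('UTF-8'))
```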
<code>
[start of pre_commit/output.py]
1 from __future__ import unicode_literals
2
3 import subprocess
4
5 from pre_commit import color
6
7
8 # TODO: smell: import side-effects
9 COLS = int(
10 subprocess.Popen(
11 ['tput', 'cols'], stdout=subprocess.PIPE
12 ).communicate()[0] or
13 # Default in the case of no terminal
14 80
15 )
16
17
18 def get_hook_message(
19 start,
20 postfix='',
21 end_msg=None,
22 end_len=0,
23 end_color=None,
24 use_color=None,
25 cols=COLS,
26 ):
27 """Prints a message for running a hook.
28
29 This currently supports three approaches:
30
31 # Print `start` followed by dots, leaving 6 characters at the end
32 >>> print_hook_message('start', end_len=6)
33 start...............................................................
34
35 # Print `start` followed by dots with the end message colored if coloring
36 # is specified and a newline afterwards
37 >>> print_hook_message(
38 'start',
39 end_msg='end',
40 end_color=color.RED,
41 use_color=True,
42 )
43 start...................................................................end
44
45 # Print `start` followed by dots, followed by the `postfix` message
46 # uncolored, followed by the `end_msg` colored if specified and a newline
47 # afterwards
48 >>> print_hook_message(
49 'start',
50 postfix='postfix ',
51 end_msg='end',
52 end_color=color.RED,
53 use_color=True,
54 )
55 start...........................................................postfix end
56 """
57 if bool(end_msg) == bool(end_len):
58 raise ValueError('Expected one of (`end_msg`, `end_len`)')
59 if end_msg is not None and (end_color is None or use_color is None):
60 raise ValueError(
61 '`end_color` and `use_color` are required with `end_msg`'
62 )
63
64 if end_len:
65 return start + '.' * (cols - len(start) - end_len - 1)
66 else:
67 return '{0}{1}{2}{3}\n'.format(
68 start,
69 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
70 postfix,
71 color.format_color(end_msg, end_color, use_color),
72 )
73
[end of pre_commit/output.py]
[start of pre_commit/commands/run.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import logging
5 import os
6 import sys
7
8 from pre_commit import git
9 from pre_commit import color
10 from pre_commit.logging_handler import LoggingHandler
11 from pre_commit.output import get_hook_message
12 from pre_commit.staged_files_only import staged_files_only
13 from pre_commit.util import noop_context
14
15
16 logger = logging.getLogger('pre_commit')
17
18
19 def _get_skips(environ):
20 skips = environ.get('SKIP', '')
21 return set(skip.strip() for skip in skips.split(',') if skip.strip())
22
23
24 def _hook_msg_start(hook, verbose):
25 return '{0}{1}'.format(
26 '[{0}] '.format(hook['id']) if verbose else '',
27 hook['name'],
28 )
29
30
31 def _print_no_files_skipped(hook, write, args):
32 write(get_hook_message(
33 _hook_msg_start(hook, args.verbose),
34 postfix='(no files to check) ',
35 end_msg='Skipped',
36 end_color=color.TURQUOISE,
37 use_color=args.color,
38 ))
39
40
41 def _print_user_skipped(hook, write, args):
42 write(get_hook_message(
43 _hook_msg_start(hook, args.verbose),
44 end_msg='Skipped',
45 end_color=color.YELLOW,
46 use_color=args.color,
47 ))
48
49
50 def _run_single_hook(runner, repository, hook, args, write, skips=set()):
51 if args.all_files:
52 get_filenames = git.get_all_files_matching
53 elif git.is_in_merge_conflict():
54 get_filenames = git.get_conflicted_files_matching
55 else:
56 get_filenames = git.get_staged_files_matching
57
58 filenames = get_filenames(hook['files'], hook['exclude'])
59 if hook['id'] in skips:
60 _print_user_skipped(hook, write, args)
61 return 0
62 elif not filenames:
63 _print_no_files_skipped(hook, write, args)
64 return 0
65
66 # Print the hook and the dots first in case the hook takes hella long to
67 # run.
68 write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))
69 sys.stdout.flush()
70
71 retcode, stdout, stderr = repository.run_hook(hook, filenames)
72
73 if retcode != hook['expected_return_value']:
74 retcode = 1
75 print_color = color.RED
76 pass_fail = 'Failed'
77 else:
78 retcode = 0
79 print_color = color.GREEN
80 pass_fail = 'Passed'
81
82 write(color.format_color(pass_fail, print_color, args.color) + '\n')
83
84 if (stdout or stderr) and (retcode or args.verbose):
85 write('hookid: {0}\n'.format(hook['id']))
86 write('\n')
87 for output in (stdout, stderr):
88 if output.strip():
89 write(output.strip() + '\n')
90 write('\n')
91
92 return retcode
93
94
95 def _run_hooks(runner, args, write, environ):
96 """Actually run the hooks."""
97 retval = 0
98
99 skips = _get_skips(environ)
100
101 for repo in runner.repositories:
102 for _, hook in repo.hooks:
103 retval |= _run_single_hook(
104 runner, repo, hook, args, write, skips=skips,
105 )
106
107 return retval
108
109
110 def _run_hook(runner, args, write):
111 hook_id = args.hook
112 for repo in runner.repositories:
113 for hook_id_in_repo, hook in repo.hooks:
114 if hook_id == hook_id_in_repo:
115 return _run_single_hook(
116 runner, repo, hook, args, write=write,
117 )
118 else:
119 write('No hook with id `{0}`\n'.format(hook_id))
120 return 1
121
122
123 def _has_unmerged_paths(runner):
124 _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])
125 return bool(stdout.strip())
126
127
128 def run(runner, args, write=sys.stdout.write, environ=os.environ):
129 # Set up our logging handler
130 logger.addHandler(LoggingHandler(args.color, write=write))
131 logger.setLevel(logging.INFO)
132
133 # Check if we have unresolved merge conflict files and fail fast.
134 if _has_unmerged_paths(runner):
135 logger.error('Unmerged files. Resolve before committing.')
136 return 1
137
138 if args.no_stash or args.all_files:
139 ctx = noop_context()
140 else:
141 ctx = staged_files_only(runner.cmd_runner)
142
143 with ctx:
144 if args.hook:
145 return _run_hook(runner, args, write=write)
146 else:
147 return _run_hooks(runner, args, write=write, environ=environ)
148
[end of pre_commit/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -9,6 +9,7 @@
from pre_commit import color
from pre_commit.logging_handler import LoggingHandler
from pre_commit.output import get_hook_message
+from pre_commit.output import sys_stdout_write_wrapper
from pre_commit.staged_files_only import staged_files_only
from pre_commit.util import noop_context
@@ -125,7 +126,7 @@
return bool(stdout.strip())
-def run(runner, args, write=sys.stdout.write, environ=os.environ):
+def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):
# Set up our logging handler
logger.addHandler(LoggingHandler(args.color, write=write))
logger.setLevel(logging.INFO)
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,8 +1,10 @@
from __future__ import unicode_literals
import subprocess
+import sys
from pre_commit import color
+from pre_commit import five
# TODO: smell: import side-effects
@@ -70,3 +72,14 @@
postfix,
color.format_color(end_msg, end_color, use_color),
)
+
+
+def sys_stdout_write_wrapper(s, stream=sys.stdout):
+ """Python 2.6 chokes on unicode being passed to sys.stdout.write.
+
+ This is an adapter because PY2 is ok with bytes and PY3 requires text.
+ """
+ assert type(s) is five.text
+ if five.PY2: # pragma: no cover (PY2)
+ s = s.encode('UTF-8')
+ stream.write(s)
| {"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -9,6 +9,7 @@\n from pre_commit import color\n from pre_commit.logging_handler import LoggingHandler\n from pre_commit.output import get_hook_message\n+from pre_commit.output import sys_stdout_write_wrapper\n from pre_commit.staged_files_only import staged_files_only\n from pre_commit.util import noop_context\n \n@@ -125,7 +126,7 @@\n return bool(stdout.strip())\n \n \n-def run(runner, args, write=sys.stdout.write, environ=os.environ):\n+def run(runner, args, write=sys_stdout_write_wrapper, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\ndiff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -1,8 +1,10 @@\n from __future__ import unicode_literals\n \n import subprocess\n+import sys\n \n from pre_commit import color\n+from pre_commit import five\n \n \n # TODO: smell: import side-effects\n@@ -70,3 +72,14 @@\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n+\n+\n+def sys_stdout_write_wrapper(s, stream=sys.stdout):\n+ \"\"\"Python 2.6 chokes on unicode being passed to sys.stdout.write.\n+\n+ This is an adapter because PY2 is ok with bytes and PY3 requires text.\n+ \"\"\"\n+ assert type(s) is five.text\n+ if five.PY2: # pragma: no cover (PY2)\n+ s = s.encode('UTF-8')\n+ stream.write(s)\n", "issue": "UnicodeEncodeError when writing to stdout in python2.6 \n```\n$ pre-commit run fixmyjs\nfixmyjs............................................................................................................................................................................................Failed\nhookid: fixmyjs\n\nTraceback (most recent call last):\n File \"virtualenv_run/bin/pre-commit\", line 14, in <module>\n sys.exit(main())\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/util.py\", line 41, in wrapper\n return func(argv)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/main.py\", line 99, in main\n return run(runner, args)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 144, in run\n return _run_hook(runner, args, write=write)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 116, in _run_hook\n return _run_single_hook(runner, repo, hook_id, args, write=write)\n File \"virtualenv_run/lib/python2.6/site-packages/pre_commit/commands/run.py\", line 91, in _run_single_hook\n write(output.strip() + '\\n')\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\u2713' in position 0: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport subprocess\n\nfrom pre_commit import color\n\n\n# TODO: smell: import side-effects\nCOLS = int(\n subprocess.Popen(\n ['tput', 'cols'], stdout=subprocess.PIPE\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n)\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots 
with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import color\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return set(skip.strip() for skip in skips.split(',') if skip.strip())\n\n\ndef _hook_msg_start(hook, verbose):\n return '{0}{1}'.format(\n '[{0}] '.format(hook['id']) if verbose else '',\n hook['name'],\n )\n\n\ndef _print_no_files_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix='(no files to check) ',\n end_msg='Skipped',\n end_color=color.TURQUOISE,\n use_color=args.color,\n ))\n\n\ndef _print_user_skipped(hook, write, args):\n write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg='Skipped',\n end_color=color.YELLOW,\n use_color=args.color,\n ))\n\n\ndef _run_single_hook(runner, repository, hook, args, write, skips=set()):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n elif git.is_in_merge_conflict():\n get_filenames = git.get_conflicted_files_matching\n else:\n get_filenames = git.get_staged_files_matching\n\n filenames = get_filenames(hook['files'], hook['exclude'])\n if hook['id'] in skips:\n _print_user_skipped(hook, write, args)\n return 0\n elif not filenames:\n _print_no_files_skipped(hook, write, args)\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(get_hook_message(_hook_msg_start(hook, args.verbose), end_len=6))\n sys.stdout.flush()\n\n retcode, stdout, stderr = repository.run_hook(hook, filenames)\n\n if retcode != hook['expected_return_value']:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('hookid: {0}\\n'.format(hook['id']))\n write('\\n')\n for output in (stdout, stderr):\n if output.strip():\n write(output.strip() + '\\n')\n write('\\n')\n\n 
return retcode\n\n\ndef _run_hooks(runner, args, write, environ):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n\n skips = _get_skips(environ)\n\n for repo in runner.repositories:\n for _, hook in repo.hooks:\n retval |= _run_single_hook(\n runner, repo, hook, args, write, skips=skips,\n )\n\n return retval\n\n\ndef _run_hook(runner, args, write):\n hook_id = args.hook\n for repo in runner.repositories:\n for hook_id_in_repo, hook in repo.hooks:\n if hook_id == hook_id_in_repo:\n return _run_single_hook(\n runner, repo, hook, args, write=write,\n )\n else:\n write('No hook with id `{0}`\\n'.format(hook_id))\n return 1\n\n\ndef _has_unmerged_paths(runner):\n _, stdout, _ = runner.cmd_runner.run(['git', 'ls-files', '--unmerged'])\n return bool(stdout.strip())\n\n\ndef run(runner, args, write=sys.stdout.write, environ=os.environ):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color, write=write))\n logger.setLevel(logging.INFO)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths(runner):\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n\n if args.no_stash or args.all_files:\n ctx = noop_context()\n else:\n ctx = staged_files_only(runner.cmd_runner)\n\n with ctx:\n if args.hook:\n return _run_hook(runner, args, write=write)\n else:\n return _run_hooks(runner, args, write=write, environ=environ)\n", "path": "pre_commit/commands/run.py"}]} | 2,843 | 392 |
gh_patches_debug_27114 | rasdani/github-patches | git_diff | svthalia__concrexit-1126 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix "similar-code" issue in website/activemembers/admin.py
Similar blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/svthalia/concrexit/website/activemembers/admin.py#issue_5eceacbde96d31000100042c
</issue>
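For context, the two near-identical blocks CodeClimate flags are the `CommitteeAdmin` and `SocietyAdmin` classes in the file below, which repeat the same `list_display`, `list_filter`, `search_fields`, field layout, and `email()` helper. A minimal sketch of the usual way to resolve such a finding — hoisting the shared options into one base `ModelAdmin` — is shown here; the base-class name and the exact set of hoisted attributes are illustrative assumptions, not the final layout chosen by the project.
```python
# Sketch only: consolidate the duplicated ModelAdmin options into a base class.
# The class name and the attribute split are assumptions for illustration.
from django.contrib import admin


class MemberGroupAdminBase(admin.ModelAdmin):
    """Options shared by the committee and society admins."""

    list_display = ("name", "since", "until", "active", "email")
    list_filter = ("until", "active")
    search_fields = ("name", "description")
    filter_horizontal = ("permissions",)

    def email(self, instance):
        # Prefer the explicit contact address, fall back to the mailing list.
        if instance.contact_email:
            return instance.contact_email
        if instance.contact_mailinglist:
            return instance.contact_mailinglist.name + "@thalia.nu"
        return None


class CommitteeAdmin(MemberGroupAdminBase):
    pass


class SocietyAdmin(MemberGroupAdminBase):
    pass
```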
<code>
[start of website/activemembers/admin.py]
1 """Registers admin interfaces for the activemembers module"""
2 import csv
3 import datetime
4
5 from django import forms
6 from django.contrib import admin, messages
7 from django.db.models import Q
8 from django.http import HttpResponse
9 from django.utils import timezone
10 from django.utils.translation import gettext_lazy as _
11
12 from activemembers import models
13 from activemembers.forms import MemberGroupMembershipForm, MemberGroupForm
14 from utils.snippets import datetime_to_lectureyear
15 from utils.translation import TranslatedModelAdmin
16
17
18 class MemberGroupMembershipInlineFormSet(forms.BaseInlineFormSet):
19 """
20 Solely here for performance reasons.
21
22 Needed because the `__str__()` of `MemberGroupMembership` (which is
23 displayed above each inline form) uses the username, name of the member
24 and name of the group.
25 """
26
27 def __init__(self, *args, **kwargs):
28 super().__init__(*args, **kwargs)
29 self.queryset = self.queryset.select_related("member", "group").filter(
30 until=None
31 )
32
33
34 class MemberGroupMembershipInline(admin.StackedInline):
35 """Inline for group memberships"""
36
37 model = models.MemberGroupMembership
38 formset = MemberGroupMembershipInlineFormSet
39 can_delete = False
40 ordering = ("since",)
41 extra = 0
42 autocomplete_fields = ("member",)
43
44
45 @admin.register(models.Committee)
46 class CommitteeAdmin(TranslatedModelAdmin):
47 """Manage the committees"""
48
49 inlines = (MemberGroupMembershipInline,)
50 form = MemberGroupForm
51 list_display = ("name", "since", "until", "active", "email")
52 list_filter = (
53 "until",
54 "active",
55 )
56 search_fields = ("name", "description")
57 filter_horizontal = ("permissions",)
58
59 fields = (
60 "name",
61 "description",
62 "photo",
63 "permissions",
64 "since",
65 "until",
66 "contact_mailinglist",
67 "contact_email",
68 "active",
69 "display_members",
70 )
71
72 def email(self, instance):
73 if instance.contact_email:
74 return instance.contact_email
75 elif instance.contact_mailinglist:
76 return instance.contact_mailinglist.name + "@thalia.nu"
77 return None
78
79
80 @admin.register(models.Society)
81 class SocietyAdmin(TranslatedModelAdmin):
82 """Manage the societies"""
83
84 inlines = (MemberGroupMembershipInline,)
85 form = MemberGroupForm
86 list_display = ("name", "since", "until", "active", "email")
87 list_filter = (
88 "until",
89 "active",
90 )
91 search_fields = ("name", "description")
92 filter_horizontal = ("permissions",)
93
94 fields = (
95 "name",
96 "description",
97 "photo",
98 "permissions",
99 "since",
100 "until",
101 "contact_mailinglist",
102 "contact_email",
103 "active",
104 "display_members",
105 )
106
107 def email(self, instance):
108 if instance.contact_email:
109 return instance.contact_email
110 elif instance.contact_mailinglist:
111 return instance.contact_mailinglist.name + "@thalia.nu"
112 return None
113
114
115 @admin.register(models.Board)
116 class BoardAdmin(TranslatedModelAdmin):
117 """Manage the board"""
118
119 inlines = (MemberGroupMembershipInline,)
120 form = MemberGroupForm
121 exclude = ("is_board",)
122 filter_horizontal = ("permissions",)
123
124 fields = (
125 "name",
126 "description",
127 "photo",
128 "permissions",
129 "contact_mailinglist",
130 "contact_email",
131 "since",
132 "until",
133 "display_members",
134 )
135
136
137 class TypeFilter(admin.SimpleListFilter):
138 """Filter memberships on board-only"""
139
140 title = _("group memberships")
141 parameter_name = "group_type"
142
143 def lookups(self, request, model_admin):
144 return [
145 ("boards", _("Only boards")),
146 ("committees", _("Only committees")),
147 ("societies", _("Only societies")),
148 ]
149
150 def queryset(self, request, queryset):
151 if self.value() == "boards":
152 return queryset.exclude(group__board=None)
153 elif self.value() == "committees":
154 return queryset.exclude(group__committee=None)
155 elif self.value() == "societies":
156 return queryset.exclude(group__society=None)
157
158 return queryset
159
160
161 class LectureYearFilter(admin.SimpleListFilter):
162 """Filter the memberships on those started or ended in a lecture year"""
163
164 title = _("lecture year")
165 parameter_name = "lecture_year"
166
167 def lookups(self, request, model_admin):
168 current_year = datetime_to_lectureyear(timezone.now())
169 first_year = datetime_to_lectureyear(
170 models.MemberGroupMembership.objects.earliest("since").since
171 )
172
173 return [
174 (year, "{}-{}".format(year, year + 1))
175 for year in range(first_year, current_year + 1)
176 ]
177
178 def queryset(self, request, queryset):
179 if not self.value():
180 return queryset
181
182 year = int(self.value())
183 first_of_september = datetime.date(year=year, month=9, day=1)
184
185 return queryset.exclude(until__lt=first_of_september)
186
187
188 class ActiveMembershipsFilter(admin.SimpleListFilter):
189 """Filter the memberships by whether they are active or not"""
190
191 title = _("active memberships")
192 parameter_name = "active"
193
194 def lookups(self, request, model_name):
195 return (
196 ("active", _("Active")),
197 ("inactive", _("Inactive")),
198 )
199
200 def queryset(self, request, queryset):
201 now = timezone.now()
202
203 if self.value() == "active":
204 return queryset.filter(Q(until__isnull=True) | Q(until__gte=now))
205
206 if self.value() == "inactive":
207 return queryset.filter(until__lt=now)
208
209
210 @admin.register(models.MemberGroupMembership)
211 class MemberGroupMembershipAdmin(TranslatedModelAdmin):
212 """Manage the group memberships"""
213
214 form = MemberGroupMembershipForm
215 list_display = ("member", "group", "since", "until", "chair", "role")
216 list_filter = ("group", TypeFilter, LectureYearFilter, ActiveMembershipsFilter)
217 list_select_related = (
218 "member",
219 "group",
220 )
221 search_fields = ("member__first_name", "member__last_name", "member__email")
222 date_hierarchy = "since"
223 actions = ("export",)
224
225 def changelist_view(self, request, extra_context=None):
226 self.message_user(
227 request,
228 _(
229 "Do not edit existing memberships if the "
230 "chair of a group has changed, add a "
231 "new membership instead."
232 ),
233 messages.WARNING,
234 )
235 return super().changelist_view(request, extra_context)
236
237 def export(self, request, queryset):
238 response = HttpResponse(content_type="text/csv")
239 response["Content-Disposition"] = (
240 "attachment;" "filename=" '"group_memberships.csv"'
241 )
242 writer = csv.writer(response)
243 writer.writerow(
244 [
245 _("First name"),
246 _("Last name"),
247 _("Email"),
248 _("Group"),
249 _("Member since"),
250 _("Member until"),
251 _("Chair of the group"),
252 _("Role"),
253 ]
254 )
255
256 for membership in queryset:
257 writer.writerow(
258 [
259 membership.member.first_name,
260 membership.member.last_name,
261 membership.member.email,
262 membership.group,
263 membership.since,
264 membership.until,
265 membership.chair,
266 membership.role,
267 ]
268 )
269
270 return response
271
272 export.short_description = _("Export selected memberships")
273
274
275 @admin.register(models.Mentorship)
276 class MentorshipAdmin(admin.ModelAdmin):
277 """Manage the mentorships"""
278
279 autocomplete_fields = ("member",)
280 search_fields = ("member__first_name", "member__last_name")
281 list_filter = ("year",)
282
[end of website/activemembers/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/activemembers/admin.py b/website/activemembers/admin.py
--- a/website/activemembers/admin.py
+++ b/website/activemembers/admin.py
@@ -42,9 +42,8 @@
autocomplete_fields = ("member",)
[email protected](models.Committee)
-class CommitteeAdmin(TranslatedModelAdmin):
- """Manage the committees"""
+class MemberGroupAdmin(TranslatedModelAdmin):
+ """Manage the member groups"""
inlines = (MemberGroupMembershipInline,)
form = MemberGroupForm
@@ -77,39 +76,18 @@
return None
[email protected](models.Society)
-class SocietyAdmin(TranslatedModelAdmin):
- """Manage the societies"""
[email protected](models.Committee)
+class CommitteeAdmin(MemberGroupAdmin):
+ """Manage the committees"""
- inlines = (MemberGroupMembershipInline,)
- form = MemberGroupForm
- list_display = ("name", "since", "until", "active", "email")
- list_filter = (
- "until",
- "active",
- )
- search_fields = ("name", "description")
- filter_horizontal = ("permissions",)
+ pass
- fields = (
- "name",
- "description",
- "photo",
- "permissions",
- "since",
- "until",
- "contact_mailinglist",
- "contact_email",
- "active",
- "display_members",
- )
- def email(self, instance):
- if instance.contact_email:
- return instance.contact_email
- elif instance.contact_mailinglist:
- return instance.contact_mailinglist.name + "@thalia.nu"
- return None
[email protected](models.Society)
+class SocietyAdmin(MemberGroupAdmin):
+ """Manage the societies"""
+
+ pass
@admin.register(models.Board)
| {"golden_diff": "diff --git a/website/activemembers/admin.py b/website/activemembers/admin.py\n--- a/website/activemembers/admin.py\n+++ b/website/activemembers/admin.py\n@@ -42,9 +42,8 @@\n autocomplete_fields = (\"member\",)\n \n \[email protected](models.Committee)\n-class CommitteeAdmin(TranslatedModelAdmin):\n- \"\"\"Manage the committees\"\"\"\n+class MemberGroupAdmin(TranslatedModelAdmin):\n+ \"\"\"Manage the member groups\"\"\"\n \n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n@@ -77,39 +76,18 @@\n return None\n \n \[email protected](models.Society)\n-class SocietyAdmin(TranslatedModelAdmin):\n- \"\"\"Manage the societies\"\"\"\[email protected](models.Committee)\n+class CommitteeAdmin(MemberGroupAdmin):\n+ \"\"\"Manage the committees\"\"\"\n \n- inlines = (MemberGroupMembershipInline,)\n- form = MemberGroupForm\n- list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n- list_filter = (\n- \"until\",\n- \"active\",\n- )\n- search_fields = (\"name\", \"description\")\n- filter_horizontal = (\"permissions\",)\n+ pass\n \n- fields = (\n- \"name\",\n- \"description\",\n- \"photo\",\n- \"permissions\",\n- \"since\",\n- \"until\",\n- \"contact_mailinglist\",\n- \"contact_email\",\n- \"active\",\n- \"display_members\",\n- )\n \n- def email(self, instance):\n- if instance.contact_email:\n- return instance.contact_email\n- elif instance.contact_mailinglist:\n- return instance.contact_mailinglist.name + \"@thalia.nu\"\n- return None\[email protected](models.Society)\n+class SocietyAdmin(MemberGroupAdmin):\n+ \"\"\"Manage the societies\"\"\"\n+\n+ pass\n \n \n @admin.register(models.Board)\n", "issue": "Fix \"similar-code\" issue in website/activemembers/admin.py\nSimilar blocks of code found in 2 locations. Consider refactoring.\n\nhttps://codeclimate.com/github/svthalia/concrexit/website/activemembers/admin.py#issue_5eceacbde96d31000100042c\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the activemembers module\"\"\"\nimport csv\nimport datetime\n\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db.models import Q\nfrom django.http import HttpResponse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom activemembers import models\nfrom activemembers.forms import MemberGroupMembershipForm, MemberGroupForm\nfrom utils.snippets import datetime_to_lectureyear\nfrom utils.translation import TranslatedModelAdmin\n\n\nclass MemberGroupMembershipInlineFormSet(forms.BaseInlineFormSet):\n \"\"\"\n Solely here for performance reasons.\n\n Needed because the `__str__()` of `MemberGroupMembership` (which is\n displayed above each inline form) uses the username, name of the member\n and name of the group.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.queryset = self.queryset.select_related(\"member\", \"group\").filter(\n until=None\n )\n\n\nclass MemberGroupMembershipInline(admin.StackedInline):\n \"\"\"Inline for group memberships\"\"\"\n\n model = models.MemberGroupMembership\n formset = MemberGroupMembershipInlineFormSet\n can_delete = False\n ordering = (\"since\",)\n extra = 0\n autocomplete_fields = (\"member\",)\n\n\[email protected](models.Committee)\nclass CommitteeAdmin(TranslatedModelAdmin):\n \"\"\"Manage the committees\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n list_filter = (\n \"until\",\n 
\"active\",\n )\n search_fields = (\"name\", \"description\")\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"since\",\n \"until\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"active\",\n \"display_members\",\n )\n\n def email(self, instance):\n if instance.contact_email:\n return instance.contact_email\n elif instance.contact_mailinglist:\n return instance.contact_mailinglist.name + \"@thalia.nu\"\n return None\n\n\[email protected](models.Society)\nclass SocietyAdmin(TranslatedModelAdmin):\n \"\"\"Manage the societies\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n list_display = (\"name\", \"since\", \"until\", \"active\", \"email\")\n list_filter = (\n \"until\",\n \"active\",\n )\n search_fields = (\"name\", \"description\")\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"since\",\n \"until\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"active\",\n \"display_members\",\n )\n\n def email(self, instance):\n if instance.contact_email:\n return instance.contact_email\n elif instance.contact_mailinglist:\n return instance.contact_mailinglist.name + \"@thalia.nu\"\n return None\n\n\[email protected](models.Board)\nclass BoardAdmin(TranslatedModelAdmin):\n \"\"\"Manage the board\"\"\"\n\n inlines = (MemberGroupMembershipInline,)\n form = MemberGroupForm\n exclude = (\"is_board\",)\n filter_horizontal = (\"permissions\",)\n\n fields = (\n \"name\",\n \"description\",\n \"photo\",\n \"permissions\",\n \"contact_mailinglist\",\n \"contact_email\",\n \"since\",\n \"until\",\n \"display_members\",\n )\n\n\nclass TypeFilter(admin.SimpleListFilter):\n \"\"\"Filter memberships on board-only\"\"\"\n\n title = _(\"group memberships\")\n parameter_name = \"group_type\"\n\n def lookups(self, request, model_admin):\n return [\n (\"boards\", _(\"Only boards\")),\n (\"committees\", _(\"Only committees\")),\n (\"societies\", _(\"Only societies\")),\n ]\n\n def queryset(self, request, queryset):\n if self.value() == \"boards\":\n return queryset.exclude(group__board=None)\n elif self.value() == \"committees\":\n return queryset.exclude(group__committee=None)\n elif self.value() == \"societies\":\n return queryset.exclude(group__society=None)\n\n return queryset\n\n\nclass LectureYearFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships on those started or ended in a lecture year\"\"\"\n\n title = _(\"lecture year\")\n parameter_name = \"lecture_year\"\n\n def lookups(self, request, model_admin):\n current_year = datetime_to_lectureyear(timezone.now())\n first_year = datetime_to_lectureyear(\n models.MemberGroupMembership.objects.earliest(\"since\").since\n )\n\n return [\n (year, \"{}-{}\".format(year, year + 1))\n for year in range(first_year, current_year + 1)\n ]\n\n def queryset(self, request, queryset):\n if not self.value():\n return queryset\n\n year = int(self.value())\n first_of_september = datetime.date(year=year, month=9, day=1)\n\n return queryset.exclude(until__lt=first_of_september)\n\n\nclass ActiveMembershipsFilter(admin.SimpleListFilter):\n \"\"\"Filter the memberships by whether they are active or not\"\"\"\n\n title = _(\"active memberships\")\n parameter_name = \"active\"\n\n def lookups(self, request, model_name):\n return (\n (\"active\", _(\"Active\")),\n (\"inactive\", _(\"Inactive\")),\n )\n\n def queryset(self, request, queryset):\n now = timezone.now()\n\n if self.value() == 
\"active\":\n return queryset.filter(Q(until__isnull=True) | Q(until__gte=now))\n\n if self.value() == \"inactive\":\n return queryset.filter(until__lt=now)\n\n\[email protected](models.MemberGroupMembership)\nclass MemberGroupMembershipAdmin(TranslatedModelAdmin):\n \"\"\"Manage the group memberships\"\"\"\n\n form = MemberGroupMembershipForm\n list_display = (\"member\", \"group\", \"since\", \"until\", \"chair\", \"role\")\n list_filter = (\"group\", TypeFilter, LectureYearFilter, ActiveMembershipsFilter)\n list_select_related = (\n \"member\",\n \"group\",\n )\n search_fields = (\"member__first_name\", \"member__last_name\", \"member__email\")\n date_hierarchy = \"since\"\n actions = (\"export\",)\n\n def changelist_view(self, request, extra_context=None):\n self.message_user(\n request,\n _(\n \"Do not edit existing memberships if the \"\n \"chair of a group has changed, add a \"\n \"new membership instead.\"\n ),\n messages.WARNING,\n )\n return super().changelist_view(request, extra_context)\n\n def export(self, request, queryset):\n response = HttpResponse(content_type=\"text/csv\")\n response[\"Content-Disposition\"] = (\n \"attachment;\" \"filename=\" '\"group_memberships.csv\"'\n )\n writer = csv.writer(response)\n writer.writerow(\n [\n _(\"First name\"),\n _(\"Last name\"),\n _(\"Email\"),\n _(\"Group\"),\n _(\"Member since\"),\n _(\"Member until\"),\n _(\"Chair of the group\"),\n _(\"Role\"),\n ]\n )\n\n for membership in queryset:\n writer.writerow(\n [\n membership.member.first_name,\n membership.member.last_name,\n membership.member.email,\n membership.group,\n membership.since,\n membership.until,\n membership.chair,\n membership.role,\n ]\n )\n\n return response\n\n export.short_description = _(\"Export selected memberships\")\n\n\[email protected](models.Mentorship)\nclass MentorshipAdmin(admin.ModelAdmin):\n \"\"\"Manage the mentorships\"\"\"\n\n autocomplete_fields = (\"member\",)\n search_fields = (\"member__first_name\", \"member__last_name\")\n list_filter = (\"year\",)\n", "path": "website/activemembers/admin.py"}]} | 3,038 | 423 |
gh_patches_debug_31932 | rasdani/github-patches | git_diff | tensorflow__addons-2390 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot load SavedModel with RSquare metric
**System information**
- macOS 11.2.1
- TensorFlow 2.4.1, via pip install tensorflow
- TensorFlow-Addons 0.12.1, via pip install tensorflow_addons
- Python version: 3.7.7
- Is GPU used? no
**Describe the bug**
I have a saved Keras model.
If the model uses the RSquare metric, I am not able to load it back, but the exact same model defined without the RSquare metric loads without any issue.
**Code to reproduce the issue**
```
import tensorflow as tf
import tensorflow_addons as tfa
USE_R2 = True
model = tf.keras.models.Sequential(tf.keras.layers.Dense(1))
if USE_R2:
metrics = [tfa.metrics.RSquare(y_shape=(1,))]
else:
metrics = None
model.compile(loss='mse', metrics=metrics)
x = tf.constant( [[1, 2, 3, 4]] )
y = tf.constant( [[1]] )
model.fit(x,y)
model.save('./tmp')
model = tf.keras.models.load_model('./tmp')
```
**Other info / logs**
I get the following error:
> ValueError: Shapes () and (1,) are incompatible
</issue>
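For background, this is the standard custom-object serialization failure in Keras: `RSquare` takes extra constructor arguments (notably `y_shape`), and if they are not returned from `get_config()`, `load_model()` rebuilds the metric with the default `y_shape=()` and then cannot restore the saved `(1,)`-shaped state variables, producing exactly this shape mismatch. Below is a minimal, hypothetical sketch of the pattern a fix needs — serializing the constructor arguments; the class here is a placeholder, not the actual tfa implementation.
```python
# Minimal sketch of the usual Keras pattern for making a custom metric
# reloadable: return the constructor arguments from get_config() so that
# load_model() can rebuild the object with the same variable shapes.
# The class and attribute names are assumptions for illustration.
import tensorflow as tf


class MyShapedMetric(tf.keras.metrics.Metric):
    def __init__(self, name="my_metric", dtype=None, y_shape=(), **kwargs):
        super().__init__(name=name, dtype=dtype, **kwargs)
        self.y_shape = y_shape
        self.total = self.add_weight("total", shape=y_shape, initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        self.total.assign_add(tf.reduce_sum(tf.cast(y_true, self.dtype), axis=0))

    def result(self):
        return tf.reduce_mean(self.total)

    def get_config(self):
        # Without this, load_model() re-creates the metric with the default
        # y_shape=() and then fails to restore the (1,)-shaped variables.
        return {**super().get_config(), "y_shape": self.y_shape}
```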
<code>
[start of tensorflow_addons/metrics/r_square.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements R^2 scores."""
16 from typing import Tuple
17
18 import tensorflow as tf
19 from tensorflow.keras import backend as K
20 from tensorflow.keras.metrics import Metric
21 from tensorflow.python.ops import weights_broadcast_ops
22
23 from typeguard import typechecked
24 from tensorflow_addons.utils.types import AcceptableDTypes
25
26
27 VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
28
29
30 def _reduce_average(
31 input_tensor: tf.Tensor, axis=None, keepdims=False, weights=None
32 ) -> tf.Tensor:
33 """Computes the (weighted) mean of elements across dimensions of a tensor."""
34 if weights is None:
35 return tf.reduce_mean(input_tensor, axis=axis, keepdims=keepdims)
36
37 weighted_sum = tf.reduce_sum(weights * input_tensor, axis=axis, keepdims=keepdims)
38 sum_of_weights = tf.reduce_sum(weights, axis=axis, keepdims=keepdims)
39 average = weighted_sum / sum_of_weights
40 return average
41
42
43 @tf.keras.utils.register_keras_serializable(package="Addons")
44 class RSquare(Metric):
45 """Compute R^2 score.
46
47 This is also called the [coefficient of determination
48 ](https://en.wikipedia.org/wiki/Coefficient_of_determination).
49 It tells how close are data to the fitted regression line.
50
51 - Highest score can be 1.0 and it indicates that the predictors
52 perfectly accounts for variation in the target.
53 - Score 0.0 indicates that the predictors do not
54 account for variation in the target.
55 - It can also be negative if the model is worse.
56
57 The sample weighting for this metric implementation mimics the
58 behaviour of the [scikit-learn implementation
59 ](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)
60 of the same metric.
61
62 Args:
63 multioutput: `string`, the reduce method for scores.
64 Should be one of `["raw_values", "uniform_average", "variance_weighted"]`.
65 name: (Optional) string name of the metric instance.
66 dtype: (Optional) data type of the metric result.
67
68 Usage:
69
70 >>> y_true = np.array([1, 4, 3], dtype=np.float32)
71 >>> y_pred = np.array([2, 4, 4], dtype=np.float32)
72 >>> metric = tfa.metrics.r_square.RSquare()
73 >>> metric.update_state(y_true, y_pred)
74 >>> result = metric.result()
75 >>> result.numpy()
76 0.57142854
77 """
78
79 @typechecked
80 def __init__(
81 self,
82 name: str = "r_square",
83 dtype: AcceptableDTypes = None,
84 y_shape: Tuple[int, ...] = (),
85 multioutput: str = "uniform_average",
86 **kwargs,
87 ):
88 super().__init__(name=name, dtype=dtype, **kwargs)
89 self.y_shape = y_shape
90
91 if multioutput not in VALID_MULTIOUTPUT:
92 raise ValueError(
93 "The multioutput argument must be one of {}, but was: {}".format(
94 VALID_MULTIOUTPUT, multioutput
95 )
96 )
97 self.multioutput = multioutput
98 self.squared_sum = self.add_weight(
99 name="squared_sum", shape=y_shape, initializer="zeros", dtype=dtype
100 )
101 self.sum = self.add_weight(
102 name="sum", shape=y_shape, initializer="zeros", dtype=dtype
103 )
104 self.res = self.add_weight(
105 name="residual", shape=y_shape, initializer="zeros", dtype=dtype
106 )
107 self.count = self.add_weight(
108 name="count", shape=y_shape, initializer="zeros", dtype=dtype
109 )
110
111 def update_state(self, y_true, y_pred, sample_weight=None) -> None:
112 y_true = tf.cast(y_true, dtype=self._dtype)
113 y_pred = tf.cast(y_pred, dtype=self._dtype)
114 if sample_weight is None:
115 sample_weight = 1
116 sample_weight = tf.cast(sample_weight, dtype=self._dtype)
117 sample_weight = weights_broadcast_ops.broadcast_weights(
118 weights=sample_weight, values=y_true
119 )
120
121 weighted_y_true = y_true * sample_weight
122 self.sum.assign_add(tf.reduce_sum(weighted_y_true, axis=0))
123 self.squared_sum.assign_add(tf.reduce_sum(y_true * weighted_y_true, axis=0))
124 self.res.assign_add(
125 tf.reduce_sum((y_true - y_pred) ** 2 * sample_weight, axis=0)
126 )
127 self.count.assign_add(tf.reduce_sum(sample_weight, axis=0))
128
129 def result(self) -> tf.Tensor:
130 mean = self.sum / self.count
131 total = self.squared_sum - self.sum * mean
132 raw_scores = 1 - (self.res / total)
133 raw_scores = tf.where(tf.math.is_inf(raw_scores), 0.0, raw_scores)
134
135 if self.multioutput == "raw_values":
136 return raw_scores
137 if self.multioutput == "uniform_average":
138 return tf.reduce_mean(raw_scores)
139 if self.multioutput == "variance_weighted":
140 return _reduce_average(raw_scores, weights=total)
141 raise RuntimeError(
142 "The multioutput attribute must be one of {}, but was: {}".format(
143 VALID_MULTIOUTPUT, self.multioutput
144 )
145 )
146
147 def reset_states(self) -> None:
148 # The state of the metric will be reset at the start of each epoch.
149 K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])
150
[end of tensorflow_addons/metrics/r_square.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensorflow_addons/metrics/r_square.py b/tensorflow_addons/metrics/r_square.py
--- a/tensorflow_addons/metrics/r_square.py
+++ b/tensorflow_addons/metrics/r_square.py
@@ -24,7 +24,7 @@
from tensorflow_addons.utils.types import AcceptableDTypes
-VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
+_VALID_MULTIOUTPUT = {"raw_values", "uniform_average", "variance_weighted"}
def _reduce_average(
@@ -88,10 +88,10 @@
super().__init__(name=name, dtype=dtype, **kwargs)
self.y_shape = y_shape
- if multioutput not in VALID_MULTIOUTPUT:
+ if multioutput not in _VALID_MULTIOUTPUT:
raise ValueError(
"The multioutput argument must be one of {}, but was: {}".format(
- VALID_MULTIOUTPUT, multioutput
+ _VALID_MULTIOUTPUT, multioutput
)
)
self.multioutput = multioutput
@@ -138,12 +138,15 @@
return tf.reduce_mean(raw_scores)
if self.multioutput == "variance_weighted":
return _reduce_average(raw_scores, weights=total)
- raise RuntimeError(
- "The multioutput attribute must be one of {}, but was: {}".format(
- VALID_MULTIOUTPUT, self.multioutput
- )
- )
def reset_states(self) -> None:
# The state of the metric will be reset at the start of each epoch.
K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])
+
+ def get_config(self):
+ config = {
+ "y_shape": self.y_shape,
+ "multioutput": self.multioutput,
+ }
+ base_config = super().get_config()
+ return {**base_config, **config}
| {"golden_diff": "diff --git a/tensorflow_addons/metrics/r_square.py b/tensorflow_addons/metrics/r_square.py\n--- a/tensorflow_addons/metrics/r_square.py\n+++ b/tensorflow_addons/metrics/r_square.py\n@@ -24,7 +24,7 @@\n from tensorflow_addons.utils.types import AcceptableDTypes\n \n \n-VALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n+_VALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n \n \n def _reduce_average(\n@@ -88,10 +88,10 @@\n super().__init__(name=name, dtype=dtype, **kwargs)\n self.y_shape = y_shape\n \n- if multioutput not in VALID_MULTIOUTPUT:\n+ if multioutput not in _VALID_MULTIOUTPUT:\n raise ValueError(\n \"The multioutput argument must be one of {}, but was: {}\".format(\n- VALID_MULTIOUTPUT, multioutput\n+ _VALID_MULTIOUTPUT, multioutput\n )\n )\n self.multioutput = multioutput\n@@ -138,12 +138,15 @@\n return tf.reduce_mean(raw_scores)\n if self.multioutput == \"variance_weighted\":\n return _reduce_average(raw_scores, weights=total)\n- raise RuntimeError(\n- \"The multioutput attribute must be one of {}, but was: {}\".format(\n- VALID_MULTIOUTPUT, self.multioutput\n- )\n- )\n \n def reset_states(self) -> None:\n # The state of the metric will be reset at the start of each epoch.\n K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])\n+\n+ def get_config(self):\n+ config = {\n+ \"y_shape\": self.y_shape,\n+ \"multioutput\": self.multioutput,\n+ }\n+ base_config = super().get_config()\n+ return {**base_config, **config}\n", "issue": "Cannot load SavedModel with RSquare metric\n**System information**\r\n- macOS 11.2.1\r\n- TensorFlow 2.4.1 , via pip install tensorflow\r\n- TensorFlow-Addons 0.12.1 , via pip install tensorflow_addons\r\n- Python version: 3.7.7\r\n- Is GPU used? no\r\n\r\n**Describe the bug**\r\n\r\nI have a saved keras model. \r\nIf the model uses the RSquare metric, I am not able to load it back. \r\nBut with the exact same model defined without the RSquare metric, I do not have any issue.\r\n\r\n**Code to reproduce the issue**\r\n\r\n```\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\nUSE_R2 = True\r\n\r\nmodel = tf.keras.models.Sequential(tf.keras.layers.Dense(1))\r\nif USE_R2:\r\n metrics = [tfa.metrics.RSquare(y_shape=(1,))]\r\nelse:\r\n metrics = None\r\nmodel.compile(loss='mse', metrics=metrics)\r\n\r\nx = tf.constant( [[1, 2, 3, 4]] )\r\ny = tf.constant( [[1]] )\r\n\r\nmodel.fit(x,y)\r\n\r\nmodel.save('./tmp')\r\nmodel = tf.keras.models.load_model('./tmp')\r\n```\r\n\r\n**Other info / logs**\r\n\r\nI get the following error:\r\n\r\n> ValueError: Shapes () and (1,) are incompatible\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements R^2 scores.\"\"\"\nfrom typing import Tuple\n\nimport tensorflow as tf\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.metrics import Metric\nfrom tensorflow.python.ops import weights_broadcast_ops\n\nfrom typeguard import typechecked\nfrom tensorflow_addons.utils.types import AcceptableDTypes\n\n\nVALID_MULTIOUTPUT = {\"raw_values\", \"uniform_average\", \"variance_weighted\"}\n\n\ndef _reduce_average(\n input_tensor: tf.Tensor, axis=None, keepdims=False, weights=None\n) -> tf.Tensor:\n \"\"\"Computes the (weighted) mean of elements across dimensions of a tensor.\"\"\"\n if weights is None:\n return tf.reduce_mean(input_tensor, axis=axis, keepdims=keepdims)\n\n weighted_sum = tf.reduce_sum(weights * input_tensor, axis=axis, keepdims=keepdims)\n sum_of_weights = tf.reduce_sum(weights, axis=axis, keepdims=keepdims)\n average = weighted_sum / sum_of_weights\n return average\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass RSquare(Metric):\n \"\"\"Compute R^2 score.\n\n This is also called the [coefficient of determination\n ](https://en.wikipedia.org/wiki/Coefficient_of_determination).\n It tells how close are data to the fitted regression line.\n\n - Highest score can be 1.0 and it indicates that the predictors\n perfectly accounts for variation in the target.\n - Score 0.0 indicates that the predictors do not\n account for variation in the target.\n - It can also be negative if the model is worse.\n\n The sample weighting for this metric implementation mimics the\n behaviour of the [scikit-learn implementation\n ](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)\n of the same metric.\n\n Args:\n multioutput: `string`, the reduce method for scores.\n Should be one of `[\"raw_values\", \"uniform_average\", \"variance_weighted\"]`.\n name: (Optional) string name of the metric instance.\n dtype: (Optional) data type of the metric result.\n\n Usage:\n\n >>> y_true = np.array([1, 4, 3], dtype=np.float32)\n >>> y_pred = np.array([2, 4, 4], dtype=np.float32)\n >>> metric = tfa.metrics.r_square.RSquare()\n >>> metric.update_state(y_true, y_pred)\n >>> result = metric.result()\n >>> result.numpy()\n 0.57142854\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n name: str = \"r_square\",\n dtype: AcceptableDTypes = None,\n y_shape: Tuple[int, ...] 
= (),\n multioutput: str = \"uniform_average\",\n **kwargs,\n ):\n super().__init__(name=name, dtype=dtype, **kwargs)\n self.y_shape = y_shape\n\n if multioutput not in VALID_MULTIOUTPUT:\n raise ValueError(\n \"The multioutput argument must be one of {}, but was: {}\".format(\n VALID_MULTIOUTPUT, multioutput\n )\n )\n self.multioutput = multioutput\n self.squared_sum = self.add_weight(\n name=\"squared_sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.sum = self.add_weight(\n name=\"sum\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.res = self.add_weight(\n name=\"residual\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n self.count = self.add_weight(\n name=\"count\", shape=y_shape, initializer=\"zeros\", dtype=dtype\n )\n\n def update_state(self, y_true, y_pred, sample_weight=None) -> None:\n y_true = tf.cast(y_true, dtype=self._dtype)\n y_pred = tf.cast(y_pred, dtype=self._dtype)\n if sample_weight is None:\n sample_weight = 1\n sample_weight = tf.cast(sample_weight, dtype=self._dtype)\n sample_weight = weights_broadcast_ops.broadcast_weights(\n weights=sample_weight, values=y_true\n )\n\n weighted_y_true = y_true * sample_weight\n self.sum.assign_add(tf.reduce_sum(weighted_y_true, axis=0))\n self.squared_sum.assign_add(tf.reduce_sum(y_true * weighted_y_true, axis=0))\n self.res.assign_add(\n tf.reduce_sum((y_true - y_pred) ** 2 * sample_weight, axis=0)\n )\n self.count.assign_add(tf.reduce_sum(sample_weight, axis=0))\n\n def result(self) -> tf.Tensor:\n mean = self.sum / self.count\n total = self.squared_sum - self.sum * mean\n raw_scores = 1 - (self.res / total)\n raw_scores = tf.where(tf.math.is_inf(raw_scores), 0.0, raw_scores)\n\n if self.multioutput == \"raw_values\":\n return raw_scores\n if self.multioutput == \"uniform_average\":\n return tf.reduce_mean(raw_scores)\n if self.multioutput == \"variance_weighted\":\n return _reduce_average(raw_scores, weights=total)\n raise RuntimeError(\n \"The multioutput attribute must be one of {}, but was: {}\".format(\n VALID_MULTIOUTPUT, self.multioutput\n )\n )\n\n def reset_states(self) -> None:\n # The state of the metric will be reset at the start of each epoch.\n K.batch_set_value([(v, tf.zeros_like(v)) for v in self.variables])\n", "path": "tensorflow_addons/metrics/r_square.py"}]} | 2,509 | 423 |
gh_patches_debug_14532 | rasdani/github-patches | git_diff | lutris__lutris-2674 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Problem with user directories when sandboxing is disabled
Hi, I have a strange problem with a recent version. I have disabled wineprefix sandboxing, so my user directories were configured as symlinks. But in the recent version, programs (or even the Wine configuration tool) do not start, and I get this error:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 676, in run_winecfg
self.prelaunch()
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 778, in prelaunch
self.sandbox(prefix_manager)
File "/usr/lib/python3.7/site-packages/lutris/runners/wine.py", line 926, in sandbox
wine_prefix.desktop_integration(restore=True)
File "/usr/lib/python3.7/site-packages/lutris/util/wine/prefix.py", line 132, in desktop_integration
os.rename(old_path, path)
NotADirectoryError: [Errno 20] není adresářem (not a directory): '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha.winecfg' -> '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha'
which is true, because "Plocha" is a symlink to the proper Desktop directory. When I delete "Plocha.winecfg", the same error moves to the next user directory, "Mé dokumenty" (Documents). When I delete all ".winecfg" folders, all symlinks to existing user directories are replaced by empty directories. In other words, I am not able to run programs linked to the standard user directories even though sandboxing is disabled.
</issue>
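For background, the failing call is the restore branch of `desktop_integration()` in the file below: with sandboxing disabled, each user folder (e.g. "Plocha") is a symlink, and `os.rename()` refuses to move the saved `*.winecfg` directory onto a destination that currently exists as a symlink, which raises exactly this `NotADirectoryError`. The snippet below is a small, hypothetical stand-alone reproduction using throwaway temporary paths; the guard sketched in the trailing comment only illustrates the kind of symlink check a fix needs and is not the exact Lutris patch.
```python
# Hypothetical stand-alone reproduction (throwaway temp paths, not Lutris code):
# renaming a directory onto a path that currently exists as a symlink raises
# NotADirectoryError, which is what the ".winecfg" restore step runs into.
import os
import tempfile

base = tempfile.mkdtemp()
real_desktop = os.path.join(base, "real_desktop")  # the user's actual Desktop
backup = os.path.join(base, "Plocha.winecfg")      # folder left behind by the sandbox code
link = os.path.join(base, "Plocha")                # symlink created with sandboxing disabled

os.mkdir(real_desktop)
os.mkdir(backup)
os.symlink(real_desktop, link)

try:
    os.rename(backup, link)  # mirrors os.rename(old_path, path) in desktop_integration()
except NotADirectoryError as exc:
    print("reproduced:", exc)

# A fix needs to skip folders that are already symlinks when restoring,
# roughly along the lines of:
#     if restore and os.path.islink(path):
#         continue
```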
<code>
[start of lutris/util/wine/prefix.py]
1 """Wine prefix management"""
2 import os
3 from lutris.util.wine.registry import WineRegistry
4 from lutris.util.log import logger
5 from lutris.util import joypad, system, xdgshortcuts
6 from lutris.util.display import DISPLAY_MANAGER
7
8 DESKTOP_KEYS = ["Desktop", "Personal", "My Music", "My Videos", "My Pictures"]
9 DEFAULT_DESKTOP_FOLDERS = ["Desktop", "My Documents", "My Music", "My Videos", "My Pictures"]
10 DESKTOP_XDG = ["DESKTOP", "DOCUMENTS", "MUSIC", "VIDEOS", "PICTURES"]
11
12
13 class WinePrefixManager:
14 """Class to allow modification of Wine prefixes without the use of Wine"""
15
16 hkcu_prefix = "HKEY_CURRENT_USER"
17
18 def __init__(self, path):
19 if not path:
20 logger.warning("No path specified for Wine prefix")
21 self.path = path
22
23 def setup_defaults(self):
24 """Sets the defaults for newly created prefixes"""
25 self.override_dll("winemenubuilder.exe", "")
26 try:
27 self.desktop_integration()
28 except OSError as ex:
29 logger.error(
30 "Failed to setup desktop integration, the prefix may not be valid."
31 )
32 logger.exception(ex)
33
34 def get_registry_path(self, key):
35 """Matches registry keys to a registry file
36
37 Currently, only HKEY_CURRENT_USER keys are supported.
38 """
39 if key.startswith(self.hkcu_prefix):
40 return os.path.join(self.path, "user.reg")
41 raise ValueError("Unsupported key '{}'".format(key))
42
43 def get_key_path(self, key):
44 if key.startswith(self.hkcu_prefix):
45 return key[len(self.hkcu_prefix) + 1:]
46 raise ValueError(
47 "The key {} is currently not supported by WinePrefixManager".format(key)
48 )
49
50 def get_registry_key(self, key, subkey):
51 registry = WineRegistry(self.get_registry_path(key))
52 return registry.query(self.get_key_path(key), subkey)
53
54 def set_registry_key(self, key, subkey, value):
55 registry = WineRegistry(self.get_registry_path(key))
56 registry.set_value(self.get_key_path(key), subkey, value)
57 registry.save()
58
59 def clear_registry_key(self, key):
60 registry = WineRegistry(self.get_registry_path(key))
61 registry.clear_key(self.get_key_path(key))
62 registry.save()
63
64 def clear_registry_subkeys(self, key, subkeys):
65 registry = WineRegistry(self.get_registry_path(key))
66 registry.clear_subkeys(self.get_key_path(key), subkeys)
67 registry.save()
68
69 def override_dll(self, dll, mode):
70 key = self.hkcu_prefix + "/Software/Wine/DllOverrides"
71 if mode.startswith("dis"):
72 mode = ""
73 if mode not in ("builtin", "native", "builtin,native", "native,builtin", ""):
74 logger.error("DLL override '%s' mode is not valid", mode)
75 return
76 self.set_registry_key(key, dll, mode)
77
78 def get_desktop_folders(self):
79 """Return the list of desktop folder names loaded from the Windows registry"""
80 desktop_folders = []
81 for key in DESKTOP_KEYS:
82 folder = self.get_registry_key(
83 self.hkcu_prefix
84 + "/Software/Microsoft/Windows/CurrentVersion/Explorer/Shell Folders",
85 key,
86 )
87 if not folder:
88 logger.warning("Couldn't load shell folder name for %s", key)
89 continue
90 desktop_folders.append(folder[folder.rfind("\\") + 1:])
91 return desktop_folders or DEFAULT_DESKTOP_FOLDERS
92
93 def desktop_integration(self, desktop_dir=None, restore=False):
94 """Overwrite desktop integration"""
95 user = os.getenv("USER")
96 user_dir = os.path.join(self.path, "drive_c/users/", user)
97 desktop_folders = self.get_desktop_folders()
98
99 if desktop_dir:
100 desktop_dir = os.path.expanduser(desktop_dir)
101 else:
102 desktop_dir = user_dir
103
104 if system.path_exists(user_dir):
105 # Replace or restore desktop integration symlinks
106 for i, item in enumerate(desktop_folders):
107 path = os.path.join(user_dir, item)
108 old_path = path + ".winecfg"
109
110 if os.path.islink(path):
111 if not restore:
112 os.unlink(path)
113 elif os.path.isdir(path):
114 try:
115 os.rmdir(path)
116 # We can't delete nonempty dir, so we rename as wine do.
117 except OSError:
118 os.rename(path, old_path)
119
120 if restore and not os.path.isdir(path):
121 os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)
122 # We don't need all the others process of the loop
123 continue
124
125 if desktop_dir != user_dir:
126 try:
127 src_path = os.path.join(desktop_dir, item)
128 except TypeError:
129 # There is supposedly a None value in there
130 # The current code shouldn't allow that
131 # Just raise a exception with the values
132 raise RuntimeError("Missing value desktop_dir=%s or item=%s"
133 % (desktop_dir, item))
134
135 os.makedirs(src_path, exist_ok=True)
136 os.symlink(src_path, path)
137 else:
138 # We use first the renamed dir, otherwise we make it.
139 if os.path.isdir(old_path):
140 os.rename(old_path, path)
141 else:
142 os.makedirs(path, exist_ok=True)
143
144 # Security: Remove other symlinks.
145 for item in os.listdir(user_dir):
146 path = os.path.join(user_dir, item)
147 if item not in DEFAULT_DESKTOP_FOLDERS and os.path.islink(path):
148 os.unlink(path)
149 os.makedirs(path)
150
151 def set_crash_dialogs(self, enabled):
152 """Enable or diable Wine crash dialogs"""
153 self.set_registry_key(
154 self.hkcu_prefix + "/Software/Wine/WineDbg",
155 "ShowCrashDialog",
156 1 if enabled else 0,
157 )
158
159 def set_virtual_desktop(self, enabled):
160 """Enable or disable wine virtual desktop.
161 The Lutris virtual desktop is refered to as 'WineDesktop', in Wine the
162 virtual desktop name is 'default'.
163 """
164 path = self.hkcu_prefix + "/Software/Wine/Explorer"
165 if enabled:
166 self.set_registry_key(path, "Desktop", "WineDesktop")
167 default_resolution = "x".join(DISPLAY_MANAGER.get_current_resolution())
168 logger.debug(
169 "Enabling wine virtual desktop with default resolution of %s",
170 default_resolution,
171 )
172 self.set_registry_key(
173 self.hkcu_prefix + "/Software/Wine/Explorer/Desktops",
174 "WineDesktop",
175 default_resolution,
176 )
177 else:
178 self.clear_registry_key(path)
179
180 def set_desktop_size(self, desktop_size):
181 """Sets the desktop size if one is given but do not reset the key if
182 one isn't.
183 """
184 path = self.hkcu_prefix + "/Software/Wine/Explorer/Desktops"
185 if desktop_size:
186 self.set_registry_key(path, "WineDesktop", desktop_size)
187
188 def use_xvid_mode(self, enabled):
189 """Set this to "Y" to allow wine switch the resolution using XVidMode extension."""
190 self.set_registry_key(
191 self.hkcu_prefix + "/Software/Wine/X11 Driver",
192 "UseXVidMode",
193 "Y" if enabled else "N",
194 )
195
196 def configure_joypads(self):
197 joypads = joypad.get_joypads()
198 key = self.hkcu_prefix + "/Software/Wine/DirectInput/Joysticks"
199 self.clear_registry_key(key)
200 for device, joypad_name in joypads:
201 if "event" in device:
202 disabled_joypad = "{} (js)".format(joypad_name)
203 else:
204 disabled_joypad = "{} (event)".format(joypad_name)
205 self.set_registry_key(key, disabled_joypad, "disabled")
206
[end of lutris/util/wine/prefix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/util/wine/prefix.py b/lutris/util/wine/prefix.py
--- a/lutris/util/wine/prefix.py
+++ b/lutris/util/wine/prefix.py
@@ -117,7 +117,11 @@
except OSError:
os.rename(path, old_path)
- if restore and not os.path.isdir(path):
+ # if we want to create a symlink and one is already there, just skip to the next item.
+ # this also makes sure the elif doesn't find a dir (isdir only looks at the target of the symlink).
+ if restore and os.path.islink(path):
+ continue
+ elif restore and not os.path.isdir(path):
os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)
# We don't need all the others process of the loop
continue
| {"golden_diff": "diff --git a/lutris/util/wine/prefix.py b/lutris/util/wine/prefix.py\n--- a/lutris/util/wine/prefix.py\n+++ b/lutris/util/wine/prefix.py\n@@ -117,7 +117,11 @@\n except OSError:\n os.rename(path, old_path)\n \n- if restore and not os.path.isdir(path):\n+ # if we want to create a symlink and one is already there, just skip to the next item.\n+ # this also makes sure the elif doesn't find a dir (isdir only looks at the target of the symlink).\n+ if restore and os.path.islink(path):\n+ continue\n+ elif restore and not os.path.isdir(path):\n os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)\n # We don't need all the others process of the loop\n continue\n", "issue": "Problem with user directories when sandboxing is disabled\nHi, i have strange problem in recent version. I have disabled wineprefix sandboxing, so my user directories were configured as symlinks. But in recent version programs (or even wine config) do not start with error:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 676, in run_winecfg\r\n self.prelaunch()\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 778, in prelaunch\r\n self.sandbox(prefix_manager)\r\n File \"/usr/lib/python3.7/site-packages/lutris/runners/wine.py\", line 926, in sandbox\r\n wine_prefix.desktop_integration(restore=True)\r\n File \"/usr/lib/python3.7/site-packages/lutris/util/wine/prefix.py\", line 132, in desktop_integration\r\n os.rename(old_path, path)\r\nNotADirectoryError: [Errno 20] nen\u00ed adres\u00e1\u0159em: '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha.winecfg' -> '/home/petr/.local/share/lutris/runners/winegames/drive_c/users/petr/Plocha'\r\n\r\nwhich is true, because \"Plocha\" is symlink to proper Desktop directory. When i delete \"Plocha.winecfg\", same error moves to next user directory \"M\u00e9 dokumenty\" (Documents). When i delete all \"winecfg\" files, all symlinks to existing user directories are replaced by empty directories. I.e. 
i am not able to run programs linked to standard user directories despite sandboxing is disabled.\n", "before_files": [{"content": "\"\"\"Wine prefix management\"\"\"\nimport os\nfrom lutris.util.wine.registry import WineRegistry\nfrom lutris.util.log import logger\nfrom lutris.util import joypad, system, xdgshortcuts\nfrom lutris.util.display import DISPLAY_MANAGER\n\nDESKTOP_KEYS = [\"Desktop\", \"Personal\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDEFAULT_DESKTOP_FOLDERS = [\"Desktop\", \"My Documents\", \"My Music\", \"My Videos\", \"My Pictures\"]\nDESKTOP_XDG = [\"DESKTOP\", \"DOCUMENTS\", \"MUSIC\", \"VIDEOS\", \"PICTURES\"]\n\n\nclass WinePrefixManager:\n \"\"\"Class to allow modification of Wine prefixes without the use of Wine\"\"\"\n\n hkcu_prefix = \"HKEY_CURRENT_USER\"\n\n def __init__(self, path):\n if not path:\n logger.warning(\"No path specified for Wine prefix\")\n self.path = path\n\n def setup_defaults(self):\n \"\"\"Sets the defaults for newly created prefixes\"\"\"\n self.override_dll(\"winemenubuilder.exe\", \"\")\n try:\n self.desktop_integration()\n except OSError as ex:\n logger.error(\n \"Failed to setup desktop integration, the prefix may not be valid.\"\n )\n logger.exception(ex)\n\n def get_registry_path(self, key):\n \"\"\"Matches registry keys to a registry file\n\n Currently, only HKEY_CURRENT_USER keys are supported.\n \"\"\"\n if key.startswith(self.hkcu_prefix):\n return os.path.join(self.path, \"user.reg\")\n raise ValueError(\"Unsupported key '{}'\".format(key))\n\n def get_key_path(self, key):\n if key.startswith(self.hkcu_prefix):\n return key[len(self.hkcu_prefix) + 1:]\n raise ValueError(\n \"The key {} is currently not supported by WinePrefixManager\".format(key)\n )\n\n def get_registry_key(self, key, subkey):\n registry = WineRegistry(self.get_registry_path(key))\n return registry.query(self.get_key_path(key), subkey)\n\n def set_registry_key(self, key, subkey, value):\n registry = WineRegistry(self.get_registry_path(key))\n registry.set_value(self.get_key_path(key), subkey, value)\n registry.save()\n\n def clear_registry_key(self, key):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_key(self.get_key_path(key))\n registry.save()\n\n def clear_registry_subkeys(self, key, subkeys):\n registry = WineRegistry(self.get_registry_path(key))\n registry.clear_subkeys(self.get_key_path(key), subkeys)\n registry.save()\n\n def override_dll(self, dll, mode):\n key = self.hkcu_prefix + \"/Software/Wine/DllOverrides\"\n if mode.startswith(\"dis\"):\n mode = \"\"\n if mode not in (\"builtin\", \"native\", \"builtin,native\", \"native,builtin\", \"\"):\n logger.error(\"DLL override '%s' mode is not valid\", mode)\n return\n self.set_registry_key(key, dll, mode)\n\n def get_desktop_folders(self):\n \"\"\"Return the list of desktop folder names loaded from the Windows registry\"\"\"\n desktop_folders = []\n for key in DESKTOP_KEYS:\n folder = self.get_registry_key(\n self.hkcu_prefix\n + \"/Software/Microsoft/Windows/CurrentVersion/Explorer/Shell Folders\",\n key,\n )\n if not folder:\n logger.warning(\"Couldn't load shell folder name for %s\", key)\n continue\n desktop_folders.append(folder[folder.rfind(\"\\\\\") + 1:])\n return desktop_folders or DEFAULT_DESKTOP_FOLDERS\n\n def desktop_integration(self, desktop_dir=None, restore=False):\n \"\"\"Overwrite desktop integration\"\"\"\n user = os.getenv(\"USER\")\n user_dir = os.path.join(self.path, \"drive_c/users/\", user)\n desktop_folders = self.get_desktop_folders()\n\n if 
desktop_dir:\n desktop_dir = os.path.expanduser(desktop_dir)\n else:\n desktop_dir = user_dir\n\n if system.path_exists(user_dir):\n # Replace or restore desktop integration symlinks\n for i, item in enumerate(desktop_folders):\n path = os.path.join(user_dir, item)\n old_path = path + \".winecfg\"\n\n if os.path.islink(path):\n if not restore:\n os.unlink(path)\n elif os.path.isdir(path):\n try:\n os.rmdir(path)\n # We can't delete nonempty dir, so we rename as wine do.\n except OSError:\n os.rename(path, old_path)\n\n if restore and not os.path.isdir(path):\n os.symlink(xdgshortcuts.get_xdg_entry(DESKTOP_XDG[i]), path)\n # We don't need all the others process of the loop\n continue\n\n if desktop_dir != user_dir:\n try:\n src_path = os.path.join(desktop_dir, item)\n except TypeError:\n # There is supposedly a None value in there\n # The current code shouldn't allow that\n # Just raise a exception with the values\n raise RuntimeError(\"Missing value desktop_dir=%s or item=%s\"\n % (desktop_dir, item))\n\n os.makedirs(src_path, exist_ok=True)\n os.symlink(src_path, path)\n else:\n # We use first the renamed dir, otherwise we make it.\n if os.path.isdir(old_path):\n os.rename(old_path, path)\n else:\n os.makedirs(path, exist_ok=True)\n\n # Security: Remove other symlinks.\n for item in os.listdir(user_dir):\n path = os.path.join(user_dir, item)\n if item not in DEFAULT_DESKTOP_FOLDERS and os.path.islink(path):\n os.unlink(path)\n os.makedirs(path)\n\n def set_crash_dialogs(self, enabled):\n \"\"\"Enable or diable Wine crash dialogs\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/WineDbg\",\n \"ShowCrashDialog\",\n 1 if enabled else 0,\n )\n\n def set_virtual_desktop(self, enabled):\n \"\"\"Enable or disable wine virtual desktop.\n The Lutris virtual desktop is refered to as 'WineDesktop', in Wine the\n virtual desktop name is 'default'.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer\"\n if enabled:\n self.set_registry_key(path, \"Desktop\", \"WineDesktop\")\n default_resolution = \"x\".join(DISPLAY_MANAGER.get_current_resolution())\n logger.debug(\n \"Enabling wine virtual desktop with default resolution of %s\",\n default_resolution,\n )\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\",\n \"WineDesktop\",\n default_resolution,\n )\n else:\n self.clear_registry_key(path)\n\n def set_desktop_size(self, desktop_size):\n \"\"\"Sets the desktop size if one is given but do not reset the key if\n one isn't.\n \"\"\"\n path = self.hkcu_prefix + \"/Software/Wine/Explorer/Desktops\"\n if desktop_size:\n self.set_registry_key(path, \"WineDesktop\", desktop_size)\n\n def use_xvid_mode(self, enabled):\n \"\"\"Set this to \"Y\" to allow wine switch the resolution using XVidMode extension.\"\"\"\n self.set_registry_key(\n self.hkcu_prefix + \"/Software/Wine/X11 Driver\",\n \"UseXVidMode\",\n \"Y\" if enabled else \"N\",\n )\n\n def configure_joypads(self):\n joypads = joypad.get_joypads()\n key = self.hkcu_prefix + \"/Software/Wine/DirectInput/Joysticks\"\n self.clear_registry_key(key)\n for device, joypad_name in joypads:\n if \"event\" in device:\n disabled_joypad = \"{} (js)\".format(joypad_name)\n else:\n disabled_joypad = \"{} (event)\".format(joypad_name)\n self.set_registry_key(key, disabled_joypad, \"disabled\")\n", "path": "lutris/util/wine/prefix.py"}]} | 3,152 | 199 |
gh_patches_debug_59 | rasdani/github-patches | git_diff | Anselmoo__spectrafit-662 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Docs]: Using mike for versioning docs
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Information in the Docs
https://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
</issue>
<code>
[start of spectrafit/__init__.py]
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "1.0.0a2"
3
[end of spectrafit/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "1.0.0a2"
+__version__ = "1.0.0a3"
| {"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"1.0.0a2\"\n+__version__ = \"1.0.0a3\"\n", "issue": "[Docs]: Using mike for versioning docs\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Information in the Docs\n\nhttps://squidfunk.github.io/mkdocs-material/setup/setting-up-versioning/\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"1.0.0a2\"\n", "path": "spectrafit/__init__.py"}]} | 650 | 96 |
gh_patches_debug_40787 | rasdani/github-patches | git_diff | ietf-tools__datatracker-7199 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor to drop dependency on decorator package
### Description
We have a few decorators defined in `ietf/utils/decorator.py` that use the "decorator" package. This provides the `@decorator` decorator and a `decorate` method. The built-in Python `functools.wraps()` method can fulfill the needs here without the additional dependency. As far as I can tell we're not making use of any of the features provided by the "decorator" package.
The `@decorator` mechanism also seems to interfere with Django's `@method_decorator`, which led to refactoring the `requires_api_key` decorator already.
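For illustration, a minimal sketch of the `functools.wraps` pattern that can stand in for the `@decorator`-based helpers (names follow `ietf/utils/decorators.py`; the exact refactor may differ):

```python
from functools import wraps

def person_required(f):
    @wraps(f)  # preserves f's name/docstring without the external "decorator" package
    def _wrapper(request, *args, **kwargs):
        if not request.user.is_authenticated:
            raise ValueError("The @person_required decorator should be called after @login_required.")
        try:
            request.user.person
        except Person.DoesNotExist:
            return render(request, "registration/missing_person.html")
        return f(request, *args, **kwargs)
    return _wrapper
```

The same wrapper-function shape works for `skip_coverage` and `memoize`, and because it produces a plain function it composes cleanly with Django's `@method_decorator`.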
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of ietf/utils/decorators.py]
1 # Copyright The IETF Trust 2016-2020, All Rights Reserved
2 # -*- coding: utf-8 -*-
3
4
5 import datetime
6
7 from decorator import decorator, decorate
8 from functools import wraps
9
10 from django.conf import settings
11 from django.contrib.auth import login
12 from django.http import HttpResponse
13 from django.shortcuts import render
14 from django.utils import timezone
15 from django.utils.encoding import force_bytes
16
17 import debug # pyflakes:ignore
18
19 from ietf.utils.test_runner import set_coverage_checking
20 from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent
21 from ietf.utils import log
22
23 @decorator
24 def skip_coverage(f, *args, **kwargs):
25 if settings.TEST_CODE_COVERAGE_CHECKER:
26 set_coverage_checking(False)
27 result = f(*args, **kwargs)
28 set_coverage_checking(True)
29 return result
30 else:
31 return f(*args, **kwargs)
32
33 @decorator
34 def person_required(f, request, *args, **kwargs):
35 if not request.user.is_authenticated:
36 raise ValueError("The @person_required decorator should be called after @login_required.")
37 try:
38 request.user.person
39 except Person.DoesNotExist:
40 return render(request, 'registration/missing_person.html')
41 return f(request, *args, **kwargs)
42
43
44 def require_api_key(f):
45 @wraps(f)
46 def _wrapper(request, *args, **kwargs):
47 def err(code, text):
48 return HttpResponse(text, status=code, content_type='text/plain')
49 # Check method and get hash
50 if request.method == 'POST':
51 hash = request.POST.get('apikey')
52 elif request.method == 'GET':
53 hash = request.GET.get('apikey')
54 else:
55 return err(405, "Method not allowed")
56 if not hash:
57 return err(400, "Missing apikey parameter")
58 # Check hash
59 key = PersonalApiKey.validate_key(force_bytes(hash))
60 if not key:
61 return err(403, "Invalid apikey")
62 # Check endpoint
63 urlpath = request.META.get('PATH_INFO')
64 if not (urlpath and urlpath == key.endpoint):
65 return err(400, "Apikey endpoint mismatch")
66 # Check time since regular login
67 person = key.person
68 last_login = person.user.last_login
69 if not person.user.is_staff:
70 time_limit = (timezone.now() - datetime.timedelta(days=settings.UTILS_APIKEY_GUI_LOGIN_LIMIT_DAYS))
71 if last_login == None or last_login < time_limit:
72 return err(400, "Too long since last regular login")
73 # Log in
74 login(request, person.user)
75 # restore the user.last_login field, so it reflects only gui logins
76 person.user.last_login = last_login
77 person.user.save()
78 # Update stats
79 key.count += 1
80 key.latest = timezone.now()
81 key.save()
82 PersonApiKeyEvent.objects.create(person=person, type='apikey_login', key=key, desc="Logged in with key ID %s, endpoint %s" % (key.id, key.endpoint))
83 # Execute decorated function
84 try:
85 ret = f(request, *args, **kwargs)
86 except AttributeError as e:
87 log.log("Bad API call: args: %s, kwargs: %s, exception: %s" % (args, kwargs, e))
88 return err(400, "Bad or missing parameters")
89 return ret
90 return _wrapper
91
92
93 def _memoize(func, self, *args, **kwargs):
94 '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
95 if kwargs: # frozenset is used to ensure hashability
96 key = args, frozenset(list(kwargs.items()))
97 else:
98 key = args
99 # instance method, set up cache if needed
100 if not hasattr(self, '_cache'):
101 self._cache = {}
102 if not func in self._cache:
103 self._cache[func] = {}
104 #
105 cache = self._cache[func]
106 if key not in cache:
107 cache[key] = func(self, *args, **kwargs)
108 return cache[key]
109 def memoize(func):
110 if not hasattr(func, '__class__'):
111 raise NotImplementedError("Use @lru_cache instead of memoize() for functions.")
112 # For methods, we want the cache on the object, not on the class, in order
113 # to not having to think about cache bloat and content becoming stale, so
114 # we cannot set up the cache here.
115 return decorate(func, _memoize)
116
117
118 def ignore_view_kwargs(*args):
119 """Ignore the specified kwargs if they are present
120
121 Usage:
122 @ignore_view_kwargs("ignore_arg1", "ignore_arg2")
123 def my_view(request, good_arg):
124 ...
125
126 This will allow my_view() to be used in url() paths that have zero, one, or both of
127 ignore_arg1 and ignore_arg2 captured. These will be ignored, while good_arg will still
128 be captured as usual.
129 """
130 kwargs_to_ignore = args
131
132 def decorate(view):
133 @wraps(view)
134 def wrapped(*args, **kwargs):
135 for kwarg in kwargs_to_ignore:
136 kwargs.pop(kwarg, None)
137 return view(*args, **kwargs)
138
139 return wrapped
140
141 return decorate
142
143
144
[end of ietf/utils/decorators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ietf/utils/decorators.py b/ietf/utils/decorators.py
--- a/ietf/utils/decorators.py
+++ b/ietf/utils/decorators.py
@@ -4,7 +4,6 @@
import datetime
-from decorator import decorator, decorate
from functools import wraps
from django.conf import settings
@@ -20,25 +19,29 @@
from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent
from ietf.utils import log
-@decorator
-def skip_coverage(f, *args, **kwargs):
- if settings.TEST_CODE_COVERAGE_CHECKER:
- set_coverage_checking(False)
- result = f(*args, **kwargs)
- set_coverage_checking(True)
- return result
- else:
- return f(*args, **kwargs)
-
-@decorator
-def person_required(f, request, *args, **kwargs):
- if not request.user.is_authenticated:
- raise ValueError("The @person_required decorator should be called after @login_required.")
- try:
- request.user.person
- except Person.DoesNotExist:
- return render(request, 'registration/missing_person.html')
- return f(request, *args, **kwargs)
+def skip_coverage(f):
+ @wraps(f)
+ def _wrapper(*args, **kwargs):
+ if settings.TEST_CODE_COVERAGE_CHECKER:
+ set_coverage_checking(False)
+ result = f(*args, **kwargs)
+ set_coverage_checking(True)
+ return result
+ else:
+ return f(*args, **kwargs)
+ return _wrapper
+
+def person_required(f):
+ @wraps(f)
+ def _wrapper(request, *args, **kwargs):
+ if not request.user.is_authenticated:
+ raise ValueError("The @person_required decorator should be called after @login_required.")
+ try:
+ request.user.person
+ except Person.DoesNotExist:
+ return render(request, 'registration/missing_person.html')
+ return f(request, *args, **kwargs)
+ return _wrapper
def require_api_key(f):
@@ -90,29 +93,31 @@
return _wrapper
-def _memoize(func, self, *args, **kwargs):
- '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
- if kwargs: # frozenset is used to ensure hashability
- key = args, frozenset(list(kwargs.items()))
- else:
- key = args
- # instance method, set up cache if needed
- if not hasattr(self, '_cache'):
- self._cache = {}
- if not func in self._cache:
- self._cache[func] = {}
- #
- cache = self._cache[func]
- if key not in cache:
- cache[key] = func(self, *args, **kwargs)
- return cache[key]
def memoize(func):
+ @wraps(func)
+ def _memoize(self, *args, **kwargs):
+ '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''
+ if kwargs: # frozenset is used to ensure hashability
+ key = args, frozenset(list(kwargs.items()))
+ else:
+ key = args
+ # instance method, set up cache if needed
+ if not hasattr(self, '_cache'):
+ self._cache = {}
+ if not func in self._cache:
+ self._cache[func] = {}
+ #
+ cache = self._cache[func]
+ if key not in cache:
+ cache[key] = func(self, *args, **kwargs)
+ return cache[key]
+
if not hasattr(func, '__class__'):
raise NotImplementedError("Use @lru_cache instead of memoize() for functions.")
# For methods, we want the cache on the object, not on the class, in order
# to not having to think about cache bloat and content becoming stale, so
# we cannot set up the cache here.
- return decorate(func, _memoize)
+ return _memoize
def ignore_view_kwargs(*args):
| {"golden_diff": "diff --git a/ietf/utils/decorators.py b/ietf/utils/decorators.py\n--- a/ietf/utils/decorators.py\n+++ b/ietf/utils/decorators.py\n@@ -4,7 +4,6 @@\n \n import datetime\n \n-from decorator import decorator, decorate\n from functools import wraps\n \n from django.conf import settings\n@@ -20,25 +19,29 @@\n from ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent\n from ietf.utils import log\n \n-@decorator\n-def skip_coverage(f, *args, **kwargs):\n- if settings.TEST_CODE_COVERAGE_CHECKER:\n- set_coverage_checking(False)\n- result = f(*args, **kwargs)\n- set_coverage_checking(True)\n- return result\n- else:\n- return f(*args, **kwargs)\n-\n-@decorator\n-def person_required(f, request, *args, **kwargs):\n- if not request.user.is_authenticated:\n- raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n- try:\n- request.user.person\n- except Person.DoesNotExist:\n- return render(request, 'registration/missing_person.html')\n- return f(request, *args, **kwargs)\n+def skip_coverage(f):\n+ @wraps(f)\n+ def _wrapper(*args, **kwargs):\n+ if settings.TEST_CODE_COVERAGE_CHECKER:\n+ set_coverage_checking(False)\n+ result = f(*args, **kwargs)\n+ set_coverage_checking(True)\n+ return result\n+ else:\n+ return f(*args, **kwargs)\n+ return _wrapper\n+\n+def person_required(f):\n+ @wraps(f)\n+ def _wrapper(request, *args, **kwargs):\n+ if not request.user.is_authenticated:\n+ raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n+ try:\n+ request.user.person\n+ except Person.DoesNotExist:\n+ return render(request, 'registration/missing_person.html')\n+ return f(request, *args, **kwargs)\n+ return _wrapper\n \n \n def require_api_key(f):\n@@ -90,29 +93,31 @@\n return _wrapper\n \n \n-def _memoize(func, self, *args, **kwargs):\n- '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n- if kwargs: # frozenset is used to ensure hashability\n- key = args, frozenset(list(kwargs.items()))\n- else:\n- key = args\n- # instance method, set up cache if needed\n- if not hasattr(self, '_cache'):\n- self._cache = {}\n- if not func in self._cache:\n- self._cache[func] = {} \n- #\n- cache = self._cache[func]\n- if key not in cache:\n- cache[key] = func(self, *args, **kwargs)\n- return cache[key]\n def memoize(func):\n+ @wraps(func)\n+ def _memoize(self, *args, **kwargs):\n+ '''Memoize wrapper for instance methods. Use @lru_cache for functions.'''\n+ if kwargs: # frozenset is used to ensure hashability\n+ key = args, frozenset(list(kwargs.items()))\n+ else:\n+ key = args\n+ # instance method, set up cache if needed\n+ if not hasattr(self, '_cache'):\n+ self._cache = {}\n+ if not func in self._cache:\n+ self._cache[func] = {} \n+ #\n+ cache = self._cache[func]\n+ if key not in cache:\n+ cache[key] = func(self, *args, **kwargs)\n+ return cache[key]\n+\n if not hasattr(func, '__class__'):\n raise NotImplementedError(\"Use @lru_cache instead of memoize() for functions.\")\n # For methods, we want the cache on the object, not on the class, in order\n # to not having to think about cache bloat and content becoming stale, so\n # we cannot set up the cache here.\n- return decorate(func, _memoize)\n+ return _memoize\n \n \n def ignore_view_kwargs(*args):\n", "issue": "Refactor to drop dependency on decorator package\n### Description\n\nWe have a few decorators defined in `ietf/utils/decorator.py` that use the \"decorator\" package. This provides the `@decorator` decorator and a `decorate` method. 
The built-in Python `functools.wraps()` method can fulfill the needs here without the additional dependency. As far as I can tell we're not making use of any of the features provided by the \"decorator\" package.\r\n\r\nThe `@decorator` mechanism also seems to interfere with Django's `@method_decorator`, which led to refactoring the `requires_api_key` decorator already.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2016-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n\nimport datetime\n\nfrom decorator import decorator, decorate\nfrom functools import wraps\n\nfrom django.conf import settings\nfrom django.contrib.auth import login\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.utils import timezone\nfrom django.utils.encoding import force_bytes\n\nimport debug # pyflakes:ignore\n\nfrom ietf.utils.test_runner import set_coverage_checking\nfrom ietf.person.models import Person, PersonalApiKey, PersonApiKeyEvent\nfrom ietf.utils import log\n\n@decorator\ndef skip_coverage(f, *args, **kwargs):\n if settings.TEST_CODE_COVERAGE_CHECKER:\n set_coverage_checking(False)\n result = f(*args, **kwargs)\n set_coverage_checking(True)\n return result\n else:\n return f(*args, **kwargs)\n\n@decorator\ndef person_required(f, request, *args, **kwargs):\n if not request.user.is_authenticated:\n raise ValueError(\"The @person_required decorator should be called after @login_required.\")\n try:\n request.user.person\n except Person.DoesNotExist:\n return render(request, 'registration/missing_person.html')\n return f(request, *args, **kwargs)\n\n\ndef require_api_key(f):\n @wraps(f)\n def _wrapper(request, *args, **kwargs):\n def err(code, text):\n return HttpResponse(text, status=code, content_type='text/plain')\n # Check method and get hash\n if request.method == 'POST':\n hash = request.POST.get('apikey')\n elif request.method == 'GET':\n hash = request.GET.get('apikey')\n else:\n return err(405, \"Method not allowed\")\n if not hash:\n return err(400, \"Missing apikey parameter\")\n # Check hash\n key = PersonalApiKey.validate_key(force_bytes(hash))\n if not key:\n return err(403, \"Invalid apikey\")\n # Check endpoint\n urlpath = request.META.get('PATH_INFO')\n if not (urlpath and urlpath == key.endpoint):\n return err(400, \"Apikey endpoint mismatch\") \n # Check time since regular login\n person = key.person\n last_login = person.user.last_login\n if not person.user.is_staff:\n time_limit = (timezone.now() - datetime.timedelta(days=settings.UTILS_APIKEY_GUI_LOGIN_LIMIT_DAYS))\n if last_login == None or last_login < time_limit:\n return err(400, \"Too long since last regular login\")\n # Log in\n login(request, person.user)\n # restore the user.last_login field, so it reflects only gui logins\n person.user.last_login = last_login\n person.user.save()\n # Update stats\n key.count += 1\n key.latest = timezone.now()\n key.save()\n PersonApiKeyEvent.objects.create(person=person, type='apikey_login', key=key, desc=\"Logged in with key ID %s, endpoint %s\" % (key.id, key.endpoint))\n # Execute decorated function\n try:\n ret = f(request, *args, **kwargs)\n except AttributeError as e:\n log.log(\"Bad API call: args: %s, kwargs: %s, exception: %s\" % (args, kwargs, e))\n return err(400, \"Bad or missing parameters\")\n return ret\n return _wrapper\n\n\ndef _memoize(func, self, *args, **kwargs):\n '''Memoize 
wrapper for instance methods. Use @lru_cache for functions.'''\n if kwargs: # frozenset is used to ensure hashability\n key = args, frozenset(list(kwargs.items()))\n else:\n key = args\n # instance method, set up cache if needed\n if not hasattr(self, '_cache'):\n self._cache = {}\n if not func in self._cache:\n self._cache[func] = {} \n #\n cache = self._cache[func]\n if key not in cache:\n cache[key] = func(self, *args, **kwargs)\n return cache[key]\ndef memoize(func):\n if not hasattr(func, '__class__'):\n raise NotImplementedError(\"Use @lru_cache instead of memoize() for functions.\")\n # For methods, we want the cache on the object, not on the class, in order\n # to not having to think about cache bloat and content becoming stale, so\n # we cannot set up the cache here.\n return decorate(func, _memoize)\n\n\ndef ignore_view_kwargs(*args):\n \"\"\"Ignore the specified kwargs if they are present\n\n Usage: \n @ignore_view_kwargs(\"ignore_arg1\", \"ignore_arg2\")\n def my_view(request, good_arg):\n ...\n\n This will allow my_view() to be used in url() paths that have zero, one, or both of\n ignore_arg1 and ignore_arg2 captured. These will be ignored, while good_arg will still\n be captured as usual.\n \"\"\"\n kwargs_to_ignore = args\n\n def decorate(view):\n @wraps(view)\n def wrapped(*args, **kwargs):\n for kwarg in kwargs_to_ignore:\n kwargs.pop(kwarg, None)\n return view(*args, **kwargs)\n\n return wrapped\n\n return decorate\n\n\n", "path": "ietf/utils/decorators.py"}]} | 2,208 | 933 |
gh_patches_debug_10024 | rasdani/github-patches | git_diff | ray-project__ray-8933 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rllib] rllib train ... --checkpoint-at-end flag is ignored
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
Ray 0.8.5
Python 3.7
MacOS Catalina
PyTorch 1.4.0
No checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.
### Reproduction (REQUIRED)
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end
```
No checkpoint directory exists under `~/ray_results/default/PPO...`
Add the `--checkpoint-freq 10` flag:
```shell
rllib train --run PPO --env CartPole-v0 --stop='{"training_iteration": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10
```
Now there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end.
Could the choice of stop criteria, `training_iteration`, have something to do with it?
> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
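A note from inspecting `rllib/train.py` below: the `experiments` spec built in `run()` forwards `checkpoint_freq` but never `args.checkpoint_at_end`, so the flag is parsed yet never reaches Tune. A minimal sketch of forwarding it (this mirrors the diff at the end of this record):

```python
experiments = {
    args.experiment_name: {
        "run": args.run,
        "checkpoint_freq": args.checkpoint_freq,
        # forward the CLI flag so Tune also checkpoints when the trial stops
        "checkpoint_at_end": args.checkpoint_at_end,
        # ... remaining keys unchanged ...
    }
}
```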
</issue>
<code>
[start of rllib/train.py]
1 #!/usr/bin/env python
2
3 import argparse
4 import os
5 from pathlib import Path
6 import yaml
7
8 import ray
9 from ray.cluster_utils import Cluster
10 from ray.tune.config_parser import make_parser
11 from ray.tune.result import DEFAULT_RESULTS_DIR
12 from ray.tune.resources import resources_to_json
13 from ray.tune.tune import _make_scheduler, run_experiments
14 from ray.rllib.utils.framework import try_import_tf, try_import_torch
15
16 # Try to import both backends for flag checking/warnings.
17 tf = try_import_tf()
18 torch, _ = try_import_torch()
19
20 EXAMPLE_USAGE = """
21 Training example via RLlib CLI:
22 rllib train --run DQN --env CartPole-v0
23
24 Grid search example via RLlib CLI:
25 rllib train -f tuned_examples/cartpole-grid-search-example.yaml
26
27 Grid search example via executable:
28 ./train.py -f tuned_examples/cartpole-grid-search-example.yaml
29
30 Note that -f overrides all other trial-specific command-line options.
31 """
32
33
34 def create_parser(parser_creator=None):
35 parser = make_parser(
36 parser_creator=parser_creator,
37 formatter_class=argparse.RawDescriptionHelpFormatter,
38 description="Train a reinforcement learning agent.",
39 epilog=EXAMPLE_USAGE)
40
41 # See also the base parser definition in ray/tune/config_parser.py
42 parser.add_argument(
43 "--ray-address",
44 default=None,
45 type=str,
46 help="Connect to an existing Ray cluster at this address instead "
47 "of starting a new one.")
48 parser.add_argument(
49 "--no-ray-ui",
50 action="store_true",
51 help="Whether to disable the Ray web ui.")
52 parser.add_argument(
53 "--local-mode",
54 action="store_true",
55 help="Whether to run ray with `local_mode=True`. "
56 "Only if --ray-num-nodes is not used.")
57 parser.add_argument(
58 "--ray-num-cpus",
59 default=None,
60 type=int,
61 help="--num-cpus to use if starting a new cluster.")
62 parser.add_argument(
63 "--ray-num-gpus",
64 default=None,
65 type=int,
66 help="--num-gpus to use if starting a new cluster.")
67 parser.add_argument(
68 "--ray-num-nodes",
69 default=None,
70 type=int,
71 help="Emulate multiple cluster nodes for debugging.")
72 parser.add_argument(
73 "--ray-redis-max-memory",
74 default=None,
75 type=int,
76 help="--redis-max-memory to use if starting a new cluster.")
77 parser.add_argument(
78 "--ray-memory",
79 default=None,
80 type=int,
81 help="--memory to use if starting a new cluster.")
82 parser.add_argument(
83 "--ray-object-store-memory",
84 default=None,
85 type=int,
86 help="--object-store-memory to use if starting a new cluster.")
87 parser.add_argument(
88 "--experiment-name",
89 default="default",
90 type=str,
91 help="Name of the subdirectory under `local_dir` to put results in.")
92 parser.add_argument(
93 "--local-dir",
94 default=DEFAULT_RESULTS_DIR,
95 type=str,
96 help="Local dir to save training results to. Defaults to '{}'.".format(
97 DEFAULT_RESULTS_DIR))
98 parser.add_argument(
99 "--upload-dir",
100 default="",
101 type=str,
102 help="Optional URI to sync training results to (e.g. s3://bucket).")
103 parser.add_argument(
104 "-v", action="store_true", help="Whether to use INFO level logging.")
105 parser.add_argument(
106 "-vv", action="store_true", help="Whether to use DEBUG level logging.")
107 parser.add_argument(
108 "--resume",
109 action="store_true",
110 help="Whether to attempt to resume previous Tune experiments.")
111 parser.add_argument(
112 "--torch",
113 action="store_true",
114 help="Whether to use PyTorch (instead of tf) as the DL framework.")
115 parser.add_argument(
116 "--eager",
117 action="store_true",
118 help="Whether to attempt to enable TF eager execution.")
119 parser.add_argument(
120 "--trace",
121 action="store_true",
122 help="Whether to attempt to enable tracing for eager mode.")
123 parser.add_argument(
124 "--env", default=None, type=str, help="The gym environment to use.")
125 parser.add_argument(
126 "--queue-trials",
127 action="store_true",
128 help=(
129 "Whether to queue trials when the cluster does not currently have "
130 "enough resources to launch one. This should be set to True when "
131 "running on an autoscaling cluster to enable automatic scale-up."))
132 parser.add_argument(
133 "-f",
134 "--config-file",
135 default=None,
136 type=str,
137 help="If specified, use config options from this file. Note that this "
138 "overrides any trial-specific options set via flags above.")
139 return parser
140
141
142 def run(args, parser):
143 if args.config_file:
144 with open(args.config_file) as f:
145 experiments = yaml.safe_load(f)
146 else:
147 # Note: keep this in sync with tune/config_parser.py
148 experiments = {
149 args.experiment_name: { # i.e. log to ~/ray_results/default
150 "run": args.run,
151 "checkpoint_freq": args.checkpoint_freq,
152 "keep_checkpoints_num": args.keep_checkpoints_num,
153 "checkpoint_score_attr": args.checkpoint_score_attr,
154 "local_dir": args.local_dir,
155 "resources_per_trial": (
156 args.resources_per_trial and
157 resources_to_json(args.resources_per_trial)),
158 "stop": args.stop,
159 "config": dict(args.config, env=args.env),
160 "restore": args.restore,
161 "num_samples": args.num_samples,
162 "upload_dir": args.upload_dir,
163 }
164 }
165
166 verbose = 1
167 for exp in experiments.values():
168 # Bazel makes it hard to find files specified in `args` (and `data`).
169 # Look for them here.
170 # NOTE: Some of our yaml files don't have a `config` section.
171 if exp.get("config", {}).get("input") and \
172 not os.path.exists(exp["config"]["input"]):
173 # This script runs in the ray/rllib dir.
174 rllib_dir = Path(__file__).parent
175 input_file = rllib_dir.absolute().joinpath(exp["config"]["input"])
176 exp["config"]["input"] = str(input_file)
177
178 if not exp.get("run"):
179 parser.error("the following arguments are required: --run")
180 if not exp.get("env") and not exp.get("config", {}).get("env"):
181 parser.error("the following arguments are required: --env")
182 if args.eager:
183 exp["config"]["framework"] = "tfe"
184 elif args.torch:
185 exp["config"]["framework"] = "torch"
186 else:
187 exp["config"]["framework"] = "tf"
188 if args.v:
189 exp["config"]["log_level"] = "INFO"
190 verbose = 2
191 if args.vv:
192 exp["config"]["log_level"] = "DEBUG"
193 verbose = 3
194 if args.trace:
195 if exp["config"]["framework"] != "tfe":
196 raise ValueError("Must enable --eager to enable tracing.")
197 exp["config"]["eager_tracing"] = True
198
199 if args.ray_num_nodes:
200 cluster = Cluster()
201 for _ in range(args.ray_num_nodes):
202 cluster.add_node(
203 num_cpus=args.ray_num_cpus or 1,
204 num_gpus=args.ray_num_gpus or 0,
205 object_store_memory=args.ray_object_store_memory,
206 memory=args.ray_memory,
207 redis_max_memory=args.ray_redis_max_memory)
208 ray.init(address=cluster.address)
209 else:
210 ray.init(
211 include_webui=not args.no_ray_ui,
212 address=args.ray_address,
213 object_store_memory=args.ray_object_store_memory,
214 memory=args.ray_memory,
215 redis_max_memory=args.ray_redis_max_memory,
216 num_cpus=args.ray_num_cpus,
217 num_gpus=args.ray_num_gpus,
218 local_mode=args.local_mode)
219 run_experiments(
220 experiments,
221 scheduler=_make_scheduler(args),
222 queue_trials=args.queue_trials,
223 resume=args.resume,
224 verbose=verbose,
225 concurrent=True)
226
227
228 if __name__ == "__main__":
229 parser = create_parser()
230 args = parser.parse_args()
231 run(args, parser)
232
[end of rllib/train.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rllib/train.py b/rllib/train.py
--- a/rllib/train.py
+++ b/rllib/train.py
@@ -149,6 +149,7 @@
args.experiment_name: { # i.e. log to ~/ray_results/default
"run": args.run,
"checkpoint_freq": args.checkpoint_freq,
+ "checkpoint_at_end": args.checkpoint_at_end,
"keep_checkpoints_num": args.keep_checkpoints_num,
"checkpoint_score_attr": args.checkpoint_score_attr,
"local_dir": args.local_dir,
| {"golden_diff": "diff --git a/rllib/train.py b/rllib/train.py\n--- a/rllib/train.py\n+++ b/rllib/train.py\n@@ -149,6 +149,7 @@\n args.experiment_name: { # i.e. log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n+ \"checkpoint_at_end\": args.checkpoint_at_end,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n", "issue": "[rllib] rllib train ... --checkpoint-at-end flag is ignored\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nRay 0.8.5\r\nPython 3.7\r\nMacOS Catalina\r\nPyTorch 1.4.0\r\n\r\nNo checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end\r\n```\r\n\r\nNo checkpoint directory exists under `~/ray_results/default/PPO...`\r\n\r\nAdd the `--checkpoint-freq 10` flag:\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10\r\n```\r\n\r\nNow there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end. \r\n\r\nCould the choice of stop criteria, `training_iteration`, have something to do with it?\r\n\r\n> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). How about one or the other?\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n[rllib] rllib train ... --checkpoint-at-end flag is ignored\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nRay 0.8.5\r\nPython 3.7\r\nMacOS Catalina\r\nPyTorch 1.4.0\r\n\r\nNo checkpoint is produced when the `--checkpoint-at-end` flag is used. Adding `--checkpoint-freq 10` does cause checkpoints to be saved.\r\n\r\n### Reproduction (REQUIRED)\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end\r\n```\r\n\r\nNo checkpoint directory exists under `~/ray_results/default/PPO...`\r\n\r\nAdd the `--checkpoint-freq 10` flag:\r\n\r\n```shell\r\nrllib train --run PPO --env CartPole-v0 --stop='{\"training_iteration\": 25}' --ray-address auto --checkpoint-at-end --checkpoint-freq 10\r\n```\r\n\r\nNow there are `checkpoint_10` and `checkpoint_20` directories, but not a `checkpoint_25` at the end. \r\n\r\nCould the choice of stop criteria, `training_iteration`, have something to do with it?\r\n\r\n> **Note:** Being persnickety, it bugs me that the final directory path for the checkpoints is `.../checkpoint_20/checkpoint-20` (underscore vs. dash). 
How about one or the other?\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport argparse\nimport os\nfrom pathlib import Path\nimport yaml\n\nimport ray\nfrom ray.cluster_utils import Cluster\nfrom ray.tune.config_parser import make_parser\nfrom ray.tune.result import DEFAULT_RESULTS_DIR\nfrom ray.tune.resources import resources_to_json\nfrom ray.tune.tune import _make_scheduler, run_experiments\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch\n\n# Try to import both backends for flag checking/warnings.\ntf = try_import_tf()\ntorch, _ = try_import_torch()\n\nEXAMPLE_USAGE = \"\"\"\nTraining example via RLlib CLI:\n rllib train --run DQN --env CartPole-v0\n\nGrid search example via RLlib CLI:\n rllib train -f tuned_examples/cartpole-grid-search-example.yaml\n\nGrid search example via executable:\n ./train.py -f tuned_examples/cartpole-grid-search-example.yaml\n\nNote that -f overrides all other trial-specific command-line options.\n\"\"\"\n\n\ndef create_parser(parser_creator=None):\n parser = make_parser(\n parser_creator=parser_creator,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n description=\"Train a reinforcement learning agent.\",\n epilog=EXAMPLE_USAGE)\n\n # See also the base parser definition in ray/tune/config_parser.py\n parser.add_argument(\n \"--ray-address\",\n default=None,\n type=str,\n help=\"Connect to an existing Ray cluster at this address instead \"\n \"of starting a new one.\")\n parser.add_argument(\n \"--no-ray-ui\",\n action=\"store_true\",\n help=\"Whether to disable the Ray web ui.\")\n parser.add_argument(\n \"--local-mode\",\n action=\"store_true\",\n help=\"Whether to run ray with `local_mode=True`. \"\n \"Only if --ray-num-nodes is not used.\")\n parser.add_argument(\n \"--ray-num-cpus\",\n default=None,\n type=int,\n help=\"--num-cpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-gpus\",\n default=None,\n type=int,\n help=\"--num-gpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-nodes\",\n default=None,\n type=int,\n help=\"Emulate multiple cluster nodes for debugging.\")\n parser.add_argument(\n \"--ray-redis-max-memory\",\n default=None,\n type=int,\n help=\"--redis-max-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-memory\",\n default=None,\n type=int,\n help=\"--memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-object-store-memory\",\n default=None,\n type=int,\n help=\"--object-store-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--experiment-name\",\n default=\"default\",\n type=str,\n help=\"Name of the subdirectory under `local_dir` to put results in.\")\n parser.add_argument(\n \"--local-dir\",\n default=DEFAULT_RESULTS_DIR,\n type=str,\n help=\"Local dir to save training results to. Defaults to '{}'.\".format(\n DEFAULT_RESULTS_DIR))\n parser.add_argument(\n \"--upload-dir\",\n default=\"\",\n type=str,\n help=\"Optional URI to sync training results to (e.g. 
s3://bucket).\")\n parser.add_argument(\n \"-v\", action=\"store_true\", help=\"Whether to use INFO level logging.\")\n parser.add_argument(\n \"-vv\", action=\"store_true\", help=\"Whether to use DEBUG level logging.\")\n parser.add_argument(\n \"--resume\",\n action=\"store_true\",\n help=\"Whether to attempt to resume previous Tune experiments.\")\n parser.add_argument(\n \"--torch\",\n action=\"store_true\",\n help=\"Whether to use PyTorch (instead of tf) as the DL framework.\")\n parser.add_argument(\n \"--eager\",\n action=\"store_true\",\n help=\"Whether to attempt to enable TF eager execution.\")\n parser.add_argument(\n \"--trace\",\n action=\"store_true\",\n help=\"Whether to attempt to enable tracing for eager mode.\")\n parser.add_argument(\n \"--env\", default=None, type=str, help=\"The gym environment to use.\")\n parser.add_argument(\n \"--queue-trials\",\n action=\"store_true\",\n help=(\n \"Whether to queue trials when the cluster does not currently have \"\n \"enough resources to launch one. This should be set to True when \"\n \"running on an autoscaling cluster to enable automatic scale-up.\"))\n parser.add_argument(\n \"-f\",\n \"--config-file\",\n default=None,\n type=str,\n help=\"If specified, use config options from this file. Note that this \"\n \"overrides any trial-specific options set via flags above.\")\n return parser\n\n\ndef run(args, parser):\n if args.config_file:\n with open(args.config_file) as f:\n experiments = yaml.safe_load(f)\n else:\n # Note: keep this in sync with tune/config_parser.py\n experiments = {\n args.experiment_name: { # i.e. log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n \"resources_per_trial\": (\n args.resources_per_trial and\n resources_to_json(args.resources_per_trial)),\n \"stop\": args.stop,\n \"config\": dict(args.config, env=args.env),\n \"restore\": args.restore,\n \"num_samples\": args.num_samples,\n \"upload_dir\": args.upload_dir,\n }\n }\n\n verbose = 1\n for exp in experiments.values():\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n # NOTE: Some of our yaml files don't have a `config` section.\n if exp.get(\"config\", {}).get(\"input\") and \\\n not os.path.exists(exp[\"config\"][\"input\"]):\n # This script runs in the ray/rllib dir.\n rllib_dir = Path(__file__).parent\n input_file = rllib_dir.absolute().joinpath(exp[\"config\"][\"input\"])\n exp[\"config\"][\"input\"] = str(input_file)\n\n if not exp.get(\"run\"):\n parser.error(\"the following arguments are required: --run\")\n if not exp.get(\"env\") and not exp.get(\"config\", {}).get(\"env\"):\n parser.error(\"the following arguments are required: --env\")\n if args.eager:\n exp[\"config\"][\"framework\"] = \"tfe\"\n elif args.torch:\n exp[\"config\"][\"framework\"] = \"torch\"\n else:\n exp[\"config\"][\"framework\"] = \"tf\"\n if args.v:\n exp[\"config\"][\"log_level\"] = \"INFO\"\n verbose = 2\n if args.vv:\n exp[\"config\"][\"log_level\"] = \"DEBUG\"\n verbose = 3\n if args.trace:\n if exp[\"config\"][\"framework\"] != \"tfe\":\n raise ValueError(\"Must enable --eager to enable tracing.\")\n exp[\"config\"][\"eager_tracing\"] = True\n\n if args.ray_num_nodes:\n cluster = Cluster()\n for _ in range(args.ray_num_nodes):\n cluster.add_node(\n num_cpus=args.ray_num_cpus or 1,\n num_gpus=args.ray_num_gpus or 0,\n 
object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory)\n ray.init(address=cluster.address)\n else:\n ray.init(\n include_webui=not args.no_ray_ui,\n address=args.ray_address,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory,\n num_cpus=args.ray_num_cpus,\n num_gpus=args.ray_num_gpus,\n local_mode=args.local_mode)\n run_experiments(\n experiments,\n scheduler=_make_scheduler(args),\n queue_trials=args.queue_trials,\n resume=args.resume,\n verbose=verbose,\n concurrent=True)\n\n\nif __name__ == \"__main__\":\n parser = create_parser()\n args = parser.parse_args()\n run(args, parser)\n", "path": "rllib/train.py"}]} | 3,652 | 125 |
gh_patches_debug_12265 | rasdani/github-patches | git_diff | DDMAL__CantusDB-273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sources should automatically have segments
From #257:
> A source should always have a segment. It is either "Cantus Database" or "Sequence Database". It's a foreign key field. In cases where a source doesn't have a segment, it is probably a test source that we created.
> Desired behaviour: when creating a source, assign it to "Cantus Database" by default.
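One possible shape for that default, as a sketch only (it assumes a `Segment` row named "Cantus Database" exists and that `Segment` has a `name` field; the actual fix could equally live in the model's `save()` or in the admin/form layer):

```python
def get_default_segment():
    # hypothetical helper: look up (or lazily create) the "Cantus Database" segment
    from main_app.models import Segment
    segment, _ = Segment.objects.get_or_create(name="Cantus Database")
    return segment.id

class Source(BaseModel):
    ...
    segment = models.ForeignKey(
        "Segment",
        on_delete=models.PROTECT,
        blank=True,
        null=True,
        default=get_default_segment,  # assign "Cantus Database" unless overridden
    )
```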
</issue>
<code>
[start of django/cantusdb_project/main_app/models/source.py]
1 from django.db import models
2 from main_app.models import BaseModel
3 from django.contrib.auth import get_user_model
4
5
6 class Source(BaseModel):
7 cursus_choices = [("Monastic", "Monastic"), ("Secular", "Secular")]
8 source_status_choices = [
9 (
10 "Editing process (not all the fields have been proofread)",
11 "Editing process (not all the fields have been proofread)",
12 ),
13 ("Published / Complete", "Published / Complete"),
14 ("Published / Proofread pending", "Published / Proofread pending"),
15 ("Unpublished / Editing process", "Unpublished / Editing process"),
16 ("Unpublished / Indexing process", "Unpublished / Indexing process"),
17 ("Unpublished / Proofread pending", "Unpublished / Proofread pending"),
18 ("Unpublished / Proofreading process", "Unpublished / Proofreading process"),
19 ]
20
21 # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list
22 public = models.BooleanField(blank=True, null=True)
23 # sources with visible=False can be accessed by typing in the url, but do not appear in source list
24 visible = models.BooleanField(blank=True, null=True)
25 title = models.CharField(
26 max_length=255,
27 help_text="Full Manuscript Identification (City, Archive, Shelf-mark)",
28 )
29 # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
30 # it is a human-readable ID for a source
31 siglum = models.CharField(
32 max_length=63,
33 null=True,
34 blank=True,
35 help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
36 )
37 # the RISM siglum uniquely identifies a library or holding institution
38 rism_siglum = models.ForeignKey(
39 "RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
40 )
41 provenance = models.ForeignKey(
42 "Provenance",
43 on_delete=models.PROTECT,
44 help_text="If the origin is unknown, select a location where the source was "
45 "used later in its lifetime and provide details in the "
46 '"Provenance notes" field.',
47 null=True,
48 blank=True,
49 )
50 provenance_notes = models.TextField(
51 blank=True,
52 null=True,
53 help_text="More exact indication of the provenance (if necessary)",
54 )
55 full_source = models.BooleanField(blank=True, null=True)
56 date = models.CharField(
57 blank=True,
58 null=True,
59 max_length=63,
60 help_text='Date of the manuscript (e.g. "1200s", "1300-1350", etc.)',
61 )
62 century = models.ManyToManyField("Century", related_name="sources")
63 notation = models.ManyToManyField("Notation", related_name="sources")
64 cursus = models.CharField(
65 blank=True, null=True, choices=cursus_choices, max_length=63
66 )
67 # TODO: Fill this field up with JSON info when I have access to the Users
68 current_editors = models.ManyToManyField(get_user_model(), related_name="sources_user_can_edit")
69 inventoried_by = models.ManyToManyField(
70 "Indexer", related_name="sources_inventoried"
71 )
72 full_text_entered_by = models.ManyToManyField(
73 "Indexer", related_name="entered_full_text_for_sources"
74 )
75 melodies_entered_by = models.ManyToManyField(
76 "Indexer", related_name="entered_melody_for_sources"
77 )
78 proofreaders = models.ManyToManyField("Indexer", related_name="proofread_sources")
79 other_editors = models.ManyToManyField("Indexer", related_name="edited_sources")
80 segment = models.ForeignKey(
81 "Segment", on_delete=models.PROTECT, blank=True, null=True
82 )
83 source_status = models.CharField(blank=True, null=True, max_length=255)
84 complete_inventory = models.BooleanField(blank=True, null=True)
85 summary = models.TextField(blank=True, null=True)
86 liturgical_occasions = models.TextField(blank=True, null=True)
87 description = models.TextField(blank=True, null=True)
88 selected_bibliography = models.TextField(blank=True, null=True)
89 image_link = models.URLField(
90 blank=True,
91 null=True,
92 help_text='HTTP link to the image gallery of the source.',
93 )
94 indexing_notes = models.TextField(blank=True, null=True)
95 indexing_date = models.TextField(blank=True, null=True)
96 json_info = models.JSONField(blank=True, null=True)
97 fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)
98 dact_id = models.CharField(max_length=15, blank=True, null=True)
99
100 def number_of_chants(self) -> int:
101 """Returns the number of Chants and Sequences in this Source."""
102 return self.chant_set.count() + self.sequence_set.count()
103
104 def number_of_melodies(self) -> int:
105 """Returns the number of Chants in this Source that have melodies."""
106 return self.chant_set.filter(volpiano__isnull=False).count()
107
108 def __str__(self):
109 string = '{t} ({i})'.format(t=self.title, i=self.id)
110 return string
111
112
[end of django/cantusdb_project/main_app/models/source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py
--- a/django/cantusdb_project/main_app/models/source.py
+++ b/django/cantusdb_project/main_app/models/source.py
@@ -1,5 +1,5 @@
from django.db import models
-from main_app.models import BaseModel
+from main_app.models import BaseModel, Segment
from django.contrib.auth import get_user_model
@@ -109,4 +109,8 @@
string = '{t} ({i})'.format(t=self.title, i=self.id)
return string
-
\ No newline at end of file
+ def save(self, *args, **kwargs):
+ # when creating a source, assign it to "Cantus Database" by default
+ cantus_db_segment = Segment.objects.get(name="CANTUS Database")
+ self.segment = cantus_db_segment
+ super().save(*args, **kwargs)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py\n--- a/django/cantusdb_project/main_app/models/source.py\n+++ b/django/cantusdb_project/main_app/models/source.py\n@@ -1,5 +1,5 @@\n from django.db import models\n-from main_app.models import BaseModel\n+from main_app.models import BaseModel, Segment\n from django.contrib.auth import get_user_model\n \n \n@@ -109,4 +109,8 @@\n string = '{t} ({i})'.format(t=self.title, i=self.id)\n return string\n \n- \n\\ No newline at end of file\n+ def save(self, *args, **kwargs):\n+ # when creating a source, assign it to \"Cantus Database\" by default\n+ cantus_db_segment = Segment.objects.get(name=\"CANTUS Database\")\n+ self.segment = cantus_db_segment\n+ super().save(*args, **kwargs)\n", "issue": "Sources should automatically have segments\nFrom #257:\r\n\r\n> A source should always have a segment. It is either \"Cantus Database\" or \"Sequence Database\". It's a foreign key field. In cases where a source doesn't have a segment, it is probably a test source that we created.\r\n> Desired behaviour: when creating a source, assign it to \"Cantus Database\" by default.\n", "before_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n ]\n\n # sources with public=False cannot be accessed by its url (access denied) and do not appear in source list\n public = models.BooleanField(blank=True, null=True)\n # sources with visible=False can be accessed by typing in the url, but do not appear in source list\n visible = models.BooleanField(blank=True, null=True)\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=True, \n blank=True,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. 
GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. \"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\")\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\")\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n # TODO: Fill this field up with JSON info when I have access to the Users\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_user_can_edit\")\n inventoried_by = models.ManyToManyField(\n \"Indexer\", related_name=\"sources_inventoried\"\n )\n full_text_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_full_text_for_sources\"\n )\n melodies_entered_by = models.ManyToManyField(\n \"Indexer\", related_name=\"entered_melody_for_sources\"\n )\n proofreaders = models.ManyToManyField(\"Indexer\", related_name=\"proofread_sources\")\n other_editors = models.ManyToManyField(\"Indexer\", related_name=\"edited_sources\")\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n def number_of_chants(self) -> int:\n \"\"\"Returns the number of Chants and Sequences in this Source.\"\"\"\n return self.chant_set.count() + self.sequence_set.count()\n\n def number_of_melodies(self) -> int:\n \"\"\"Returns the number of Chants in this Source that have melodies.\"\"\"\n return self.chant_set.filter(volpiano__isnull=False).count()\n\n def __str__(self):\n string = '{t} ({i})'.format(t=self.title, i=self.id)\n return string\n\n ", "path": "django/cantusdb_project/main_app/models/source.py"}]} | 1,991 | 220 |
gh_patches_debug_41106 | rasdani/github-patches | git_diff | getsentry__sentry-56522 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Discord `alert.sent` event
Start sending `alert.sent` analytics events for Discord.
Add a `category` attribute to the existing `notification_sent` analytics event.
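For orientation, the requested call probably looks roughly like the sketch below. This is only a sketch: the `alert.sent` attribute names are assumptions modelled on how other alert providers report, and the local names (`rules`, `event`, `channel_id`, `notification_uuid`) come from the Discord action's `send_notification` closure shown in the code below.

```python
# Sketch only: emit an alert.sent analytics event from the Discord action.
# Attribute names are assumptions mirroring other alert providers.
rule = rules[0] if rules else None  # rules comes from the triggering futures
analytics.record(
    "alert.sent",
    provider="discord",
    alert_type="issue_alert",
    alert_id=rule.id if rule else "",
    organization_id=event.organization.id,
    project_id=event.project_id,
    external_id=channel_id,
    notification_uuid=notification_uuid if notification_uuid else "",
)
```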
</issue>
<code>
[start of src/sentry/rules/actions/integrations/base.py]
1 from __future__ import annotations
2
3 import abc
4 from typing import List
5
6 from django import forms
7
8 from sentry.models import OrganizationStatus
9 from sentry.rules.actions import EventAction
10 from sentry.services.hybrid_cloud.integration import (
11 RpcIntegration,
12 RpcOrganizationIntegration,
13 integration_service,
14 )
15
16 INTEGRATION_KEY = "integration"
17
18
19 class IntegrationEventAction(EventAction, abc.ABC):
20 """Intermediate abstract class to help DRY some event actions code."""
21
22 @property
23 @abc.abstractmethod
24 def prompt(self) -> str:
25 pass
26
27 @property
28 @abc.abstractmethod
29 def provider(self) -> str:
30 pass
31
32 @property
33 @abc.abstractmethod
34 def integration_key(self) -> str:
35 pass
36
37 def is_enabled(self) -> bool:
38 enabled: bool = bool(self.get_integrations())
39 return enabled
40
41 def get_integration_name(self) -> str:
42 """Get the integration's name for the label."""
43 integration = self.get_integration()
44 if not integration:
45 return "[removed]"
46
47 _name: str = integration.name
48 return _name
49
50 def get_integrations(self) -> List[RpcIntegration]:
51 return integration_service.get_integrations(
52 organization_id=self.project.organization_id,
53 status=OrganizationStatus.ACTIVE,
54 org_integration_status=OrganizationStatus.ACTIVE,
55 providers=[self.provider],
56 )
57
58 def get_integration_id(self) -> int:
59 integration_id: str | None = self.get_option(self.integration_key)
60 if integration_id:
61 return int(integration_id)
62 return 0
63
64 def get_integration(self) -> RpcIntegration | None:
65 """
66 Uses the required class variables `provider` and `integration_key` with
67 RuleBase.get_option to get the integration object from DB.
68 """
69 for integration in integration_service.get_integrations(
70 organization_id=self.project.organization_id,
71 status=OrganizationStatus.ACTIVE,
72 org_integration_status=OrganizationStatus.ACTIVE,
73 providers=[self.provider],
74 ):
75 if integration.id == self.get_integration_id():
76 return integration
77 return None
78
79 def get_organization_integration(self) -> RpcOrganizationIntegration | None:
80 return integration_service.get_organization_integration(
81 integration_id=self.get_integration_id(), organization_id=self.project.organization_id
82 )
83
84 def get_form_instance(self) -> forms.Form:
85 return self.form_cls(self.data, integrations=self.get_integrations())
86
[end of src/sentry/rules/actions/integrations/base.py]
[start of src/sentry/integrations/discord/analytics.py]
1 from sentry import analytics
2
3
4 class DiscordIntegrationNotificationSent(analytics.Event):
5 type = "integrations.discord.notification_sent"
6
7 attributes = (
8 analytics.Attribute("organization_id"),
9 analytics.Attribute("project_id"),
10 analytics.Attribute("group_id"),
11 analytics.Attribute("notification_uuid"),
12 analytics.Attribute("alert_id", required=False),
13 )
14
15
16 class DiscordIntegrationCommandInteractionReceived(analytics.Event):
17 type = "integrations.discord.command_interaction"
18
19 attributes = (analytics.Attribute("command_name"),)
20
21
22 class DiscordIntegrationIdentityLinked(analytics.Event):
23 type = "integrations.discord.identity_linked"
24
25 attributes = (
26 analytics.Attribute("provider"),
27 analytics.Attribute("actor_id"),
28 analytics.Attribute("actor_type"),
29 )
30
31
32 class DiscordIntegrationIdentityUnlinked(analytics.Event):
33 type = "integrations.discord.identity_unlinked"
34
35 attributes = (
36 analytics.Attribute("provider"),
37 analytics.Attribute("actor_id"),
38 analytics.Attribute("actor_type"),
39 )
40
41
42 class DiscordIntegrationMessageInteractionReceived(analytics.Event):
43 type = "integrations.discord.message_interaction"
44
45 attributes = (analytics.Attribute("custom_id"),)
46
47
48 class DiscordIntegrationAssign(analytics.Event):
49 type = "integrations.discord.assign"
50
51 attributes = (analytics.Attribute("actor_id"),)
52
53
54 class DiscordIntegrationStatus(analytics.Event):
55 type = "integrations.discord.status"
56
57 attributes = (
58 analytics.Attribute("organization_id"),
59 analytics.Attribute("user_id"),
60 analytics.Attribute("status"),
61 )
62
63
64 analytics.register(DiscordIntegrationNotificationSent)
65 analytics.register(DiscordIntegrationCommandInteractionReceived)
66 analytics.register(DiscordIntegrationIdentityLinked)
67 analytics.register(DiscordIntegrationIdentityUnlinked)
68 analytics.register(DiscordIntegrationMessageInteractionReceived)
69 analytics.register(DiscordIntegrationAssign)
70 analytics.register(DiscordIntegrationStatus)
71
[end of src/sentry/integrations/discord/analytics.py]
[start of src/sentry/integrations/discord/actions/notification.py]
1 from typing import Any, Generator, Optional, Sequence
2
3 from sentry import analytics, features
4 from sentry.eventstore.models import GroupEvent
5 from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm
6 from sentry.integrations.discord.client import DiscordClient
7 from sentry.integrations.discord.message_builder.issues import DiscordIssuesMessageBuilder
8 from sentry.rules.actions import IntegrationEventAction
9 from sentry.rules.base import CallbackFuture, EventState
10 from sentry.shared_integrations.exceptions.base import ApiError
11 from sentry.types.rules import RuleFuture
12 from sentry.utils import metrics
13
14
15 class DiscordNotifyServiceAction(IntegrationEventAction):
16 id = "sentry.integrations.discord.notify_action.DiscordNotifyServiceAction"
17 form_cls = DiscordNotifyServiceForm
18 label = "Send a notification to the {server} Discord server in the channel with ID: {channel_id} and show tags {tags} in the notification."
19 prompt = "Send a Discord notification"
20 provider = "discord"
21 integration_key = "server"
22
23 def __init__(self, *args: Any, **kwargs: Any) -> None:
24 super().__init__(*args, **kwargs)
25 self.form_fields = {
26 "server": {
27 "type": "choice",
28 "choices": [(i.id, i.name) for i in self.get_integrations()],
29 },
30 "channel_id": {"type": "string", "placeholder": "e.g., 1134274732116676679"},
31 "tags": {"type": "string", "placeholder": "e.g., environment,user,my_tag"},
32 }
33
34 def after(
35 self, event: GroupEvent, state: EventState, notification_uuid: Optional[str] = None
36 ) -> Generator[CallbackFuture, None, None]:
37 channel_id = self.get_option("channel_id")
38 tags = set(self.get_tags_list())
39
40 integration = self.get_integration()
41 if not integration:
42 # Integration removed, but rule still active
43 return
44
45 def send_notification(event: GroupEvent, futures: Sequence[RuleFuture]) -> None:
46 if not features.has(
47 "organizations:integrations-discord-notifications", event.organization
48 ):
49 return
50
51 rules = [f.rule for f in futures]
52 message = DiscordIssuesMessageBuilder(event.group, event=event, tags=tags, rules=rules)
53
54 client = DiscordClient(integration_id=integration.id)
55 try:
56 client.send_message(channel_id, message, notification_uuid=notification_uuid)
57 except ApiError as e:
58 self.logger.error(
59 "rule.fail.discord_post",
60 extra={
61 "error": str(e),
62 "project_id": event.project_id,
63 "event_id": event.event_id,
64 "guild_id": integration.external_id,
65 "channel_id": channel_id,
66 },
67 )
68
69 key = f"discord:{integration.id}:{channel_id}"
70
71 analytics.record(
72 "integrations.discord.notification_sent",
73 organization_id=event.organization.id,
74 project_id=event.project_id,
75 group_id=event.group_id,
76 notification_uuid=notification_uuid if notification_uuid else "",
77 )
78 metrics.incr("notifications.sent", instance="discord.notifications", skip_internal=False)
79 yield self.future(send_notification, key=key)
80
81 def render_label(self) -> str:
82 tags = self.get_tags_list()
83
84 return self.label.format(
85 server=self.get_integration_name(),
86 channel_id=self.get_option("channel_id"),
87 tags="[{}]".format(", ".join(tags)),
88 )
89
90 def get_tags_list(self) -> Sequence[str]:
91 return [s.strip() for s in self.get_option("tags", "").split(",")]
92
93 def get_form_instance(self) -> Any:
94 return self.form_cls(self.data, integrations=self.get_integrations())
95
[end of src/sentry/integrations/discord/actions/notification.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/integrations/discord/actions/notification.py b/src/sentry/integrations/discord/actions/notification.py
--- a/src/sentry/integrations/discord/actions/notification.py
+++ b/src/sentry/integrations/discord/actions/notification.py
@@ -1,6 +1,6 @@
from typing import Any, Generator, Optional, Sequence
-from sentry import analytics, features
+from sentry import features
from sentry.eventstore.models import GroupEvent
from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm
from sentry.integrations.discord.client import DiscordClient
@@ -65,16 +65,11 @@
"channel_id": channel_id,
},
)
+ rule = rules[0] if rules else None
+ self.record_notification_sent(event, channel_id, rule, notification_uuid)
key = f"discord:{integration.id}:{channel_id}"
- analytics.record(
- "integrations.discord.notification_sent",
- organization_id=event.organization.id,
- project_id=event.project_id,
- group_id=event.group_id,
- notification_uuid=notification_uuid if notification_uuid else "",
- )
metrics.incr("notifications.sent", instance="discord.notifications", skip_internal=False)
yield self.future(send_notification, key=key)
diff --git a/src/sentry/integrations/discord/analytics.py b/src/sentry/integrations/discord/analytics.py
--- a/src/sentry/integrations/discord/analytics.py
+++ b/src/sentry/integrations/discord/analytics.py
@@ -7,6 +7,7 @@
attributes = (
analytics.Attribute("organization_id"),
analytics.Attribute("project_id"),
+ analytics.Attribute("category"),
analytics.Attribute("group_id"),
analytics.Attribute("notification_uuid"),
analytics.Attribute("alert_id", required=False),
diff --git a/src/sentry/rules/actions/integrations/base.py b/src/sentry/rules/actions/integrations/base.py
--- a/src/sentry/rules/actions/integrations/base.py
+++ b/src/sentry/rules/actions/integrations/base.py
@@ -5,7 +5,9 @@
from django import forms
-from sentry.models import OrganizationStatus
+from sentry import analytics
+from sentry.eventstore.models import GroupEvent
+from sentry.models import OrganizationStatus, Rule
from sentry.rules.actions import EventAction
from sentry.services.hybrid_cloud.integration import (
RpcIntegration,
@@ -83,3 +85,31 @@
def get_form_instance(self) -> forms.Form:
return self.form_cls(self.data, integrations=self.get_integrations())
+
+ def record_notification_sent(
+ self,
+ event: GroupEvent,
+ external_id: str,
+ rule: Rule | None = None,
+ notification_uuid: str | None = None,
+ ) -> None:
+ # Currently these actions can only be triggered by issue alerts
+ analytics.record(
+ f"integrations.{self.provider}.notification_sent",
+ category="issue_alert",
+ organization_id=event.organization.id,
+ project_id=event.project_id,
+ group_id=event.group_id,
+ notification_uuid=notification_uuid if notification_uuid else "",
+ alert_id=rule.id if rule else None,
+ )
+ analytics.record(
+ "alert.sent",
+ provider=self.provider,
+ alert_id=rule.id if rule else "",
+ alert_type="issue_alert",
+ organization_id=event.organization.id,
+ project_id=event.project_id,
+ external_id=external_id,
+ notification_uuid=notification_uuid if notification_uuid else "",
+ )
| {"golden_diff": "diff --git a/src/sentry/integrations/discord/actions/notification.py b/src/sentry/integrations/discord/actions/notification.py\n--- a/src/sentry/integrations/discord/actions/notification.py\n+++ b/src/sentry/integrations/discord/actions/notification.py\n@@ -1,6 +1,6 @@\n from typing import Any, Generator, Optional, Sequence\n \n-from sentry import analytics, features\n+from sentry import features\n from sentry.eventstore.models import GroupEvent\n from sentry.integrations.discord.actions.form import DiscordNotifyServiceForm\n from sentry.integrations.discord.client import DiscordClient\n@@ -65,16 +65,11 @@\n \"channel_id\": channel_id,\n },\n )\n+ rule = rules[0] if rules else None\n+ self.record_notification_sent(event, channel_id, rule, notification_uuid)\n \n key = f\"discord:{integration.id}:{channel_id}\"\n \n- analytics.record(\n- \"integrations.discord.notification_sent\",\n- organization_id=event.organization.id,\n- project_id=event.project_id,\n- group_id=event.group_id,\n- notification_uuid=notification_uuid if notification_uuid else \"\",\n- )\n metrics.incr(\"notifications.sent\", instance=\"discord.notifications\", skip_internal=False)\n yield self.future(send_notification, key=key)\n \ndiff --git a/src/sentry/integrations/discord/analytics.py b/src/sentry/integrations/discord/analytics.py\n--- a/src/sentry/integrations/discord/analytics.py\n+++ b/src/sentry/integrations/discord/analytics.py\n@@ -7,6 +7,7 @@\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"project_id\"),\n+ analytics.Attribute(\"category\"),\n analytics.Attribute(\"group_id\"),\n analytics.Attribute(\"notification_uuid\"),\n analytics.Attribute(\"alert_id\", required=False),\ndiff --git a/src/sentry/rules/actions/integrations/base.py b/src/sentry/rules/actions/integrations/base.py\n--- a/src/sentry/rules/actions/integrations/base.py\n+++ b/src/sentry/rules/actions/integrations/base.py\n@@ -5,7 +5,9 @@\n \n from django import forms\n \n-from sentry.models import OrganizationStatus\n+from sentry import analytics\n+from sentry.eventstore.models import GroupEvent\n+from sentry.models import OrganizationStatus, Rule\n from sentry.rules.actions import EventAction\n from sentry.services.hybrid_cloud.integration import (\n RpcIntegration,\n@@ -83,3 +85,31 @@\n \n def get_form_instance(self) -> forms.Form:\n return self.form_cls(self.data, integrations=self.get_integrations())\n+\n+ def record_notification_sent(\n+ self,\n+ event: GroupEvent,\n+ external_id: str,\n+ rule: Rule | None = None,\n+ notification_uuid: str | None = None,\n+ ) -> None:\n+ # Currently these actions can only be triggered by issue alerts\n+ analytics.record(\n+ f\"integrations.{self.provider}.notification_sent\",\n+ category=\"issue_alert\",\n+ organization_id=event.organization.id,\n+ project_id=event.project_id,\n+ group_id=event.group_id,\n+ notification_uuid=notification_uuid if notification_uuid else \"\",\n+ alert_id=rule.id if rule else None,\n+ )\n+ analytics.record(\n+ \"alert.sent\",\n+ provider=self.provider,\n+ alert_id=rule.id if rule else \"\",\n+ alert_type=\"issue_alert\",\n+ organization_id=event.organization.id,\n+ project_id=event.project_id,\n+ external_id=external_id,\n+ notification_uuid=notification_uuid if notification_uuid else \"\",\n+ )\n", "issue": "Discord `alert.sent` event\nstart sending `alert.sent` analytic events for discord\n\nadd category to existing notification_sent event\n", "before_files": [{"content": "from __future__ import annotations\n\nimport 
abc\nfrom typing import List\n\nfrom django import forms\n\nfrom sentry.models import OrganizationStatus\nfrom sentry.rules.actions import EventAction\nfrom sentry.services.hybrid_cloud.integration import (\n RpcIntegration,\n RpcOrganizationIntegration,\n integration_service,\n)\n\nINTEGRATION_KEY = \"integration\"\n\n\nclass IntegrationEventAction(EventAction, abc.ABC):\n \"\"\"Intermediate abstract class to help DRY some event actions code.\"\"\"\n\n @property\n @abc.abstractmethod\n def prompt(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def provider(self) -> str:\n pass\n\n @property\n @abc.abstractmethod\n def integration_key(self) -> str:\n pass\n\n def is_enabled(self) -> bool:\n enabled: bool = bool(self.get_integrations())\n return enabled\n\n def get_integration_name(self) -> str:\n \"\"\"Get the integration's name for the label.\"\"\"\n integration = self.get_integration()\n if not integration:\n return \"[removed]\"\n\n _name: str = integration.name\n return _name\n\n def get_integrations(self) -> List[RpcIntegration]:\n return integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n )\n\n def get_integration_id(self) -> int:\n integration_id: str | None = self.get_option(self.integration_key)\n if integration_id:\n return int(integration_id)\n return 0\n\n def get_integration(self) -> RpcIntegration | None:\n \"\"\"\n Uses the required class variables `provider` and `integration_key` with\n RuleBase.get_option to get the integration object from DB.\n \"\"\"\n for integration in integration_service.get_integrations(\n organization_id=self.project.organization_id,\n status=OrganizationStatus.ACTIVE,\n org_integration_status=OrganizationStatus.ACTIVE,\n providers=[self.provider],\n ):\n if integration.id == self.get_integration_id():\n return integration\n return None\n\n def get_organization_integration(self) -> RpcOrganizationIntegration | None:\n return integration_service.get_organization_integration(\n integration_id=self.get_integration_id(), organization_id=self.project.organization_id\n )\n\n def get_form_instance(self) -> forms.Form:\n return self.form_cls(self.data, integrations=self.get_integrations())\n", "path": "src/sentry/rules/actions/integrations/base.py"}, {"content": "from sentry import analytics\n\n\nclass DiscordIntegrationNotificationSent(analytics.Event):\n type = \"integrations.discord.notification_sent\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"project_id\"),\n analytics.Attribute(\"group_id\"),\n analytics.Attribute(\"notification_uuid\"),\n analytics.Attribute(\"alert_id\", required=False),\n )\n\n\nclass DiscordIntegrationCommandInteractionReceived(analytics.Event):\n type = \"integrations.discord.command_interaction\"\n\n attributes = (analytics.Attribute(\"command_name\"),)\n\n\nclass DiscordIntegrationIdentityLinked(analytics.Event):\n type = \"integrations.discord.identity_linked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationIdentityUnlinked(analytics.Event):\n type = \"integrations.discord.identity_unlinked\"\n\n attributes = (\n analytics.Attribute(\"provider\"),\n analytics.Attribute(\"actor_id\"),\n analytics.Attribute(\"actor_type\"),\n )\n\n\nclass DiscordIntegrationMessageInteractionReceived(analytics.Event):\n type = 
\"integrations.discord.message_interaction\"\n\n attributes = (analytics.Attribute(\"custom_id\"),)\n\n\nclass DiscordIntegrationAssign(analytics.Event):\n type = \"integrations.discord.assign\"\n\n attributes = (analytics.Attribute(\"actor_id\"),)\n\n\nclass DiscordIntegrationStatus(analytics.Event):\n type = \"integrations.discord.status\"\n\n attributes = (\n analytics.Attribute(\"organization_id\"),\n analytics.Attribute(\"user_id\"),\n analytics.Attribute(\"status\"),\n )\n\n\nanalytics.register(DiscordIntegrationNotificationSent)\nanalytics.register(DiscordIntegrationCommandInteractionReceived)\nanalytics.register(DiscordIntegrationIdentityLinked)\nanalytics.register(DiscordIntegrationIdentityUnlinked)\nanalytics.register(DiscordIntegrationMessageInteractionReceived)\nanalytics.register(DiscordIntegrationAssign)\nanalytics.register(DiscordIntegrationStatus)\n", "path": "src/sentry/integrations/discord/analytics.py"}, {"content": "from typing import Any, Generator, Optional, Sequence\n\nfrom sentry import analytics, features\nfrom sentry.eventstore.models import GroupEvent\nfrom sentry.integrations.discord.actions.form import DiscordNotifyServiceForm\nfrom sentry.integrations.discord.client import DiscordClient\nfrom sentry.integrations.discord.message_builder.issues import DiscordIssuesMessageBuilder\nfrom sentry.rules.actions import IntegrationEventAction\nfrom sentry.rules.base import CallbackFuture, EventState\nfrom sentry.shared_integrations.exceptions.base import ApiError\nfrom sentry.types.rules import RuleFuture\nfrom sentry.utils import metrics\n\n\nclass DiscordNotifyServiceAction(IntegrationEventAction):\n id = \"sentry.integrations.discord.notify_action.DiscordNotifyServiceAction\"\n form_cls = DiscordNotifyServiceForm\n label = \"Send a notification to the {server} Discord server in the channel with ID: {channel_id} and show tags {tags} in the notification.\"\n prompt = \"Send a Discord notification\"\n provider = \"discord\"\n integration_key = \"server\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n super().__init__(*args, **kwargs)\n self.form_fields = {\n \"server\": {\n \"type\": \"choice\",\n \"choices\": [(i.id, i.name) for i in self.get_integrations()],\n },\n \"channel_id\": {\"type\": \"string\", \"placeholder\": \"e.g., 1134274732116676679\"},\n \"tags\": {\"type\": \"string\", \"placeholder\": \"e.g., environment,user,my_tag\"},\n }\n\n def after(\n self, event: GroupEvent, state: EventState, notification_uuid: Optional[str] = None\n ) -> Generator[CallbackFuture, None, None]:\n channel_id = self.get_option(\"channel_id\")\n tags = set(self.get_tags_list())\n\n integration = self.get_integration()\n if not integration:\n # Integration removed, but rule still active\n return\n\n def send_notification(event: GroupEvent, futures: Sequence[RuleFuture]) -> None:\n if not features.has(\n \"organizations:integrations-discord-notifications\", event.organization\n ):\n return\n\n rules = [f.rule for f in futures]\n message = DiscordIssuesMessageBuilder(event.group, event=event, tags=tags, rules=rules)\n\n client = DiscordClient(integration_id=integration.id)\n try:\n client.send_message(channel_id, message, notification_uuid=notification_uuid)\n except ApiError as e:\n self.logger.error(\n \"rule.fail.discord_post\",\n extra={\n \"error\": str(e),\n \"project_id\": event.project_id,\n \"event_id\": event.event_id,\n \"guild_id\": integration.external_id,\n \"channel_id\": channel_id,\n },\n )\n\n key = f\"discord:{integration.id}:{channel_id}\"\n\n 
analytics.record(\n \"integrations.discord.notification_sent\",\n organization_id=event.organization.id,\n project_id=event.project_id,\n group_id=event.group_id,\n notification_uuid=notification_uuid if notification_uuid else \"\",\n )\n metrics.incr(\"notifications.sent\", instance=\"discord.notifications\", skip_internal=False)\n yield self.future(send_notification, key=key)\n\n def render_label(self) -> str:\n tags = self.get_tags_list()\n\n return self.label.format(\n server=self.get_integration_name(),\n channel_id=self.get_option(\"channel_id\"),\n tags=\"[{}]\".format(\", \".join(tags)),\n )\n\n def get_tags_list(self) -> Sequence[str]:\n return [s.strip() for s in self.get_option(\"tags\", \"\").split(\",\")]\n\n def get_form_instance(self) -> Any:\n return self.form_cls(self.data, integrations=self.get_integrations())\n", "path": "src/sentry/integrations/discord/actions/notification.py"}]} | 2,845 | 785 |
gh_patches_debug_7058 | rasdani/github-patches | git_diff | Kinto__kinto-1139 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consistency on PUT with mandatory schema fields
While working on #790, I realized that there is something unclear in our specifications.
Currently, if a resource has a mandatory field (e.g. the groups' `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` field has to be provided.
On other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.
But I believe we should make every resource behave the same way.
For example, when we implement the editing of permissions in Kinto-admin, we don't want to have to pass `data` if it was not changed.
Two solutions:
- Add a default value (`[]`) for the groups' `members` attribute (_my preferred one, trivial and not absurd; see the sketch after this list_)
- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)
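A sketch of what the first option could look like on the group schema, reusing `validate_member` and the imports from `kinto/views/groups.py` shown below; this is an illustration, not necessarily the final patch:

```python
import colander

from kinto.core import resource


class GroupSchema(resource.ResourceSchema):
    members = colander.SchemaNode(
        colander.Sequence(),
        colander.SchemaNode(colander.String(), validator=validate_member),
        missing=[],  # an omitted "members" deserializes to an empty list
    )
```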
</issue>
<code>
[start of kinto/views/groups.py]
1 import colander
2
3 from kinto.core import resource, utils
4 from kinto.core.events import ResourceChanged, ACTIONS
5 from pyramid.events import subscriber
6
7
8 def validate_member(node, member):
9 if member.startswith('/buckets/') or member == 'system.Everyone':
10 raise colander.Invalid(node, "'{}' is not a valid user ID.".format(member))
11
12
13 class GroupSchema(resource.ResourceSchema):
14 members = colander.SchemaNode(colander.Sequence(),
15 colander.SchemaNode(colander.String(),
16 validator=validate_member))
17
18
19 @resource.register(name='group',
20 collection_path='/buckets/{{bucket_id}}/groups',
21 record_path='/buckets/{{bucket_id}}/groups/{{id}}')
22 class Group(resource.ShareableResource):
23 schema = GroupSchema
24
25 def get_parent_id(self, request):
26 bucket_id = request.matchdict['bucket_id']
27 parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)
28 return parent_id
29
30
31 @subscriber(ResourceChanged,
32 for_resources=('group',),
33 for_actions=(ACTIONS.DELETE,))
34 def on_groups_deleted(event):
35 """Some groups were deleted, remove them from users principals.
36 """
37 permission_backend = event.request.registry.permission
38
39 for change in event.impacted_records:
40 group = change['old']
41 bucket_id = event.payload['bucket_id']
42 group_uri = utils.instance_uri(event.request, 'group',
43 bucket_id=bucket_id,
44 id=group['id'])
45
46 permission_backend.remove_principal(group_uri)
47
48
49 @subscriber(ResourceChanged,
50 for_resources=('group',),
51 for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))
52 def on_groups_changed(event):
53 """Some groups were changed, update users principals.
54 """
55 permission_backend = event.request.registry.permission
56
57 for change in event.impacted_records:
58 if 'old' in change:
59 existing_record_members = set(change['old'].get('members', []))
60 else:
61 existing_record_members = set()
62
63 group = change['new']
64 group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],
65 **event.payload)
66 new_record_members = set(group.get('members', []))
67 new_members = new_record_members - existing_record_members
68 removed_members = existing_record_members - new_record_members
69
70 for member in new_members:
71 # Add the group to the member principal.
72 permission_backend.add_user_principal(member, group_uri)
73
74 for member in removed_members:
75 # Remove the group from the member principal.
76 permission_backend.remove_user_principal(member, group_uri)
77
[end of kinto/views/groups.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/groups.py b/kinto/views/groups.py
--- a/kinto/views/groups.py
+++ b/kinto/views/groups.py
@@ -13,7 +13,8 @@
class GroupSchema(resource.ResourceSchema):
members = colander.SchemaNode(colander.Sequence(),
colander.SchemaNode(colander.String(),
- validator=validate_member))
+ validator=validate_member),
+ missing=[])
@resource.register(name='group',
| {"golden_diff": "diff --git a/kinto/views/groups.py b/kinto/views/groups.py\n--- a/kinto/views/groups.py\n+++ b/kinto/views/groups.py\n@@ -13,7 +13,8 @@\n class GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n- validator=validate_member))\n+ validator=validate_member),\n+ missing=[])\n \n \n @resource.register(name='group',\n", "issue": "Consistency on PUT with mandatory schema fields\nWhile working on #790 I realize that there is something not clear in our specifications.\n\nCurrently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.\n\nOn other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.\n\nBut, I believe we should make every resources behave the same way.\n\nFor example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.\n\nTwo solutions:\n- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)\n- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)\n\nConsistency on PUT with mandatory schema fields\nWhile working on #790 I realize that there is something not clear in our specifications.\n\nCurrently, if a resource has a mandatory field (eg. groups `members`), then we cannot do a `PUT` with just the `permissions` values. This is because a PUT can lead to a creation, and the `members` fields has to be provided.\n\nOn other resources, which have no mandatory field, it is perfectly possible to only provide `permissions`.\n\nBut, I believe we should make every resources behave the same way.\n\nFor example, when we'll implement the edition of permissions in Kinto-admin, we don't want to have to pass the `data` if it was not changed.\n\nTwo solutions:\n- Add a default value (`[]`) for the groups members attribute (_my prefered one, trivial and not absurd_)\n- Allow `data` to be omitted only when the `PUT` replaces an existing object (_more complex to implement, but would work for any resource with mandatory fields_)\n\n", "before_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\ndef validate_member(node, member):\n if member.startswith('/buckets/') or member == 'system.Everyone':\n raise colander.Invalid(node, \"'{}' is not a valid user ID.\".format(member))\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n validator=validate_member))\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n schema = GroupSchema\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = 
event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}]} | 1,651 | 98 |
gh_patches_debug_43621 | rasdani/github-patches | git_diff | sktime__sktime-5642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] convert to period bug in `_StatsModelsAdapter`
**Describe the bug**
For panel data with datetime indexes, converting the datetime index to a period index causes an error in the predict stage of the `_StatsModelsAdapter` models.
**To Reproduce**
```python
from sktime.datasets import load_airline
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
import pandas as pd
y = load_airline()
# create dummy index with hourly timestamps and panel data by hour of day
y.index = pd.date_range(start='1960-01-01', periods=len(y.index), freq='H')
y.index.names = ["datetime"]
y.name = "passengers"
y = y.to_frame()
y['hour_of_day'] = y.index.hour
y = y.reset_index().set_index(['hour_of_day', 'datetime']).sort_index()
forecaster = ExponentialSmoothing(
trend='add', sp=1
)
forecaster.fit(y)
forecaster.predict(fh=[1])
```
**Expected behavior**
For each panel, predict one step ahead.
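The mismatch appears to be between the index type used for the forecasting horizon and the index type of the values returned by statsmodels. The snippet below only illustrates the two index types involved; that statsmodels actually hands back a `PeriodIndex` in this setup is an assumption inferred from the issue title, not something verified here.

```python
import pandas as pd

fh_abs = pd.date_range("1960-01-01", periods=3, freq="H")  # what the forecaster looks up
pred_index = fh_abs.to_period("H")                         # what the fitted model may return
print(fh_abs.dtype, pred_index.dtype)  # datetime64[ns] vs period[H]
# Label-based lookups such as y_pred.loc[fh_abs] fail when the two index types differ.
```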
**Additional context**
I will address this issue 👍
**Versions**
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
Python dependencies:
pip: 23.1.2
sktime: 0.24.1
sklearn: 1.2.2
skbase: 0.6.1
numpy: 1.24.3
scipy: 1.10.1
pandas: 2.0.2
matplotlib: None
joblib: 1.2.0
numba: None
statsmodels: 0.14.0
pmdarima: 2.0.3
statsforecast: None
tsfresh: None
tslearn: None
torch: None
tensorflow: None
tensorflow_probability: None
</issue>
<code>
[start of sktime/forecasting/base/adapters/_statsmodels.py]
1 # !/usr/bin/env python3 -u
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Implements adapter for statsmodels forecasters to be used in sktime framework."""
4
5 __author__ = ["mloning"]
6 __all__ = ["_StatsModelsAdapter"]
7
8 import inspect
9
10 import numpy as np
11 import pandas as pd
12
13 from sktime.forecasting.base import BaseForecaster
14 from sktime.utils.warnings import warn
15
16
17 class _StatsModelsAdapter(BaseForecaster):
18 """Base class for interfacing statsmodels forecasting algorithms."""
19
20 _fitted_param_names = ()
21 _tags = {
22 "ignores-exogeneous-X": True,
23 "requires-fh-in-fit": False,
24 "handles-missing-data": False,
25 "python_dependencies": "statsmodels",
26 }
27
28 def __init__(self, random_state=None):
29 self._forecaster = None
30 self.random_state = random_state
31 self._fitted_forecaster = None
32 super().__init__()
33
34 def _fit(self, y, X, fh):
35 """Fit to training data.
36
37 Parameters
38 ----------
39 y : pd.Series
40 Target time series to which to fit the forecaster.
41 fh : int, list or np.array, optional (default=None)
42 The forecasters horizon with the steps ahead to to predict.
43 X : pd.DataFrame, optional (default=None)
44 Exogenous variables are ignored
45
46 Returns
47 -------
48 self : returns an instance of self.
49 """
50 # statsmodels does not support the pd.Int64Index as required,
51 # so we coerce them here to pd.RangeIndex
52 if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):
53 y, X = _coerce_int_to_range_index(y, X)
54 self._fit_forecaster(y, X)
55 return self
56
57 def _fit_forecaster(self, y_train, X_train=None):
58 """Log used internally in fit."""
59 raise NotImplementedError("abstract method")
60
61 def _update(self, y, X=None, update_params=True):
62 """Update used internally in update."""
63 if update_params or self.is_composite():
64 super()._update(y, X, update_params=update_params)
65 else:
66 if not hasattr(self._fitted_forecaster, "append"):
67 warn(
68 f"NotImplementedWarning: {self.__class__.__name__} "
69 f"can not accept new data when update_params=False. "
70 f"Call with update_params=True to refit with new data.",
71 obj=self,
72 )
73 else:
74 # only append unseen data to fitted forecaster
75 index_diff = y.index.difference(
76 self._fitted_forecaster.fittedvalues.index
77 )
78 if index_diff.isin(y.index).all():
79 y = y.loc[index_diff]
80 self._fitted_forecaster = self._fitted_forecaster.append(y)
81
82 def _predict(self, fh, X):
83 """Make forecasts.
84
85 Parameters
86 ----------
87 fh : ForecastingHorizon
88 The forecasters horizon with the steps ahead to to predict.
89 Default is one-step ahead forecast,
90 i.e. np.array([1])
91 X : pd.DataFrame, optional (default=None)
92 Exogenous variables are ignored.
93
94 Returns
95 -------
96 y_pred : pd.Series
97 Returns series of predicted values.
98 """
99 # statsmodels requires zero-based indexing starting at the
100 # beginning of the training series when passing integers
101 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
102 fh_abs = fh.to_absolute_index(self.cutoff)
103
104 # bug fix for evaluate function as test_plus_train indices are passed
105 # statsmodels exog must contain test indices only.
106 # For discussion see https://github.com/sktime/sktime/issues/3830
107 if X is not None:
108 ind_drop = self._X.index
109 X = X.loc[~X.index.isin(ind_drop)]
110 # Entire range of the forecast horizon is required
111 X = X[: fh_abs[-1]]
112
113 if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
114 y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
115 else:
116 y_pred = self._fitted_forecaster.predict(start=start, end=end)
117
118 # statsmodels forecasts all periods from start to end of forecasting
119 # horizon, but only return given time points in forecasting horizon
120 y_pred = y_pred.loc[fh_abs]
121 # ensure that name is not added nor removed
122 # otherwise this may upset conversion to pd.DataFrame
123 y_pred.name = self._y.name
124 return y_pred
125
126 @staticmethod
127 def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:
128 """Construct confidence interval at specified `alpha` for each timestep.
129
130 Parameters
131 ----------
132 prediction_results : PredictionResults
133 results class, as returned by ``self._fitted_forecaster.get_prediction``
134 alpha : float
135 one minus nominal coverage
136
137 Returns
138 -------
139 pd.DataFrame
140 confidence intervals at each timestep
141
142 The dataframe must have at least two columns ``lower`` and ``upper``, and
143 the row indices must be integers relative to ``self.cutoff``. Order of
144 columns do not matter, and row indices must be a superset of relative
145 integer horizon of ``fh``.
146 """
147 del prediction_results, alpha # tools like ``vulture`` may complain as unused
148
149 raise NotImplementedError("abstract method")
150
151 def _predict_interval(self, fh, X, coverage):
152 """Compute/return prediction interval forecasts.
153
154 private _predict_interval containing the core logic,
155 called from predict_interval and default _predict_quantiles
156
157 Parameters
158 ----------
159 fh : guaranteed to be ForecastingHorizon
160 The forecasting horizon with the steps ahead to to predict.
161 X : optional (default=None)
162 guaranteed to be of a type in self.get_tag("X_inner_mtype")
163 Exogeneous time series to predict from.
164 coverage : float or list of float, optional (default=0.95)
165 nominal coverage(s) of predictive interval(s)
166
167 Returns
168 -------
169 pred_int : pd.DataFrame
170 Column has multi-index: first level is variable name from y in fit,
171 second level coverage fractions for which intervals were computed.
172 in the same order as in input `coverage`.
173 Third level is string "lower" or "upper", for lower/upper interval end.
174 Row index is fh, with additional (upper) levels equal to instance levels,
175 from y seen in fit, if y_inner_mtype is Panel or Hierarchical.
176 Entries are forecasts of lower/upper interval end,
177 for var in col index, at nominal coverage in second col index,
178 lower/upper depending on third col index, for the row index.
179 Upper/lower interval end forecasts are equivalent to
180 quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.
181 """
182 implements_interval_adapter = self._has_implementation_of("_extract_conf_int")
183 implements_quantiles = self._has_implementation_of("_predict_quantiles")
184
185 if not implements_interval_adapter and implements_quantiles:
186 return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
187
188 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
189 valid_indices = fh.to_absolute(self.cutoff).to_pandas()
190
191 get_prediction_arguments = {"start": start, "end": end}
192
193 if hasattr(self, "random_state"):
194 get_prediction_arguments["random_state"] = self.random_state
195
196 if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(
197 "exog"
198 ):
199 get_prediction_arguments["exog"] = X
200
201 prediction_results = self._fitted_forecaster.get_prediction(
202 **get_prediction_arguments
203 )
204
205 var_names = self._get_varnames()
206 var_name = var_names[0]
207 columns = pd.MultiIndex.from_product([var_names, coverage, ["lower", "upper"]])
208 pred_int = pd.DataFrame(index=valid_indices, columns=columns)
209
210 for c in coverage:
211 pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))
212
213 pred_int[(var_name, c, "lower")] = pred_statsmodels.loc[
214 valid_indices, "lower"
215 ]
216 pred_int[(var_name, c, "upper")] = pred_statsmodels.loc[
217 valid_indices, "upper"
218 ]
219
220 return pred_int
221
222 def _get_fitted_params(self):
223 """Get fitted parameters.
224
225 Returns
226 -------
227 fitted_params : dict
228 """
229 fitted_params = {}
230 for name in self._get_fitted_param_names():
231 if name in ["aic", "aicc", "bic", "hqic"]:
232 fitted_params[name] = getattr(self._fitted_forecaster, name, None)
233 else:
234 fitted_params[name] = self._fitted_forecaster.params.get(name)
235 return fitted_params
236
237 def _get_fitted_param_names(self):
238 """Get names of fitted parameters."""
239 return self._fitted_param_names
240
241
242 def _coerce_int_to_range_index(y, X=None):
243 new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)
244 try:
245 np.testing.assert_array_equal(y.index, new_index)
246 except AssertionError:
247 raise ValueError(
248 "Coercion of integer pd.Index to pd.RangeIndex "
249 "failed. Please provide `y_train` with a "
250 "pd.RangeIndex."
251 )
252 y.index = new_index
253 if X is not None:
254 X.index = new_index
255 return y, X
256
[end of sktime/forecasting/base/adapters/_statsmodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py
--- a/sktime/forecasting/base/adapters/_statsmodels.py
+++ b/sktime/forecasting/base/adapters/_statsmodels.py
@@ -2,7 +2,7 @@
# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
"""Implements adapter for statsmodels forecasters to be used in sktime framework."""
-__author__ = ["mloning"]
+__author__ = ["mloning", "ciaran-g"]
__all__ = ["_StatsModelsAdapter"]
import inspect
@@ -99,7 +99,7 @@
# statsmodels requires zero-based indexing starting at the
# beginning of the training series when passing integers
start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- fh_abs = fh.to_absolute_index(self.cutoff)
+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
# bug fix for evaluate function as test_plus_train indices are passed
# statsmodels exog must contain test indices only.
@@ -108,7 +108,7 @@
ind_drop = self._X.index
X = X.loc[~X.index.isin(ind_drop)]
# Entire range of the forecast horizon is required
- X = X[: fh_abs[-1]]
+ X = X.iloc[: (fh_int[-1] + 1)] # include end point
if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
@@ -117,7 +117,9 @@
# statsmodels forecasts all periods from start to end of forecasting
# horizon, but only return given time points in forecasting horizon
- y_pred = y_pred.loc[fh_abs]
+ # if fh[0] > 1 steps ahead of cutoff then make relative to `start`
+ fh_int = fh_int - fh_int[0]
+ y_pred = y_pred.iloc[fh_int]
# ensure that name is not added nor removed
# otherwise this may upset conversion to pd.DataFrame
y_pred.name = self._y.name
@@ -186,7 +188,9 @@
return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
- valid_indices = fh.to_absolute(self.cutoff).to_pandas()
+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
+ # if fh > 1 steps ahead of cutoff
+ fh_int = fh_int - fh_int[0]
get_prediction_arguments = {"start": start, "end": end}
@@ -205,17 +209,15 @@
var_names = self._get_varnames()
var_name = var_names[0]
columns = pd.MultiIndex.from_product([var_names, coverage, ["lower", "upper"]])
- pred_int = pd.DataFrame(index=valid_indices, columns=columns)
+ preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))
+ preds_index = preds_index.iloc[fh_int].index
+ pred_int = pd.DataFrame(index=preds_index, columns=columns)
for c in coverage:
pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))
- pred_int[(var_name, c, "lower")] = pred_statsmodels.loc[
- valid_indices, "lower"
- ]
- pred_int[(var_name, c, "upper")] = pred_statsmodels.loc[
- valid_indices, "upper"
- ]
+ pred_int[(var_name, c, "lower")] = pred_statsmodels.iloc[fh_int]["lower"]
+ pred_int[(var_name, c, "upper")] = pred_statsmodels.iloc[fh_int]["upper"]
return pred_int
| {"golden_diff": "diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py\n--- a/sktime/forecasting/base/adapters/_statsmodels.py\n+++ b/sktime/forecasting/base/adapters/_statsmodels.py\n@@ -2,7 +2,7 @@\n # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n \"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n \n-__author__ = [\"mloning\"]\n+__author__ = [\"mloning\", \"ciaran-g\"]\n __all__ = [\"_StatsModelsAdapter\"]\n \n import inspect\n@@ -99,7 +99,7 @@\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n- fh_abs = fh.to_absolute_index(self.cutoff)\n+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n \n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n@@ -108,7 +108,7 @@\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n- X = X[: fh_abs[-1]]\n+ X = X.iloc[: (fh_int[-1] + 1)] # include end point\n \n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n@@ -117,7 +117,9 @@\n \n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n- y_pred = y_pred.loc[fh_abs]\n+ # if fh[0] > 1 steps ahead of cutoff then make relative to `start`\n+ fh_int = fh_int - fh_int[0]\n+ y_pred = y_pred.iloc[fh_int]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n@@ -186,7 +188,9 @@\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n \n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n- valid_indices = fh.to_absolute(self.cutoff).to_pandas()\n+ fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n+ # if fh > 1 steps ahead of cutoff\n+ fh_int = fh_int - fh_int[0]\n \n get_prediction_arguments = {\"start\": start, \"end\": end}\n \n@@ -205,17 +209,15 @@\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n- pred_int = pd.DataFrame(index=valid_indices, columns=columns)\n+ preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))\n+ preds_index = preds_index.iloc[fh_int].index\n+ pred_int = pd.DataFrame(index=preds_index, columns=columns)\n \n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n \n- pred_int[(var_name, c, \"lower\")] = pred_statsmodels.loc[\n- valid_indices, \"lower\"\n- ]\n- pred_int[(var_name, c, \"upper\")] = pred_statsmodels.loc[\n- valid_indices, \"upper\"\n- ]\n+ pred_int[(var_name, c, \"lower\")] = pred_statsmodels.iloc[fh_int][\"lower\"]\n+ pred_int[(var_name, c, \"upper\")] = pred_statsmodels.iloc[fh_int][\"upper\"]\n \n return pred_int\n", "issue": "[BUG] convert to period bug in `_StatsModelsAdapter`\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\nFor panel data with datetime indexes, converting datetimes to period index causes an error in the predict stage in the `_StatsModelsAdapter` models\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd 
a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nfrom sktime.datasets import load_airline\r\nfrom sktime.forecasting.exp_smoothing import ExponentialSmoothing\r\nimport pandas as pd\r\ny = load_airline()\r\n\r\n# create dummy index with hourly timestamps and panel data by hour of day\r\ny.index = pd.date_range(start='1960-01-01', periods=len(y.index), freq='H')\r\ny.index.names = [\"datetime\"]\r\ny.name = \"passengers\"\r\ny = y.to_frame()\r\ny['hour_of_day'] = y.index.hour\r\ny = y.reset_index().set_index(['hour_of_day', 'datetime']).sort_index()\r\n\r\nforecaster = ExponentialSmoothing(\r\n trend='add', sp=1\r\n) \r\nforecaster.fit(y)\r\nforecaster.predict(fh=[1])\r\n\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\n\r\nFor each panel predict one step ahead\r\n\r\n**Additional context**\r\n<!--\r\nAdd any other context about the problem here.\r\n-->\r\n\r\nI will address this issue \ud83d\udc4d \r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n\r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\nPython dependencies:\r\n pip: 23.1.2\r\n sktime: 0.24.1\r\n sklearn: 1.2.2\r\n skbase: 0.6.1\r\n numpy: 1.24.3\r\n scipy: 1.10.1\r\n pandas: 2.0.2\r\n matplotlib: None\r\n joblib: 1.2.0\r\n numba: None\r\n statsmodels: 0.14.0\r\n pmdarima: 2.0.3\r\nstatsforecast: None\r\n tsfresh: None\r\n tslearn: None\r\n torch: None\r\n tensorflow: None\r\ntensorflow_probability: None\r\n\r\n<!-- Thanks for contributing! 
-->\r\n\n", "before_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n\n__author__ = [\"mloning\"]\n__all__ = [\"_StatsModelsAdapter\"]\n\nimport inspect\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.warnings import warn\n\n\nclass _StatsModelsAdapter(BaseForecaster):\n \"\"\"Base class for interfacing statsmodels forecasting algorithms.\"\"\"\n\n _fitted_param_names = ()\n _tags = {\n \"ignores-exogeneous-X\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n \"python_dependencies\": \"statsmodels\",\n }\n\n def __init__(self, random_state=None):\n self._forecaster = None\n self.random_state = random_state\n self._fitted_forecaster = None\n super().__init__()\n\n def _fit(self, y, X, fh):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n # statsmodels does not support the pd.Int64Index as required,\n # so we coerce them here to pd.RangeIndex\n if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):\n y, X = _coerce_int_to_range_index(y, X)\n self._fit_forecaster(y, X)\n return self\n\n def _fit_forecaster(self, y_train, X_train=None):\n \"\"\"Log used internally in fit.\"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, y, X=None, update_params=True):\n \"\"\"Update used internally in update.\"\"\"\n if update_params or self.is_composite():\n super()._update(y, X, update_params=update_params)\n else:\n if not hasattr(self._fitted_forecaster, \"append\"):\n warn(\n f\"NotImplementedWarning: {self.__class__.__name__} \"\n f\"can not accept new data when update_params=False. \"\n f\"Call with update_params=True to refit with new data.\",\n obj=self,\n )\n else:\n # only append unseen data to fitted forecaster\n index_diff = y.index.difference(\n self._fitted_forecaster.fittedvalues.index\n )\n if index_diff.isin(y.index).all():\n y = y.loc[index_diff]\n self._fitted_forecaster = self._fitted_forecaster.append(y)\n\n def _predict(self, fh, X):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : ForecastingHorizon\n The forecasters horizon with the steps ahead to to predict.\n Default is one-step ahead forecast,\n i.e. 
np.array([1])\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored.\n\n Returns\n -------\n y_pred : pd.Series\n Returns series of predicted values.\n \"\"\"\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_abs = fh.to_absolute_index(self.cutoff)\n\n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n # For discussion see https://github.com/sktime/sktime/issues/3830\n if X is not None:\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n X = X[: fh_abs[-1]]\n\n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n else:\n y_pred = self._fitted_forecaster.predict(start=start, end=end)\n\n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n y_pred = y_pred.loc[fh_abs]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n return y_pred\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. 
Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n del prediction_results, alpha # tools like ``vulture`` may complain as unused\n\n raise NotImplementedError(\"abstract method\")\n\n def _predict_interval(self, fh, X, coverage):\n \"\"\"Compute/return prediction interval forecasts.\n\n private _predict_interval containing the core logic,\n called from predict_interval and default _predict_quantiles\n\n Parameters\n ----------\n fh : guaranteed to be ForecastingHorizon\n The forecasting horizon with the steps ahead to to predict.\n X : optional (default=None)\n guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n Exogeneous time series to predict from.\n coverage : float or list of float, optional (default=0.95)\n nominal coverage(s) of predictive interval(s)\n\n Returns\n -------\n pred_int : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level coverage fractions for which intervals were computed.\n in the same order as in input `coverage`.\n Third level is string \"lower\" or \"upper\", for lower/upper interval end.\n Row index is fh, with additional (upper) levels equal to instance levels,\n from y seen in fit, if y_inner_mtype is Panel or Hierarchical.\n Entries are forecasts of lower/upper interval end,\n for var in col index, at nominal coverage in second col index,\n lower/upper depending on third col index, for the row index.\n Upper/lower interval end forecasts are equivalent to\n quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.\n \"\"\"\n implements_interval_adapter = self._has_implementation_of(\"_extract_conf_int\")\n implements_quantiles = self._has_implementation_of(\"_predict_quantiles\")\n\n if not implements_interval_adapter and implements_quantiles:\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n valid_indices = fh.to_absolute(self.cutoff).to_pandas()\n\n get_prediction_arguments = {\"start\": start, \"end\": end}\n\n if hasattr(self, \"random_state\"):\n get_prediction_arguments[\"random_state\"] = self.random_state\n\n if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(\n \"exog\"\n ):\n get_prediction_arguments[\"exog\"] = X\n\n prediction_results = self._fitted_forecaster.get_prediction(\n **get_prediction_arguments\n )\n\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n pred_int = pd.DataFrame(index=valid_indices, columns=columns)\n\n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n\n pred_int[(var_name, c, \"lower\")] = pred_statsmodels.loc[\n valid_indices, \"lower\"\n ]\n pred_int[(var_name, c, \"upper\")] = pred_statsmodels.loc[\n valid_indices, \"upper\"\n ]\n\n return pred_int\n\n def _get_fitted_params(self):\n \"\"\"Get fitted parameters.\n\n Returns\n -------\n fitted_params : dict\n \"\"\"\n fitted_params = {}\n for name in self._get_fitted_param_names():\n if name in [\"aic\", \"aicc\", \"bic\", \"hqic\"]:\n fitted_params[name] = getattr(self._fitted_forecaster, name, None)\n else:\n fitted_params[name] = self._fitted_forecaster.params.get(name)\n return fitted_params\n\n def _get_fitted_param_names(self):\n \"\"\"Get names of fitted parameters.\"\"\"\n return self._fitted_param_names\n\n\ndef _coerce_int_to_range_index(y, X=None):\n 
new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)\n try:\n np.testing.assert_array_equal(y.index, new_index)\n except AssertionError:\n raise ValueError(\n \"Coercion of integer pd.Index to pd.RangeIndex \"\n \"failed. Please provide `y_train` with a \"\n \"pd.RangeIndex.\"\n )\n y.index = new_index\n if X is not None:\n X.index = new_index\n return y, X\n", "path": "sktime/forecasting/base/adapters/_statsmodels.py"}]} | 3,883 | 937 |
gh_patches_debug_15873 | rasdani/github-patches | git_diff | frappe__frappe-13917 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Review: Connected App: Difficult to see how Token Cache get_expires_in could be any more wrong
https://github.com/frappe/frappe/blob/86e512452d77f3e61405fd33ecd1bf881790ae18/frappe/integrations/doctype/token_cache/token_cache.py#L53
PR to follow
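For context, the current `get_expires_in` passes `expires_in` to `timedelta` as days and subtracts in the wrong order, so a still-valid token yields a large negative value. A minimal sketch of the presumably intended computation (seconds-based expiry, positive while the token is still valid):

```python
from datetime import timedelta

import frappe


def get_expires_in(self):
    # `expires_in` is stored in seconds, so pass it as seconds, not days.
    expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)
    # Positive while the token is still valid, negative once it has expired.
    return (expiry_time - frappe.utils.now_datetime()).total_seconds()
```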
</issue>
<code>
[start of frappe/integrations/doctype/token_cache/token_cache.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2019, Frappe Technologies and contributors
3 # For license information, please see license.txt
4
5 from __future__ import unicode_literals
6 from datetime import datetime, timedelta
7
8 import frappe
9 from frappe import _
10 from frappe.utils import cstr, cint
11 from frappe.model.document import Document
12
13 class TokenCache(Document):
14
15 def get_auth_header(self):
16 if self.access_token:
17 headers = {'Authorization': 'Bearer ' + self.get_password('access_token')}
18 return headers
19
20 raise frappe.exceptions.DoesNotExistError
21
22 def update_data(self, data):
23 """
24 Store data returned by authorization flow.
25
26 Params:
27 data - Dict with access_token, refresh_token, expires_in and scope.
28 """
29 token_type = cstr(data.get('token_type', '')).lower()
30 if token_type not in ['bearer', 'mac']:
31 frappe.throw(_('Received an invalid token type.'))
32 # 'Bearer' or 'MAC'
33 token_type = token_type.title() if token_type == 'bearer' else token_type.upper()
34
35 self.token_type = token_type
36 self.access_token = cstr(data.get('access_token', ''))
37 self.refresh_token = cstr(data.get('refresh_token', ''))
38 self.expires_in = cint(data.get('expires_in', 0))
39
40 new_scopes = data.get('scope')
41 if new_scopes:
42 if isinstance(new_scopes, str):
43 new_scopes = new_scopes.split(' ')
44 if isinstance(new_scopes, list):
45 self.scopes = None
46 for scope in new_scopes:
47 self.append('scopes', {'scope': scope})
48
49 self.state = None
50 self.save(ignore_permissions=True)
51 frappe.db.commit()
52 return self
53
54 def get_expires_in(self):
55 expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)
56 return (datetime.now() - expiry_time).total_seconds()
57
58 def is_expired(self):
59 return self.get_expires_in() < 0
60
61 def get_json(self):
62 return {
63 'access_token': self.get_password('access_token', ''),
64 'refresh_token': self.get_password('refresh_token', ''),
65 'expires_in': self.get_expires_in(),
66 'token_type': self.token_type
67 }
68
[end of frappe/integrations/doctype/token_cache/token_cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/frappe/integrations/doctype/token_cache/token_cache.py b/frappe/integrations/doctype/token_cache/token_cache.py
--- a/frappe/integrations/doctype/token_cache/token_cache.py
+++ b/frappe/integrations/doctype/token_cache/token_cache.py
@@ -3,7 +3,7 @@
# For license information, please see license.txt
from __future__ import unicode_literals
-from datetime import datetime, timedelta
+from datetime import timedelta
import frappe
from frappe import _
@@ -52,8 +52,8 @@
return self
def get_expires_in(self):
- expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)
- return (datetime.now() - expiry_time).total_seconds()
+ expiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)
+ return (expiry_time - frappe.utils.now_datetime()).total_seconds()
def is_expired(self):
return self.get_expires_in() < 0
| {"golden_diff": "diff --git a/frappe/integrations/doctype/token_cache/token_cache.py b/frappe/integrations/doctype/token_cache/token_cache.py\n--- a/frappe/integrations/doctype/token_cache/token_cache.py\n+++ b/frappe/integrations/doctype/token_cache/token_cache.py\n@@ -3,7 +3,7 @@\n # For license information, please see license.txt\n \n from __future__ import unicode_literals\n-from datetime import datetime, timedelta\n+from datetime import timedelta\n \n import frappe\n from frappe import _\n@@ -52,8 +52,8 @@\n \t\treturn self\n \n \tdef get_expires_in(self):\n-\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)\n-\t\treturn (datetime.now() - expiry_time).total_seconds()\n+\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(seconds=self.expires_in)\n+\t\treturn (expiry_time - frappe.utils.now_datetime()).total_seconds()\n \n \tdef is_expired(self):\n \t\treturn self.get_expires_in() < 0\n", "issue": "Review: Connected App: Difficult to see how Token Cache get_expires_in could be any more wrong\nhttps://github.com/frappe/frappe/blob/86e512452d77f3e61405fd33ecd1bf881790ae18/frappe/integrations/doctype/token_cache/token_cache.py#L53\r\n\r\nPR to follow\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2019, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nfrom datetime import datetime, timedelta\n\nimport frappe\nfrom frappe import _\nfrom frappe.utils import cstr, cint\nfrom frappe.model.document import Document\n\nclass TokenCache(Document):\n\n\tdef get_auth_header(self):\n\t\tif self.access_token:\n\t\t\theaders = {'Authorization': 'Bearer ' + self.get_password('access_token')}\n\t\t\treturn headers\n\n\t\traise frappe.exceptions.DoesNotExistError\n\n\tdef update_data(self, data):\n\t\t\"\"\"\n\t\tStore data returned by authorization flow.\n\n\t\tParams:\n\t\tdata - Dict with access_token, refresh_token, expires_in and scope.\n\t\t\"\"\"\n\t\ttoken_type = cstr(data.get('token_type', '')).lower()\n\t\tif token_type not in ['bearer', 'mac']:\n\t\t\tfrappe.throw(_('Received an invalid token type.'))\n\t\t# 'Bearer' or 'MAC'\n\t\ttoken_type = token_type.title() if token_type == 'bearer' else token_type.upper()\n\n\t\tself.token_type = token_type\n\t\tself.access_token = cstr(data.get('access_token', ''))\n\t\tself.refresh_token = cstr(data.get('refresh_token', ''))\n\t\tself.expires_in = cint(data.get('expires_in', 0))\n\n\t\tnew_scopes = data.get('scope')\n\t\tif new_scopes:\n\t\t\tif isinstance(new_scopes, str):\n\t\t\t\tnew_scopes = new_scopes.split(' ')\n\t\t\tif isinstance(new_scopes, list):\n\t\t\t\tself.scopes = None\n\t\t\t\tfor scope in new_scopes:\n\t\t\t\t\tself.append('scopes', {'scope': scope})\n\n\t\tself.state = None\n\t\tself.save(ignore_permissions=True)\n\t\tfrappe.db.commit()\n\t\treturn self\n\n\tdef get_expires_in(self):\n\t\texpiry_time = frappe.utils.get_datetime(self.modified) + timedelta(self.expires_in)\n\t\treturn (datetime.now() - expiry_time).total_seconds()\n\n\tdef is_expired(self):\n\t\treturn self.get_expires_in() < 0\n\n\tdef get_json(self):\n\t\treturn {\n\t\t\t'access_token': self.get_password('access_token', ''),\n\t\t\t'refresh_token': self.get_password('refresh_token', ''),\n\t\t\t'expires_in': self.get_expires_in(),\n\t\t\t'token_type': self.token_type\n\t\t}\n", "path": "frappe/integrations/doctype/token_cache/token_cache.py"}]} | 1,285 | 224 |
gh_patches_debug_24555 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-5051 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Data Batch problem in PyG
### 🐛 Describe the bug
Hi. I am a computational physics researcher and have been using PyG very happily.
My PyG code was working a few weeks ago, but now when I run it, it no longer works, even though nothing has changed.
The problem is as follows.
I have many material structures, and in my "custom_dataset" class these are preprocessed and all graph information (node features, edge features, edge index, etc.) is inserted into a "Data" object in PyTorch Geometric.
You can see that each preprocessed sample with index $i$ is printed as a normal "Data" object in PyG.

But when I insert my custom dataset class into the PyG DataLoader and do the following,
```python
sample = next(iter(train_loader)) # batch sample
```
the batch sample is reported as "DataDataBatch", an object name I had not seen before,
and I can't use the "sample.x" or "sample.edge_index" accessors. Instead I need to do this:

I want to use expressions like "sample.x", "sample.edge_index" or "sample.edge_attr" as before.
I expect your kind explanations. Thank you.
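A minimal sketch of the access pattern being requested (here `dataset` and the batch size are placeholders, not the actual data):

```python
from torch_geometric.loader import DataLoader

train_loader = DataLoader(dataset, batch_size=32, shuffle=True)  # `dataset` is a placeholder
sample = next(iter(train_loader))  # one mini-batch (a Batch object)
print(sample.x, sample.edge_index, sample.edge_attr)  # plain attribute access, as before
```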
### Environment
* PyG version: `2.0.5`
* PyTorch version: `1.11.0+cu113`
* OS: `GoogleColab Pro Plus`
* Python version: `Python 3.7.13 in colab`
* CUDA/cuDNN version:
* How you installed PyTorch and PyG (`conda`, `pip`, source):
``` python
# Install required packages.
import os
import torch
os.environ['TORCH'] = torch.__version__
print(torch.__version__)
!pip install -q torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install -q torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html
!pip install -q git+https://github.com/pyg-team/pytorch_geometric.git
!pip install -q pymatgen==2020.11.11
```
* Any other relevant information (*e.g.*, version of `torch-scatter`):
</issue>
<code>
[start of torch_geometric/loader/dataloader.py]
1 from collections.abc import Mapping, Sequence
2 from typing import List, Optional, Union
3
4 import torch.utils.data
5 from torch.utils.data.dataloader import default_collate
6
7 from torch_geometric.data import Batch, Dataset
8 from torch_geometric.data.data import BaseData
9
10
11 class Collater:
12 def __init__(self, follow_batch, exclude_keys):
13 self.follow_batch = follow_batch
14 self.exclude_keys = exclude_keys
15
16 def __call__(self, batch):
17 elem = batch[0]
18 if isinstance(elem, BaseData):
19 return Batch.from_data_list(batch, self.follow_batch,
20 self.exclude_keys)
21 elif isinstance(elem, torch.Tensor):
22 return default_collate(batch)
23 elif isinstance(elem, float):
24 return torch.tensor(batch, dtype=torch.float)
25 elif isinstance(elem, int):
26 return torch.tensor(batch)
27 elif isinstance(elem, str):
28 return batch
29 elif isinstance(elem, Mapping):
30 return {key: self([data[key] for data in batch]) for key in elem}
31 elif isinstance(elem, tuple) and hasattr(elem, '_fields'):
32 return type(elem)(*(self(s) for s in zip(*batch)))
33 elif isinstance(elem, Sequence) and not isinstance(elem, str):
34 return [self(s) for s in zip(*batch)]
35
36 raise TypeError(f'DataLoader found invalid type: {type(elem)}')
37
38 def collate(self, batch): # Deprecated...
39 return self(batch)
40
41
42 class DataLoader(torch.utils.data.DataLoader):
43 r"""A data loader which merges data objects from a
44 :class:`torch_geometric.data.Dataset` to a mini-batch.
45 Data objects can be either of type :class:`~torch_geometric.data.Data` or
46 :class:`~torch_geometric.data.HeteroData`.
47
48 Args:
49 dataset (Dataset): The dataset from which to load the data.
50 batch_size (int, optional): How many samples per batch to load.
51 (default: :obj:`1`)
52 shuffle (bool, optional): If set to :obj:`True`, the data will be
53 reshuffled at every epoch. (default: :obj:`False`)
54 follow_batch (List[str], optional): Creates assignment batch
55 vectors for each key in the list. (default: :obj:`None`)
56 exclude_keys (List[str], optional): Will exclude each key in the
57 list. (default: :obj:`None`)
58 **kwargs (optional): Additional arguments of
59 :class:`torch.utils.data.DataLoader`.
60 """
61 def __init__(
62 self,
63 dataset: Union[Dataset, List[BaseData]],
64 batch_size: int = 1,
65 shuffle: bool = False,
66 follow_batch: Optional[List[str]] = None,
67 exclude_keys: Optional[List[str]] = None,
68 **kwargs,
69 ):
70
71 if 'collate_fn' in kwargs:
72 del kwargs['collate_fn']
73
74 # Save for PyTorch Lightning < 1.6:
75 self.follow_batch = follow_batch
76 self.exclude_keys = exclude_keys
77
78 super().__init__(
79 dataset,
80 batch_size,
81 shuffle,
82 collate_fn=Collater(follow_batch, exclude_keys),
83 **kwargs,
84 )
85
[end of torch_geometric/loader/dataloader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torch_geometric/loader/dataloader.py b/torch_geometric/loader/dataloader.py
--- a/torch_geometric/loader/dataloader.py
+++ b/torch_geometric/loader/dataloader.py
@@ -1,4 +1,5 @@
from collections.abc import Mapping, Sequence
+from inspect import signature
from typing import List, Optional, Union
import torch.utils.data
@@ -39,6 +40,28 @@
return self(batch)
+# PyG 'Data' objects are subclasses of MutableMapping, which is an
+# instance of collections.abc.Mapping. Currently, PyTorch pin_memory
+# for DataLoaders treats the returned batches as Mapping objects and
+# calls `pin_memory` on each element in `Data.__dict__`, which is not
+# desired behavior if 'Data' has a `pin_memory` function. We patch
+# this behavior here by monkeypatching `pin_memory`, but can hopefully patch
+# this in PyTorch in the future:
+__torch_pin_memory = torch.utils.data._utils.pin_memory.pin_memory
+__torch_pin_memory_params = signature(__torch_pin_memory).parameters
+
+
+def pin_memory(data, device=None):
+ if hasattr(data, "pin_memory"):
+ return data.pin_memory()
+ if len(__torch_pin_memory_params) > 1:
+ return __torch_pin_memory(data, device)
+ return __torch_pin_memory(data)
+
+
+torch.utils.data._utils.pin_memory.pin_memory = pin_memory
+
+
class DataLoader(torch.utils.data.DataLoader):
r"""A data loader which merges data objects from a
:class:`torch_geometric.data.Dataset` to a mini-batch.
| {"golden_diff": "diff --git a/torch_geometric/loader/dataloader.py b/torch_geometric/loader/dataloader.py\n--- a/torch_geometric/loader/dataloader.py\n+++ b/torch_geometric/loader/dataloader.py\n@@ -1,4 +1,5 @@\n from collections.abc import Mapping, Sequence\n+from inspect import signature\n from typing import List, Optional, Union\n \n import torch.utils.data\n@@ -39,6 +40,28 @@\n return self(batch)\n \n \n+# PyG 'Data' objects are subclasses of MutableMapping, which is an\n+# instance of collections.abc.Mapping. Currently, PyTorch pin_memory\n+# for DataLoaders treats the returned batches as Mapping objects and\n+# calls `pin_memory` on each element in `Data.__dict__`, which is not\n+# desired behavior if 'Data' has a `pin_memory` function. We patch\n+# this behavior here by monkeypatching `pin_memory`, but can hopefully patch\n+# this in PyTorch in the future:\n+__torch_pin_memory = torch.utils.data._utils.pin_memory.pin_memory\n+__torch_pin_memory_params = signature(__torch_pin_memory).parameters\n+\n+\n+def pin_memory(data, device=None):\n+ if hasattr(data, \"pin_memory\"):\n+ return data.pin_memory()\n+ if len(__torch_pin_memory_params) > 1:\n+ return __torch_pin_memory(data, device)\n+ return __torch_pin_memory(data)\n+\n+\n+torch.utils.data._utils.pin_memory.pin_memory = pin_memory\n+\n+\n class DataLoader(torch.utils.data.DataLoader):\n r\"\"\"A data loader which merges data objects from a\n :class:`torch_geometric.data.Dataset` to a mini-batch.\n", "issue": "Data Batch problem in PyG\n### \ud83d\udc1b Describe the bug\n\nHi. I am a computational physics researcher and was using PyG very well.\r\nmy pyg code was working well a few weeks ago, but now that I run my code, it is not working anymore without any changes.\r\n\r\nthe problem is like below.\r\nI have many material structures and in my \"custom_dataset\" class, these are preprocessed and all graph informations (node features, edge features, edge index etc) are inserted into \"Data\" object in PyTorch geometric.\r\nYou can see that each preprocessed sample with index $i$ was printed normal \"Data\" object in pyg\r\n\r\n\r\n\r\nBut When I insert my custom dataset class into pyg DataLoader and I did like below,\r\n\r\n``` Python\r\nsample = next(iter(train_loader)) # batch sample\r\n```\r\n\r\nbatch sample is denoted by \"DataDataBatch\". I didn't see this kind of object name.\r\nand i can't use \"sample.x' or \"sample.edge_index\" command. Instead I need to do like this\r\n\r\n\r\n\r\nI want to use expressions like \"sample.x\", \"sample.edge_index\" or \"sample.edge_attr\" as like before. \r\nI expect your kind explanations. 
Thank you.\r\n\n\n### Environment\n\n* PyG version: `2.0.5`\r\n* PyTorch version: `1.11.0+cu113`\r\n* OS: `GoogleColab Pro Plus`\r\n* Python version: `Python 3.7.13 in colab`\r\n* CUDA/cuDNN version:\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source): \r\n``` python\r\n# Install required packages.\r\nimport os\r\nimport torch\r\nos.environ['TORCH'] = torch.__version__\r\nprint(torch.__version__)\r\n!pip install -q torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.html\r\n!pip install -q torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.html\r\n!pip install -q git+https://github.com/pyg-team/pytorch_geometric.git\r\n!pip install -q pymatgen==2020.11.11 \r\n```\r\n* Any other relevant information (*e.g.*, version of `torch-scatter`):\r\n\n", "before_files": [{"content": "from collections.abc import Mapping, Sequence\nfrom typing import List, Optional, Union\n\nimport torch.utils.data\nfrom torch.utils.data.dataloader import default_collate\n\nfrom torch_geometric.data import Batch, Dataset\nfrom torch_geometric.data.data import BaseData\n\n\nclass Collater:\n def __init__(self, follow_batch, exclude_keys):\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n def __call__(self, batch):\n elem = batch[0]\n if isinstance(elem, BaseData):\n return Batch.from_data_list(batch, self.follow_batch,\n self.exclude_keys)\n elif isinstance(elem, torch.Tensor):\n return default_collate(batch)\n elif isinstance(elem, float):\n return torch.tensor(batch, dtype=torch.float)\n elif isinstance(elem, int):\n return torch.tensor(batch)\n elif isinstance(elem, str):\n return batch\n elif isinstance(elem, Mapping):\n return {key: self([data[key] for data in batch]) for key in elem}\n elif isinstance(elem, tuple) and hasattr(elem, '_fields'):\n return type(elem)(*(self(s) for s in zip(*batch)))\n elif isinstance(elem, Sequence) and not isinstance(elem, str):\n return [self(s) for s in zip(*batch)]\n\n raise TypeError(f'DataLoader found invalid type: {type(elem)}')\n\n def collate(self, batch): # Deprecated...\n return self(batch)\n\n\nclass DataLoader(torch.utils.data.DataLoader):\n r\"\"\"A data loader which merges data objects from a\n :class:`torch_geometric.data.Dataset` to a mini-batch.\n Data objects can be either of type :class:`~torch_geometric.data.Data` or\n :class:`~torch_geometric.data.HeteroData`.\n\n Args:\n dataset (Dataset): The dataset from which to load the data.\n batch_size (int, optional): How many samples per batch to load.\n (default: :obj:`1`)\n shuffle (bool, optional): If set to :obj:`True`, the data will be\n reshuffled at every epoch. (default: :obj:`False`)\n follow_batch (List[str], optional): Creates assignment batch\n vectors for each key in the list. (default: :obj:`None`)\n exclude_keys (List[str], optional): Will exclude each key in the\n list. (default: :obj:`None`)\n **kwargs (optional): Additional arguments of\n :class:`torch.utils.data.DataLoader`.\n \"\"\"\n def __init__(\n self,\n dataset: Union[Dataset, List[BaseData]],\n batch_size: int = 1,\n shuffle: bool = False,\n follow_batch: Optional[List[str]] = None,\n exclude_keys: Optional[List[str]] = None,\n **kwargs,\n ):\n\n if 'collate_fn' in kwargs:\n del kwargs['collate_fn']\n\n # Save for PyTorch Lightning < 1.6:\n self.follow_batch = follow_batch\n self.exclude_keys = exclude_keys\n\n super().__init__(\n dataset,\n batch_size,\n shuffle,\n collate_fn=Collater(follow_batch, exclude_keys),\n **kwargs,\n )\n", "path": "torch_geometric/loader/dataloader.py"}]} | 2,005 | 362 |
gh_patches_debug_38136 | rasdani/github-patches | git_diff | pytorch__ignite-280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace asserts in the code by if condition then raise
It would be better to replace the code at
https://github.com/pytorch/ignite/blob/0f1905e87b95779cf6544a5fe29b46519ad9d4e0/ignite/metrics/epoch_metric.py#L33-L34
with
```python
if y_pred.ndimension() < 1 or y_pred.ndimension() > 2:
raise TypeError("Predictions should be of shape (batch_size, n_classes)"
# etc
```
The `assert` usage to change is in `EpochMetric`, `EarlyStopping` and `Loss`.
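As an illustration, the same pattern applied to the `EarlyStopping` constructor could look roughly like this (the exact messages are illustrative):

```python
def __init__(self, patience, score_function, trainer):
    if not callable(score_function):
        raise TypeError("Argument score_function should be a function")
    if patience < 1:
        raise ValueError("Argument patience should be a positive integer")
    if not isinstance(trainer, Engine):
        raise TypeError("Argument trainer should be an instance of Engine")
    # ... rest of the constructor unchanged
```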
</issue>
<code>
[start of ignite/metrics/epoch_metric.py]
1 import torch
2
3 from ignite.metrics.metric import Metric
4
5
6 class EpochMetric(Metric):
7 """Class for metrics that should be computed on the entire output history of a model.
8 Model's output and targets are restricted to be of shape `(batch_size, n_classes)`. Output
9 datatype should be `float32`. Target datatype should be `long`.
10
11 - `update` must receive output of the form `(y_pred, y)`.
12
13 If target shape is `(batch_size, n_classes)` and `n_classes > 1` than it should be binary: e.g. `[[0, 1, 0, 1], ]`
14
15 Args:
16 compute_fn (callable): a callable with the signature (`torch.tensor`, `torch.tensor`) takes as the input
17 `predictions` and `targets` and returns a scalar.
18
19 """
20
21 def __init__(self, compute_fn, output_transform=lambda x: x):
22 assert callable(compute_fn), "Argument compute_fn should be callable"
23 super(EpochMetric, self).__init__(output_transform=output_transform)
24 self.compute_fn = compute_fn
25
26 def reset(self):
27 self._predictions = torch.tensor([], dtype=torch.float32)
28 self._targets = torch.tensor([], dtype=torch.long)
29
30 def update(self, output):
31 y_pred, y = output
32
33 assert 1 <= y_pred.ndimension() <= 2, "Predictions should be of shape (batch_size, n_classes)"
34 assert 1 <= y.ndimension() <= 2, "Targets should be of shape (batch_size, n_classes)"
35
36 if y.ndimension() == 2:
37 assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'
38
39 if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:
40 y_pred = y_pred.squeeze(dim=-1)
41
42 if y.ndimension() == 2 and y.shape[1] == 1:
43 y = y.squeeze(dim=-1)
44
45 y_pred = y_pred.type_as(self._predictions)
46 y = y.type_as(self._targets)
47
48 self._predictions = torch.cat([self._predictions, y_pred], dim=0)
49 self._targets = torch.cat([self._targets, y], dim=0)
50
51 # Check once the signature and execution of compute_fn
52 if self._predictions.shape == y_pred.shape:
53 try:
54 self.compute_fn(self._predictions, self._targets)
55 except Exception as e:
56 raise RuntimeError("Problem with `compute_fn`:\n {}".format(e))
57
58 def compute(self):
59 return self.compute_fn(self._predictions, self._targets)
60
[end of ignite/metrics/epoch_metric.py]
[start of ignite/handlers/early_stopping.py]
1 import logging
2
3 from ignite.engine import Engine
4
5
6 class EarlyStopping(object):
7 """EarlyStopping handler can be used to stop the training if no improvement after a given number of events
8
9 Args:
10 patience (int):
11 Number of events to wait if no improvement and then stop the training
12 score_function (Callable):
13 It should be a function taking a single argument, an `ignite.engine.Engine` object,
14 and return a score `float`. An improvement is considered if the score is higher.
15 trainer (Engine):
16 trainer engine to stop the run if no improvement
17
18 Examples:
19
20 .. code-block:: python
21
22 from ignite.engine import Engine, Events
23 from ignite.handlers import EarlyStopping
24
25 def score_function(engine):
26 val_loss = engine.state.metrics['nll']
27 return -val_loss
28
29 handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
30 # Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset)
31 evaluator.add_event_handler(Events.COMPLETED, handler)
32
33 """
34 def __init__(self, patience, score_function, trainer):
35 assert callable(score_function), "Argument score_function should be a function"
36 assert patience > 0, "Argument patience should be positive"
37 assert isinstance(trainer, Engine), "Argument trainer should be an instance of Engine"
38 self.score_function = score_function
39 self.patience = patience
40 self.trainer = trainer
41 self.counter = 0
42 self.best_score = None
43 self._logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
44 self._logger.addHandler(logging.NullHandler())
45
46 def __call__(self, engine):
47 score = self.score_function(engine)
48
49 if self.best_score is None:
50 self.best_score = score
51 elif score < self.best_score:
52 self.counter += 1
53 self._logger.debug("EarlyStopping: %i / %i" % (self.counter, self.patience))
54 if self.counter >= self.patience:
55 self._logger.info("EarlyStopping: Stop training")
56 self.trainer.terminate()
57 else:
58 self.best_score = score
59 self.counter = 0
60
[end of ignite/handlers/early_stopping.py]
[start of ignite/metrics/loss.py]
1 from __future__ import division
2
3 from ignite.exceptions import NotComputableError
4 from ignite.metrics.metric import Metric
5
6
7 class Loss(Metric):
8 """
9 Calculates the average loss according to the passed loss_fn.
10
11 Args:
12 loss_fn (callable): a callable taking a prediction tensor, a target
13 tensor, optionally other arguments, and returns the average loss
14 over all observations in the batch.
15 output_transform (callable): a callable that is used to transform the
16 :class:`ignite.engine.Engine`'s `process_function`'s output into the
17 form expected by the metric.
18 This can be useful if, for example, you have a multi-output model and
19 you want to compute the metric with respect to one of the outputs.
20 The output is is expected to be a tuple (prediction, target) or
21 (prediction, target, kwargs) where kwargs is a dictionary of extra
22 keywords arguments.
23
24 """
25
26 def __init__(self, loss_fn, output_transform=lambda x: x):
27 super(Loss, self).__init__(output_transform)
28 self._loss_fn = loss_fn
29
30 def reset(self):
31 self._sum = 0
32 self._num_examples = 0
33
34 def update(self, output):
35 if len(output) == 2:
36 y_pred, y = output
37 kwargs = {}
38 else:
39 y_pred, y, kwargs = output
40 average_loss = self._loss_fn(y_pred, y, **kwargs)
41 assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
42 self._sum += average_loss.item() * y.shape[0]
43 self._num_examples += y.shape[0]
44
45 def compute(self):
46 if self._num_examples == 0:
47 raise NotComputableError(
48 'Loss must have at least one example before it can be computed')
49 return self._sum / self._num_examples
50
[end of ignite/metrics/loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/handlers/early_stopping.py b/ignite/handlers/early_stopping.py
--- a/ignite/handlers/early_stopping.py
+++ b/ignite/handlers/early_stopping.py
@@ -32,9 +32,16 @@
"""
def __init__(self, patience, score_function, trainer):
- assert callable(score_function), "Argument score_function should be a function"
- assert patience > 0, "Argument patience should be positive"
- assert isinstance(trainer, Engine), "Argument trainer should be an instance of Engine"
+
+ if not callable(score_function):
+ raise TypeError("Argument score_function should be a function")
+
+ if patience < 1:
+ raise ValueError("Argument patience should be positive integer")
+
+ if not isinstance(trainer, Engine):
+ raise TypeError("Argument trainer should be an instance of Engine")
+
self.score_function = score_function
self.patience = patience
self.trainer = trainer
diff --git a/ignite/metrics/epoch_metric.py b/ignite/metrics/epoch_metric.py
--- a/ignite/metrics/epoch_metric.py
+++ b/ignite/metrics/epoch_metric.py
@@ -19,7 +19,10 @@
"""
def __init__(self, compute_fn, output_transform=lambda x: x):
- assert callable(compute_fn), "Argument compute_fn should be callable"
+
+ if not callable(compute_fn):
+ raise TypeError("Argument compute_fn should be callable")
+
super(EpochMetric, self).__init__(output_transform=output_transform)
self.compute_fn = compute_fn
@@ -30,11 +33,15 @@
def update(self, output):
y_pred, y = output
- assert 1 <= y_pred.ndimension() <= 2, "Predictions should be of shape (batch_size, n_classes)"
- assert 1 <= y.ndimension() <= 2, "Targets should be of shape (batch_size, n_classes)"
+ if y_pred.ndimension() not in (1, 2):
+ raise ValueError("Predictions should be of shape (batch_size, n_classes) or (batch_size, )")
+
+ if y.ndimension() not in (1, 2):
+ raise ValueError("Targets should be of shape (batch_size, n_classes) or (batch_size, )")
if y.ndimension() == 2:
- assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'
+ if not torch.equal(y ** 2, y):
+ raise ValueError('Targets should be binary (0 or 1)')
if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:
y_pred = y_pred.squeeze(dim=-1)
diff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py
--- a/ignite/metrics/loss.py
+++ b/ignite/metrics/loss.py
@@ -38,7 +38,10 @@
else:
y_pred, y, kwargs = output
average_loss = self._loss_fn(y_pred, y, **kwargs)
- assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'
+
+ if len(average_loss.shape) != 0:
+ raise ValueError('loss_fn did not return the average loss')
+
self._sum += average_loss.item() * y.shape[0]
self._num_examples += y.shape[0]
| {"golden_diff": "diff --git a/ignite/handlers/early_stopping.py b/ignite/handlers/early_stopping.py\n--- a/ignite/handlers/early_stopping.py\n+++ b/ignite/handlers/early_stopping.py\n@@ -32,9 +32,16 @@\n \n \"\"\"\n def __init__(self, patience, score_function, trainer):\n- assert callable(score_function), \"Argument score_function should be a function\"\n- assert patience > 0, \"Argument patience should be positive\"\n- assert isinstance(trainer, Engine), \"Argument trainer should be an instance of Engine\"\n+\n+ if not callable(score_function):\n+ raise TypeError(\"Argument score_function should be a function\")\n+\n+ if patience < 1:\n+ raise ValueError(\"Argument patience should be positive integer\")\n+\n+ if not isinstance(trainer, Engine):\n+ raise TypeError(\"Argument trainer should be an instance of Engine\")\n+\n self.score_function = score_function\n self.patience = patience\n self.trainer = trainer\ndiff --git a/ignite/metrics/epoch_metric.py b/ignite/metrics/epoch_metric.py\n--- a/ignite/metrics/epoch_metric.py\n+++ b/ignite/metrics/epoch_metric.py\n@@ -19,7 +19,10 @@\n \"\"\"\n \n def __init__(self, compute_fn, output_transform=lambda x: x):\n- assert callable(compute_fn), \"Argument compute_fn should be callable\"\n+\n+ if not callable(compute_fn):\n+ raise TypeError(\"Argument compute_fn should be callable\")\n+\n super(EpochMetric, self).__init__(output_transform=output_transform)\n self.compute_fn = compute_fn\n \n@@ -30,11 +33,15 @@\n def update(self, output):\n y_pred, y = output\n \n- assert 1 <= y_pred.ndimension() <= 2, \"Predictions should be of shape (batch_size, n_classes)\"\n- assert 1 <= y.ndimension() <= 2, \"Targets should be of shape (batch_size, n_classes)\"\n+ if y_pred.ndimension() not in (1, 2):\n+ raise ValueError(\"Predictions should be of shape (batch_size, n_classes) or (batch_size, )\")\n+\n+ if y.ndimension() not in (1, 2):\n+ raise ValueError(\"Targets should be of shape (batch_size, n_classes) or (batch_size, )\")\n \n if y.ndimension() == 2:\n- assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'\n+ if not torch.equal(y ** 2, y):\n+ raise ValueError('Targets should be binary (0 or 1)')\n \n if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:\n y_pred = y_pred.squeeze(dim=-1)\ndiff --git a/ignite/metrics/loss.py b/ignite/metrics/loss.py\n--- a/ignite/metrics/loss.py\n+++ b/ignite/metrics/loss.py\n@@ -38,7 +38,10 @@\n else:\n y_pred, y, kwargs = output\n average_loss = self._loss_fn(y_pred, y, **kwargs)\n- assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n+\n+ if len(average_loss.shape) != 0:\n+ raise ValueError('loss_fn did not return the average loss')\n+\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n", "issue": "Replace asserts in the code by if condition then raise \nIt would be better to replace the code as \r\n\r\nhttps://github.com/pytorch/ignite/blob/0f1905e87b95779cf6544a5fe29b46519ad9d4e0/ignite/metrics/epoch_metric.py#L33-L34\r\n\r\nby \r\n```python\r\nif y_pred.ndimension() < 1 or y_pred.ndimension() > 2:\r\n raise TypeError(\"Predictions should be of shape (batch_size, n_classes)\"\r\n# etc\r\n```\r\n\r\nUsage of assert to change in `EpochMetric`, `EarlyStopping` and `Loss`\r\n\n", "before_files": [{"content": "import torch\n\nfrom ignite.metrics.metric import Metric\n\n\nclass EpochMetric(Metric):\n \"\"\"Class for metrics that should be computed on the entire output history of a model.\n Model's output and targets are restricted to be of shape 
`(batch_size, n_classes)`. Output\n datatype should be `float32`. Target datatype should be `long`.\n\n - `update` must receive output of the form `(y_pred, y)`.\n\n If target shape is `(batch_size, n_classes)` and `n_classes > 1` than it should be binary: e.g. `[[0, 1, 0, 1], ]`\n\n Args:\n compute_fn (callable): a callable with the signature (`torch.tensor`, `torch.tensor`) takes as the input\n `predictions` and `targets` and returns a scalar.\n\n \"\"\"\n\n def __init__(self, compute_fn, output_transform=lambda x: x):\n assert callable(compute_fn), \"Argument compute_fn should be callable\"\n super(EpochMetric, self).__init__(output_transform=output_transform)\n self.compute_fn = compute_fn\n\n def reset(self):\n self._predictions = torch.tensor([], dtype=torch.float32)\n self._targets = torch.tensor([], dtype=torch.long)\n\n def update(self, output):\n y_pred, y = output\n\n assert 1 <= y_pred.ndimension() <= 2, \"Predictions should be of shape (batch_size, n_classes)\"\n assert 1 <= y.ndimension() <= 2, \"Targets should be of shape (batch_size, n_classes)\"\n\n if y.ndimension() == 2:\n assert torch.equal(y ** 2, y), 'Targets should be binary (0 or 1)'\n\n if y_pred.ndimension() == 2 and y_pred.shape[1] == 1:\n y_pred = y_pred.squeeze(dim=-1)\n\n if y.ndimension() == 2 and y.shape[1] == 1:\n y = y.squeeze(dim=-1)\n\n y_pred = y_pred.type_as(self._predictions)\n y = y.type_as(self._targets)\n\n self._predictions = torch.cat([self._predictions, y_pred], dim=0)\n self._targets = torch.cat([self._targets, y], dim=0)\n\n # Check once the signature and execution of compute_fn\n if self._predictions.shape == y_pred.shape:\n try:\n self.compute_fn(self._predictions, self._targets)\n except Exception as e:\n raise RuntimeError(\"Problem with `compute_fn`:\\n {}\".format(e))\n\n def compute(self):\n return self.compute_fn(self._predictions, self._targets)\n", "path": "ignite/metrics/epoch_metric.py"}, {"content": "import logging\n\nfrom ignite.engine import Engine\n\n\nclass EarlyStopping(object):\n \"\"\"EarlyStopping handler can be used to stop the training if no improvement after a given number of events\n\n Args:\n patience (int):\n Number of events to wait if no improvement and then stop the training\n score_function (Callable):\n It should be a function taking a single argument, an `ignite.engine.Engine` object,\n and return a score `float`. An improvement is considered if the score is higher.\n trainer (Engine):\n trainer engine to stop the run if no improvement\n\n Examples:\n\n .. 
code-block:: python\n\n from ignite.engine import Engine, Events\n from ignite.handlers import EarlyStopping\n\n def score_function(engine):\n val_loss = engine.state.metrics['nll']\n return -val_loss\n\n handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)\n # Note: the handler is attached to an *Evaluator* (runs one epoch on validation dataset)\n evaluator.add_event_handler(Events.COMPLETED, handler)\n\n \"\"\"\n def __init__(self, patience, score_function, trainer):\n assert callable(score_function), \"Argument score_function should be a function\"\n assert patience > 0, \"Argument patience should be positive\"\n assert isinstance(trainer, Engine), \"Argument trainer should be an instance of Engine\"\n self.score_function = score_function\n self.patience = patience\n self.trainer = trainer\n self.counter = 0\n self.best_score = None\n self._logger = logging.getLogger(__name__ + \".\" + self.__class__.__name__)\n self._logger.addHandler(logging.NullHandler())\n\n def __call__(self, engine):\n score = self.score_function(engine)\n\n if self.best_score is None:\n self.best_score = score\n elif score < self.best_score:\n self.counter += 1\n self._logger.debug(\"EarlyStopping: %i / %i\" % (self.counter, self.patience))\n if self.counter >= self.patience:\n self._logger.info(\"EarlyStopping: Stop training\")\n self.trainer.terminate()\n else:\n self.best_score = score\n self.counter = 0\n", "path": "ignite/handlers/early_stopping.py"}, {"content": "from __future__ import division\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric\n\n\nclass Loss(Metric):\n \"\"\"\n Calculates the average loss according to the passed loss_fn.\n\n Args:\n loss_fn (callable): a callable taking a prediction tensor, a target\n tensor, optionally other arguments, and returns the average loss\n over all observations in the batch.\n output_transform (callable): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric.\n This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n The output is is expected to be a tuple (prediction, target) or\n (prediction, target, kwargs) where kwargs is a dictionary of extra\n keywords arguments.\n\n \"\"\"\n\n def __init__(self, loss_fn, output_transform=lambda x: x):\n super(Loss, self).__init__(output_transform)\n self._loss_fn = loss_fn\n\n def reset(self):\n self._sum = 0\n self._num_examples = 0\n\n def update(self, output):\n if len(output) == 2:\n y_pred, y = output\n kwargs = {}\n else:\n y_pred, y, kwargs = output\n average_loss = self._loss_fn(y_pred, y, **kwargs)\n assert len(average_loss.shape) == 0, '`loss_fn` did not return the average loss'\n self._sum += average_loss.item() * y.shape[0]\n self._num_examples += y.shape[0]\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\n 'Loss must have at least one example before it can be computed')\n return self._sum / self._num_examples\n", "path": "ignite/metrics/loss.py"}]} | 2,523 | 785 |
gh_patches_debug_3074 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot make other users admin on the website
**Describe the bug**
For the moment, there is no way to promote a user to be an admin. One has to do it in the "./bw-dev shell".
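For reference, the shell workaround mentioned above looks roughly like this (a sketch assuming standard Django auth groups; the `admin` group name and the `localname` lookup are assumptions, not confirmed BookWyrm specifics):

```python
# Rough sketch of promoting a user from ./bw-dev shell (Django shell).
# The group name "admin" and the localname lookup are assumptions.
from django.contrib.auth.models import Group
from bookwyrm import models

user = models.User.objects.get(localname="some-user")  # placeholder username
user.groups.add(Group.objects.get(name="admin"))
user.save()
```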
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Admin' and then the page of the user you want to promote
2. Promote the user and save
3. The "promoted user" logs in
4. Nope, not promoted
**Expected behavior**
The logged in promoted user should see the admin panel.
</issue>
<code>
[start of bookwyrm/views/user_admin.py]
1 """ manage user """
2 from django.contrib.auth.decorators import login_required, permission_required
3 from django.core.paginator import Paginator
4 from django.shortcuts import get_object_or_404
5 from django.template.response import TemplateResponse
6 from django.utils.decorators import method_decorator
7 from django.views import View
8
9 from bookwyrm import forms, models
10 from bookwyrm.settings import PAGE_LENGTH
11
12
13 # pylint: disable= no-self-use
14 @method_decorator(login_required, name="dispatch")
15 @method_decorator(
16 permission_required("bookwyrm.moderate_users", raise_exception=True),
17 name="dispatch",
18 )
19 class UserAdminList(View):
20 """admin view of users on this server"""
21
22 def get(self, request):
23 """list of users"""
24 filters = {}
25 server = request.GET.get("server")
26 if server:
27 server = models.FederatedServer.objects.filter(server_name=server).first()
28 filters["federated_server"] = server
29 filters["federated_server__isnull"] = False
30 username = request.GET.get("username")
31 if username:
32 filters["username__icontains"] = username
33 scope = request.GET.get("scope")
34 if scope:
35 filters["local"] = scope == "local"
36
37 users = models.User.objects.filter(**filters)
38
39 sort = request.GET.get("sort", "-created_date")
40 sort_fields = [
41 "created_date",
42 "last_active_date",
43 "username",
44 "federated_server__server_name",
45 "is_active",
46 ]
47 if sort in sort_fields + ["-{:s}".format(f) for f in sort_fields]:
48 users = users.order_by(sort)
49
50 paginated = Paginator(users, PAGE_LENGTH)
51 data = {
52 "users": paginated.get_page(request.GET.get("page")),
53 "sort": sort,
54 "server": server,
55 }
56 return TemplateResponse(request, "user_admin/user_admin.html", data)
57
58
59 @method_decorator(login_required, name="dispatch")
60 @method_decorator(
61 permission_required("bookwyrm.moderate_users", raise_exception=True),
62 name="dispatch",
63 )
64 class UserAdmin(View):
65 """moderate an individual user"""
66
67 def get(self, request, user):
68 """user view"""
69 user = get_object_or_404(models.User, id=user)
70 data = {"user": user, "group_form": forms.UserGroupForm()}
71 return TemplateResponse(request, "user_admin/user.html", data)
72
73 def post(self, request, user):
74 """update user group"""
75 user = get_object_or_404(models.User, id=user)
76 form = forms.UserGroupForm(request.POST, instance=user)
77 if form.is_valid():
78 form.save()
79 data = {"user": user, "group_form": form}
80 return TemplateResponse(request, "user_admin/user.html", data)
81
[end of bookwyrm/views/user_admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/user_admin.py b/bookwyrm/views/user_admin.py
--- a/bookwyrm/views/user_admin.py
+++ b/bookwyrm/views/user_admin.py
@@ -13,7 +13,7 @@
# pylint: disable= no-self-use
@method_decorator(login_required, name="dispatch")
@method_decorator(
- permission_required("bookwyrm.moderate_users", raise_exception=True),
+ permission_required("bookwyrm.moderate_user", raise_exception=True),
name="dispatch",
)
class UserAdminList(View):
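
A note on this fix: the one-line change suggests the permission codename actually registered for the app is `moderate_user` (singular), while the views checked for `bookwyrm.moderate_users`, so promoted users were denied even though their group granted the right permission. A hedged way to confirm which codenames exist on a given instance, from the Django shell:

```python
# List registered permission codenames containing "moderate" to confirm the
# singular/plural spelling on this installation.
from django.contrib.auth.models import Permission

for perm in Permission.objects.filter(codename__icontains="moderate"):
    print(perm.content_type.app_label, perm.codename)
```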
| {"golden_diff": "diff --git a/bookwyrm/views/user_admin.py b/bookwyrm/views/user_admin.py\n--- a/bookwyrm/views/user_admin.py\n+++ b/bookwyrm/views/user_admin.py\n@@ -13,7 +13,7 @@\n # pylint: disable= no-self-use\n @method_decorator(login_required, name=\"dispatch\")\n @method_decorator(\n- permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n+ permission_required(\"bookwyrm.moderate_user\", raise_exception=True),\n name=\"dispatch\",\n )\n class UserAdminList(View):\n", "issue": "Cannot make other users admin on the website\n**Describe the bug**\r\nFor the moment, there is no way to promote an user to be an admin. One has to do it in the \"./bw-dev shell\"\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to 'Admin' and then the page of the user you want to promote\r\n2. Promote the user and save\r\n3. The \"promoted user\" logins in\r\n4. Nope, not promoted\r\n\r\n**Expected behavior**\r\nThe logged in promoted user should see the admin panel.\r\n\n", "before_files": [{"content": "\"\"\" manage user \"\"\"\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.core.paginator import Paginator\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdminList(View):\n \"\"\"admin view of users on this server\"\"\"\n\n def get(self, request):\n \"\"\"list of users\"\"\"\n filters = {}\n server = request.GET.get(\"server\")\n if server:\n server = models.FederatedServer.objects.filter(server_name=server).first()\n filters[\"federated_server\"] = server\n filters[\"federated_server__isnull\"] = False\n username = request.GET.get(\"username\")\n if username:\n filters[\"username__icontains\"] = username\n scope = request.GET.get(\"scope\")\n if scope:\n filters[\"local\"] = scope == \"local\"\n\n users = models.User.objects.filter(**filters)\n\n sort = request.GET.get(\"sort\", \"-created_date\")\n sort_fields = [\n \"created_date\",\n \"last_active_date\",\n \"username\",\n \"federated_server__server_name\",\n \"is_active\",\n ]\n if sort in sort_fields + [\"-{:s}\".format(f) for f in sort_fields]:\n users = users.order_by(sort)\n\n paginated = Paginator(users, PAGE_LENGTH)\n data = {\n \"users\": paginated.get_page(request.GET.get(\"page\")),\n \"sort\": sort,\n \"server\": server,\n }\n return TemplateResponse(request, \"user_admin/user_admin.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(\n permission_required(\"bookwyrm.moderate_users\", raise_exception=True),\n name=\"dispatch\",\n)\nclass UserAdmin(View):\n \"\"\"moderate an individual user\"\"\"\n\n def get(self, request, user):\n \"\"\"user view\"\"\"\n user = get_object_or_404(models.User, id=user)\n data = {\"user\": user, \"group_form\": forms.UserGroupForm()}\n return TemplateResponse(request, \"user_admin/user.html\", data)\n\n def post(self, request, user):\n \"\"\"update user group\"\"\"\n user = get_object_or_404(models.User, id=user)\n form = forms.UserGroupForm(request.POST, instance=user)\n if form.is_valid():\n form.save()\n data = {\"user\": user, \"group_form\": form}\n return 
TemplateResponse(request, \"user_admin/user.html\", data)\n", "path": "bookwyrm/views/user_admin.py"}]} | 1,414 | 121 |
gh_patches_debug_557 | rasdani/github-patches | git_diff | pex-tool__pex-743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 1.6.8
On the docket:
+ [x] Fixup pex re-exec during bootstrap. #741
+ [x] Pex should not re-exec when the current interpreter satisfies constraints #709
+ [x] Pex should not lose PEX_PYTHON or PEX_PYTHON_PATH when re-exec-ing #710
+ [x] Fix resolution of `setup.py` project extras. #739
Deferred:
+ [ ] Remove PEX_HTTP_RETRIES and push into a flag for the pex tool #94
+ [ ] Sdist resolution is not always reproducible #735
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.6.7'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.6.7'
+__version__ = '1.6.8'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.6.7'\n+__version__ = '1.6.8'\n", "issue": "Release 1.6.8\nOn the docket:\r\n\r\n+ [x] Fixup pex re-exec during bootstrap. #741 \r\n + [x] Pex should not re-exec when the current interpreter satifies constraints #709\r\n + [x] Pex should not lose PEX_PYTHON or PEX_PYTHON_PATH when re-exec-ing #710\r\n+ [x] Fix resolution of `setup.py` project extras. #739\r\n\r\nDeferred:\r\n\r\n+ [ ] Remove PEX_HTTP_RETRIES and push into a flag for the pex tool #94\r\n+ [ ] Sdist resolution is not always reproducible #735\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.7'\n", "path": "pex/version.py"}]} | 724 | 94 |
gh_patches_debug_13557 | rasdani/github-patches | git_diff | mesonbuild__meson-3715 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows module fails on multiple resource files with same name
I have a project with multiple subfolders that contain resource scripts named 'rsrc.rc'. This worked with at least 0.44.0 but fails with current master:
> meson.build:7:0: ERROR: Tried to create target "Windows resource for file 'rsrc.rc'", but a target of that name already exists.
Here is a small testcase: [rsrcbug.zip](https://github.com/mesonbuild/meson/files/2007861/rsrcbug.zip)
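The collision seems to come from how `windows.compile_resources` names its generated targets (see `add_target` in the module source below): the name is built only from the source as passed in, so two `rsrc.rc` files from different subdirectories produce the same target name. A simplified illustration of that naming scheme (plain Python, not the actual Meson API):

```python
# Simplified sketch of the target-naming logic; both subdirectories end up
# asking for a target called "file 'rsrc.rc'", which triggers the error above.
def target_name(src):
    name = 'file {!r}'.format(str(src))
    return name.replace('/', '_').replace('\\', '_')

print(target_name('rsrc.rc'))  # from subfolder A
print(target_name('rsrc.rc'))  # from subfolder B -> identical name
```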
</issue>
<code>
[start of mesonbuild/modules/windows.py]
1 # Copyright 2015 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 from .. import mlog
18 from .. import mesonlib, dependencies, build
19 from ..mesonlib import MesonException, extract_as_list
20 from . import get_include_args
21 from . import ModuleReturnValue
22 from . import ExtensionModule
23 from ..interpreterbase import permittedKwargs, FeatureNewKwargs
24
25 class WindowsModule(ExtensionModule):
26
27 def detect_compiler(self, compilers):
28 for l in ('c', 'cpp'):
29 if l in compilers:
30 return compilers[l]
31 raise MesonException('Resource compilation requires a C or C++ compiler.')
32
33 @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files'])
34 @permittedKwargs({'args', 'include_directories', 'depend_files'})
35 def compile_resources(self, state, args, kwargs):
36 comp = self.detect_compiler(state.compilers)
37
38 extra_args = mesonlib.stringlistify(kwargs.get('args', []))
39 wrc_deps = extract_as_list(kwargs, 'depend_files', pop = True)
40 inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)
41 for incd in inc_dirs:
42 if not isinstance(incd.held_object, (str, build.IncludeDirs)):
43 raise MesonException('Resource include dirs should be include_directories().')
44 extra_args += get_include_args(inc_dirs)
45
46 if comp.id == 'msvc':
47 rescomp = dependencies.ExternalProgram('rc', silent=True)
48 res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']
49 suffix = 'res'
50 else:
51 m = 'Argument {!r} has a space which may not work with windres due to ' \
52 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'
53 for arg in extra_args:
54 if ' ' in arg:
55 mlog.warning(m.format(arg))
56 rescomp_name = None
57 # FIXME: Does not handle `native: true` executables, see
58 # https://github.com/mesonbuild/meson/issues/1531
59 if state.environment.is_cross_build():
60 # If cross compiling see if windres has been specified in the
61 # cross file before trying to find it another way.
62 rescomp_name = state.environment.cross_info.config['binaries'].get('windres')
63 if rescomp_name is None:
64 # Pick-up env var WINDRES if set. This is often used for
65 # specifying an arch-specific windres.
66 rescomp_name = os.environ.get('WINDRES', 'windres')
67 rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)
68 res_args = extra_args + ['@INPUT@', '@OUTPUT@']
69 suffix = 'o'
70 if not rescomp.found():
71 raise MesonException('Could not find Windows resource compiler "%s".' % rescomp_name)
72
73 res_targets = []
74
75 def add_target(src):
76 if isinstance(src, list):
77 for subsrc in src:
78 add_target(subsrc)
79 return
80
81 if hasattr(src, 'held_object'):
82 src = src.held_object
83
84 res_kwargs = {
85 'output': '@BASENAME@.' + suffix,
86 'input': [src],
87 'command': [rescomp] + res_args,
88 'depend_files': wrc_deps,
89 }
90
91 if isinstance(src, (str, mesonlib.File)):
92 name = 'file {!r}'.format(str(src))
93 elif isinstance(src, build.CustomTarget):
94 if len(src.get_outputs()) > 1:
95 raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
96
97 name = 'target {!r}'.format(src.get_id())
98 else:
99 raise MesonException('Unexpected source type {!r}. windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))
100
101 # Path separators are not allowed in target names
102 name = name.replace('/', '_').replace('\\', '_')
103
104 # instruct binutils windres to generate a preprocessor depfile
105 if comp.id != 'msvc':
106 res_kwargs['depfile'] = res_kwargs['output'] + '.d'
107 res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']
108
109 res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))
110
111 add_target(args)
112
113 return ModuleReturnValue(res_targets, [res_targets])
114
115 def initialize(*args, **kwargs):
116 return WindowsModule(*args, **kwargs)
117
[end of mesonbuild/modules/windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py
--- a/mesonbuild/modules/windows.py
+++ b/mesonbuild/modules/windows.py
@@ -88,8 +88,10 @@
'depend_files': wrc_deps,
}
- if isinstance(src, (str, mesonlib.File)):
- name = 'file {!r}'.format(str(src))
+ if isinstance(src, str):
+ name = 'file {!r}'.format(os.path.join(state.subdir, src))
+ elif isinstance(src, mesonlib.File):
+ name = 'file {!r}'.format(src.relative_name())
elif isinstance(src, build.CustomTarget):
if len(src.get_outputs()) > 1:
raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')
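
In short, the fix qualifies the target name with the subdirectory for plain strings and with the file's relative name for `File` objects, so same-named scripts no longer collide. A quick sanity check of the new scheme (simplified sketch, not the actual module code):

```python
import os

# With the subdir folded into the name, the two rsrc.rc files stay distinct.
def target_name(subdir, src):
    name = 'file {!r}'.format(os.path.join(subdir, src))
    return name.replace('/', '_').replace('\\', '_')

print(target_name('a', 'rsrc.rc'))  # "file 'a_rsrc.rc'"
print(target_name('b', 'rsrc.rc'))  # "file 'b_rsrc.rc'"
```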
| {"golden_diff": "diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py\n--- a/mesonbuild/modules/windows.py\n+++ b/mesonbuild/modules/windows.py\n@@ -88,8 +88,10 @@\n 'depend_files': wrc_deps,\n }\n \n- if isinstance(src, (str, mesonlib.File)):\n- name = 'file {!r}'.format(str(src))\n+ if isinstance(src, str):\n+ name = 'file {!r}'.format(os.path.join(state.subdir, src))\n+ elif isinstance(src, mesonlib.File):\n+ name = 'file {!r}'.format(src.relative_name())\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n", "issue": "Windows module fails on multiple resource files with same name\nI have a project with multiple subfolders that contain resource scripts named 'rsrc.rc', this worked with at least 0.44.0, but fails with current master:\r\n\r\n> meson.build:7:0: ERROR: Tried to create target \"Windows resource for file 'rsrc.rc'\", but a target of that name already exists.\r\n\r\nHere is a small testcase: [rsrcbug.zip](https://github.com/mesonbuild/meson/files/2007861/rsrcbug.zip)\r\n\n", "before_files": [{"content": "# Copyright 2015 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nfrom .. import mlog\nfrom .. import mesonlib, dependencies, build\nfrom ..mesonlib import MesonException, extract_as_list\nfrom . import get_include_args\nfrom . import ModuleReturnValue\nfrom . 
import ExtensionModule\nfrom ..interpreterbase import permittedKwargs, FeatureNewKwargs\n\nclass WindowsModule(ExtensionModule):\n\n def detect_compiler(self, compilers):\n for l in ('c', 'cpp'):\n if l in compilers:\n return compilers[l]\n raise MesonException('Resource compilation requires a C or C++ compiler.')\n\n @FeatureNewKwargs('windows.compile_resources', '0.47.0', ['depend_files'])\n @permittedKwargs({'args', 'include_directories', 'depend_files'})\n def compile_resources(self, state, args, kwargs):\n comp = self.detect_compiler(state.compilers)\n\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n wrc_deps = extract_as_list(kwargs, 'depend_files', pop = True)\n inc_dirs = extract_as_list(kwargs, 'include_directories', pop = True)\n for incd in inc_dirs:\n if not isinstance(incd.held_object, (str, build.IncludeDirs)):\n raise MesonException('Resource include dirs should be include_directories().')\n extra_args += get_include_args(inc_dirs)\n\n if comp.id == 'msvc':\n rescomp = dependencies.ExternalProgram('rc', silent=True)\n res_args = extra_args + ['/nologo', '/fo@OUTPUT@', '@INPUT@']\n suffix = 'res'\n else:\n m = 'Argument {!r} has a space which may not work with windres due to ' \\\n 'a MinGW bug: https://sourceware.org/bugzilla/show_bug.cgi?id=4933'\n for arg in extra_args:\n if ' ' in arg:\n mlog.warning(m.format(arg))\n rescomp_name = None\n # FIXME: Does not handle `native: true` executables, see\n # https://github.com/mesonbuild/meson/issues/1531\n if state.environment.is_cross_build():\n # If cross compiling see if windres has been specified in the\n # cross file before trying to find it another way.\n rescomp_name = state.environment.cross_info.config['binaries'].get('windres')\n if rescomp_name is None:\n # Pick-up env var WINDRES if set. This is often used for\n # specifying an arch-specific windres.\n rescomp_name = os.environ.get('WINDRES', 'windres')\n rescomp = dependencies.ExternalProgram(rescomp_name, silent=True)\n res_args = extra_args + ['@INPUT@', '@OUTPUT@']\n suffix = 'o'\n if not rescomp.found():\n raise MesonException('Could not find Windows resource compiler \"%s\".' % rescomp_name)\n\n res_targets = []\n\n def add_target(src):\n if isinstance(src, list):\n for subsrc in src:\n add_target(subsrc)\n return\n\n if hasattr(src, 'held_object'):\n src = src.held_object\n\n res_kwargs = {\n 'output': '@BASENAME@.' + suffix,\n 'input': [src],\n 'command': [rescomp] + res_args,\n 'depend_files': wrc_deps,\n }\n\n if isinstance(src, (str, mesonlib.File)):\n name = 'file {!r}'.format(str(src))\n elif isinstance(src, build.CustomTarget):\n if len(src.get_outputs()) > 1:\n raise MesonException('windows.compile_resources does not accept custom targets with more than 1 output.')\n\n name = 'target {!r}'.format(src.get_id())\n else:\n raise MesonException('Unexpected source type {!r}. 
windows.compile_resources accepts only strings, files, custom targets, and lists thereof.'.format(src))\n\n # Path separators are not allowed in target names\n name = name.replace('/', '_').replace('\\\\', '_')\n\n # instruct binutils windres to generate a preprocessor depfile\n if comp.id != 'msvc':\n res_kwargs['depfile'] = res_kwargs['output'] + '.d'\n res_kwargs['command'] += ['--preprocessor-arg=-MD', '--preprocessor-arg=-MQ@OUTPUT@', '--preprocessor-arg=-MF@DEPFILE@']\n\n res_targets.append(build.CustomTarget('Windows resource for ' + name, state.subdir, state.subproject, res_kwargs))\n\n add_target(args)\n\n return ModuleReturnValue(res_targets, [res_targets])\n\ndef initialize(*args, **kwargs):\n return WindowsModule(*args, **kwargs)\n", "path": "mesonbuild/modules/windows.py"}]} | 2,053 | 181 |
gh_patches_debug_30757 | rasdani/github-patches | git_diff | pypa__setuptools-2306 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't install setup_requires when run as a PEP 517 backend
PEP 517 separates the responsibilities around build requirements: the backend is responsible for saying what is required, and the frontend is responsible for ensuring they're available to the build.
In setuptools, build requirements are defined by `setup_requires`, and these get passed through to PEP 517's `get_requires_for_build_*` hooks. There is a monkeypatch to return them from these hooks and prevent setuptools trying to install them itself:
https://github.com/pypa/setuptools/blob/5e60dc50e540a942aeb558aabe7d92ab7eb13d4b/setuptools/build_meta.py#L56-L75
But something similar to that - preventing installation - should really be in place for all the PEP 517 hooks, because a PEP 517 backend isn't responsible for installing dependencies.
Why does this matter? Pip has the `--no-build-isolation` option, with which the caller can declare that they have taken care of build dependencies and pip should try to build the package in the current environment. This is useful for downstream packagers, and for experimenting with different versions of your build dependencies. But setuptools doesn't know about this, so it charges ahead and attempts to install things when that's not what you want.
The workaround I'm looking at is to only specify `setup_requires` if `'egg_info' in sys.argv`, as is the case when the `get_requires_for_build_*` hooks are called. But this is clearly not ideal.
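Concretely, that workaround looks something like this in `setup.py` (a sketch only; the package name and the requirement are placeholders):

```python
# Only declare setup_requires on the egg_info path used by the
# get_requires_for_build_* hooks, so the actual build hooks never ask
# setuptools to install anything itself.
import sys
from setuptools import setup

setup(
    name="example",  # placeholder
    setup_requires=["cython"] if "egg_info" in sys.argv else [],  # placeholder requirement
)
```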
</issue>
<code>
[start of setuptools/build_meta.py]
1 """A PEP 517 interface to setuptools
2
3 Previously, when a user or a command line tool (let's call it a "frontend")
4 needed to make a request of setuptools to take a certain action, for
5 example, generating a list of installation requirements, the frontend would
6 would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.
7
8 PEP 517 defines a different method of interfacing with setuptools. Rather
9 than calling "setup.py" directly, the frontend should:
10
11 1. Set the current directory to the directory with a setup.py file
12 2. Import this module into a safe python interpreter (one in which
13 setuptools can potentially set global variables or crash hard).
14 3. Call one of the functions defined in PEP 517.
15
16 What each function does is defined in PEP 517. However, here is a "casual"
17 definition of the functions (this definition should not be relied on for
18 bug reports or API stability):
19
20 - `build_wheel`: build a wheel in the folder and return the basename
21 - `get_requires_for_build_wheel`: get the `setup_requires` to build
22 - `prepare_metadata_for_build_wheel`: get the `install_requires`
23 - `build_sdist`: build an sdist in the folder and return the basename
24 - `get_requires_for_build_sdist`: get the `setup_requires` to build
25
26 Again, this is not a formal definition! Just a "taste" of the module.
27 """
28
29 import io
30 import os
31 import sys
32 import tokenize
33 import shutil
34 import contextlib
35
36 import setuptools
37 import distutils
38 from setuptools.py31compat import TemporaryDirectory
39
40 from pkg_resources import parse_requirements
41
42 __all__ = ['get_requires_for_build_sdist',
43 'get_requires_for_build_wheel',
44 'prepare_metadata_for_build_wheel',
45 'build_wheel',
46 'build_sdist',
47 '__legacy__',
48 'SetupRequirementsError']
49
50
51 class SetupRequirementsError(BaseException):
52 def __init__(self, specifiers):
53 self.specifiers = specifiers
54
55
56 class Distribution(setuptools.dist.Distribution):
57 def fetch_build_eggs(self, specifiers):
58 specifier_list = list(map(str, parse_requirements(specifiers)))
59
60 raise SetupRequirementsError(specifier_list)
61
62 @classmethod
63 @contextlib.contextmanager
64 def patch(cls):
65 """
66 Replace
67 distutils.dist.Distribution with this class
68 for the duration of this context.
69 """
70 orig = distutils.core.Distribution
71 distutils.core.Distribution = cls
72 try:
73 yield
74 finally:
75 distutils.core.Distribution = orig
76
77
78 def _to_str(s):
79 """
80 Convert a filename to a string (on Python 2, explicitly
81 a byte string, not Unicode) as distutils checks for the
82 exact type str.
83 """
84 if sys.version_info[0] == 2 and not isinstance(s, str):
85 # Assume it's Unicode, as that's what the PEP says
86 # should be provided.
87 return s.encode(sys.getfilesystemencoding())
88 return s
89
90
91 def _get_immediate_subdirectories(a_dir):
92 return [name for name in os.listdir(a_dir)
93 if os.path.isdir(os.path.join(a_dir, name))]
94
95
96 def _file_with_extension(directory, extension):
97 matching = (
98 f for f in os.listdir(directory)
99 if f.endswith(extension)
100 )
101 file, = matching
102 return file
103
104
105 def _open_setup_script(setup_script):
106 if not os.path.exists(setup_script):
107 # Supply a default setup.py
108 return io.StringIO(u"from setuptools import setup; setup()")
109
110 return getattr(tokenize, 'open', open)(setup_script)
111
112
113 class _BuildMetaBackend(object):
114
115 def _fix_config(self, config_settings):
116 config_settings = config_settings or {}
117 config_settings.setdefault('--global-option', [])
118 return config_settings
119
120 def _get_build_requires(self, config_settings, requirements):
121 config_settings = self._fix_config(config_settings)
122
123 sys.argv = sys.argv[:1] + ['egg_info'] + \
124 config_settings["--global-option"]
125 try:
126 with Distribution.patch():
127 self.run_setup()
128 except SetupRequirementsError as e:
129 requirements += e.specifiers
130
131 return requirements
132
133 def run_setup(self, setup_script='setup.py'):
134 # Note that we can reuse our build directory between calls
135 # Correctness comes first, then optimization later
136 __file__ = setup_script
137 __name__ = '__main__'
138
139 with _open_setup_script(__file__) as f:
140 code = f.read().replace(r'\r\n', r'\n')
141
142 exec(compile(code, __file__, 'exec'), locals())
143
144 def get_requires_for_build_wheel(self, config_settings=None):
145 config_settings = self._fix_config(config_settings)
146 return self._get_build_requires(
147 config_settings, requirements=['wheel'])
148
149 def get_requires_for_build_sdist(self, config_settings=None):
150 config_settings = self._fix_config(config_settings)
151 return self._get_build_requires(config_settings, requirements=[])
152
153 def prepare_metadata_for_build_wheel(self, metadata_directory,
154 config_settings=None):
155 sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
156 _to_str(metadata_directory)]
157 self.run_setup()
158
159 dist_info_directory = metadata_directory
160 while True:
161 dist_infos = [f for f in os.listdir(dist_info_directory)
162 if f.endswith('.dist-info')]
163
164 if (
165 len(dist_infos) == 0 and
166 len(_get_immediate_subdirectories(dist_info_directory)) == 1
167 ):
168
169 dist_info_directory = os.path.join(
170 dist_info_directory, os.listdir(dist_info_directory)[0])
171 continue
172
173 assert len(dist_infos) == 1
174 break
175
176 # PEP 517 requires that the .dist-info directory be placed in the
177 # metadata_directory. To comply, we MUST copy the directory to the root
178 if dist_info_directory != metadata_directory:
179 shutil.move(
180 os.path.join(dist_info_directory, dist_infos[0]),
181 metadata_directory)
182 shutil.rmtree(dist_info_directory, ignore_errors=True)
183
184 return dist_infos[0]
185
186 def _build_with_temp_dir(self, setup_command, result_extension,
187 result_directory, config_settings):
188 config_settings = self._fix_config(config_settings)
189 result_directory = os.path.abspath(result_directory)
190
191 # Build in a temporary directory, then copy to the target.
192 os.makedirs(result_directory, exist_ok=True)
193 with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:
194 sys.argv = (sys.argv[:1] + setup_command +
195 ['--dist-dir', tmp_dist_dir] +
196 config_settings["--global-option"])
197 self.run_setup()
198
199 result_basename = _file_with_extension(
200 tmp_dist_dir, result_extension)
201 result_path = os.path.join(result_directory, result_basename)
202 if os.path.exists(result_path):
203 # os.rename will fail overwriting on non-Unix.
204 os.remove(result_path)
205 os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)
206
207 return result_basename
208
209 def build_wheel(self, wheel_directory, config_settings=None,
210 metadata_directory=None):
211 return self._build_with_temp_dir(['bdist_wheel'], '.whl',
212 wheel_directory, config_settings)
213
214 def build_sdist(self, sdist_directory, config_settings=None):
215 return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],
216 '.tar.gz', sdist_directory,
217 config_settings)
218
219
220 class _BuildMetaLegacyBackend(_BuildMetaBackend):
221 """Compatibility backend for setuptools
222
223 This is a version of setuptools.build_meta that endeavors
224 to maintain backwards
225 compatibility with pre-PEP 517 modes of invocation. It
226 exists as a temporary
227 bridge between the old packaging mechanism and the new
228 packaging mechanism,
229 and will eventually be removed.
230 """
231 def run_setup(self, setup_script='setup.py'):
232 # In order to maintain compatibility with scripts assuming that
233 # the setup.py script is in a directory on the PYTHONPATH, inject
234 # '' into sys.path. (pypa/setuptools#1642)
235 sys_path = list(sys.path) # Save the original path
236
237 script_dir = os.path.dirname(os.path.abspath(setup_script))
238 if script_dir not in sys.path:
239 sys.path.insert(0, script_dir)
240
241 # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to
242 # get the directory of the source code. They expect it to refer to the
243 # setup.py script.
244 sys_argv_0 = sys.argv[0]
245 sys.argv[0] = setup_script
246
247 try:
248 super(_BuildMetaLegacyBackend,
249 self).run_setup(setup_script=setup_script)
250 finally:
251 # While PEP 517 frontends should be calling each hook in a fresh
252 # subprocess according to the standard (and thus it should not be
253 # strictly necessary to restore the old sys.path), we'll restore
254 # the original path so that the path manipulation does not persist
255 # within the hook after run_setup is called.
256 sys.path[:] = sys_path
257 sys.argv[0] = sys_argv_0
258
259
260 # The primary backend
261 _BACKEND = _BuildMetaBackend()
262
263 get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
264 get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
265 prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
266 build_wheel = _BACKEND.build_wheel
267 build_sdist = _BACKEND.build_sdist
268
269
270 # The legacy backend
271 __legacy__ = _BuildMetaLegacyBackend()
272
[end of setuptools/build_meta.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py
--- a/setuptools/build_meta.py
+++ b/setuptools/build_meta.py
@@ -75,6 +75,22 @@
distutils.core.Distribution = orig
[email protected]
+def no_install_setup_requires():
+ """Temporarily disable installing setup_requires
+
+ Under PEP 517, the backend reports build dependencies to the frontend,
+ and the frontend is responsible for ensuring they're installed.
+ So setuptools (acting as a backend) should not try to install them.
+ """
+ orig = setuptools._install_setup_requires
+ setuptools._install_setup_requires = lambda attrs: None
+ try:
+ yield
+ finally:
+ setuptools._install_setup_requires = orig
+
+
def _to_str(s):
"""
Convert a filename to a string (on Python 2, explicitly
@@ -154,7 +170,8 @@
config_settings=None):
sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
_to_str(metadata_directory)]
- self.run_setup()
+ with no_install_setup_requires():
+ self.run_setup()
dist_info_directory = metadata_directory
while True:
@@ -194,7 +211,8 @@
sys.argv = (sys.argv[:1] + setup_command +
['--dist-dir', tmp_dist_dir] +
config_settings["--global-option"])
- self.run_setup()
+ with no_install_setup_requires():
+ self.run_setup()
result_basename = _file_with_extension(
tmp_dist_dir, result_extension)
| {"golden_diff": "diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py\n--- a/setuptools/build_meta.py\n+++ b/setuptools/build_meta.py\n@@ -75,6 +75,22 @@\n distutils.core.Distribution = orig\n \n \[email protected]\n+def no_install_setup_requires():\n+ \"\"\"Temporarily disable installing setup_requires\n+\n+ Under PEP 517, the backend reports build dependencies to the frontend,\n+ and the frontend is responsible for ensuring they're installed.\n+ So setuptools (acting as a backend) should not try to install them.\n+ \"\"\"\n+ orig = setuptools._install_setup_requires\n+ setuptools._install_setup_requires = lambda attrs: None\n+ try:\n+ yield\n+ finally:\n+ setuptools._install_setup_requires = orig\n+\n+\n def _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n@@ -154,7 +170,8 @@\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n- self.run_setup()\n+ with no_install_setup_requires():\n+ self.run_setup()\n \n dist_info_directory = metadata_directory\n while True:\n@@ -194,7 +211,8 @@\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n- self.run_setup()\n+ with no_install_setup_requires():\n+ self.run_setup()\n \n result_basename = _file_with_extension(\n tmp_dist_dir, result_extension)\n", "issue": "Don't install setup_requires when run as a PEP 517 backend\nPEP 517 separates the responsibilities around build requirements: the backend is responsible for saying what is required, and the frontend is responsible for ensuring they're available to the build.\r\n\r\nIn setuptools, build requirements are defined by `setup_requires`, and these get passed through to PEP 517's `get_requires_for_build_*` hooks. There is a monkeypatch to return them from these hooks and prevent setuptools trying to install them itself:\r\n\r\nhttps://github.com/pypa/setuptools/blob/5e60dc50e540a942aeb558aabe7d92ab7eb13d4b/setuptools/build_meta.py#L56-L75\r\n\r\nBut something similar to that - preventing installation - should really be in place for all the PEP 517 hooks, because a PEP 517 backend isn't responsible for installing dependencies.\r\n\r\nWhy does this matter? Pip has the `--no-build-isolation` option, with which the caller can declare that they have taken care of build dependencies and pip should try to build the package in the current environment. This is useful for downstream packagers, and for experimenting with different versions of your build dependencies. But setuptools doesn't know about this, so it charges ahead and attempts to install things when that's not what you want.\r\n\r\nThe workaround I'm looking at is to only specify `setup_requires` if `'egg_info' in sys.argv`, as is the case when the `get_requires_for_build_*` hooks are called. But this is clearly not ideal.\n", "before_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. 
Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with 
_open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(\n config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (\n len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1\n ):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def _build_with_temp_dir(self, setup_command, result_extension,\n result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n result_directory = os.path.abspath(result_directory)\n\n # Build in a temporary directory, then copy to the target.\n os.makedirs(result_directory, exist_ok=True)\n with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n\n result_basename = _file_with_extension(\n tmp_dist_dir, result_extension)\n result_path = os.path.join(result_directory, result_basename)\n if os.path.exists(result_path):\n # os.rename will fail overwriting on non-Unix.\n os.remove(result_path)\n os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n\n return result_basename\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n wheel_directory, config_settings)\n\n def build_sdist(self, sdist_directory, config_settings=None):\n return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n '.tar.gz', sdist_directory,\n config_settings)\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors\n to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It\n exists as a temporary\n bridge between the old packaging mechanism and the new\n packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. 
(pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to\n # get the directory of the source code. They expect it to refer to the\n # setup.py script.\n sys_argv_0 = sys.argv[0]\n sys.argv[0] = setup_script\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n sys.argv[0] = sys_argv_0\n\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}]} | 3,720 | 364 |
gh_patches_debug_36499 | rasdani/github-patches | git_diff | pytorch__ignite-380 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue with metric arithmetics
I'm trying to define my metric as
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = 1.0 - accuracy
```
and I got the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-70-c4c69e70a6d5> in <module>()
2
3 accuracy = Accuracy()
----> 4 error_metric = 1.0 - accuracy
TypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'
```
But I can define
```python
from ignite.metrics import Accuracy
accuracy = Accuracy()
error_metric = (accuracy - 1.0) * -1.0
```
cc @zasdfgbnm
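
For context, `1.0 - accuracy` makes Python try `float.__sub__` first and then fall back to the right operand's reflected hook, so the metric class needs `__rsub__` (and the other `__r*__` methods) for constants on the left-hand side to work. A minimal sketch of the pattern, not the actual ignite implementation:

```python
# Toy example of reflected arithmetic; the real Metric returns a MetricsLambda
# instead of a plain value.
class Toy:
    def __init__(self, value):
        self.value = value

    def __sub__(self, other):   # covers: toy - 1.0
        return Toy(self.value - other)

    def __rsub__(self, other):  # covers: 1.0 - toy
        return Toy(other - self.value)

print((1.0 - Toy(0.75)).value)  # 0.25, only possible because __rsub__ exists
```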
</issue>
<code>
[start of ignite/metrics/metric.py]
1 from abc import ABCMeta, abstractmethod
2 from ignite._six import with_metaclass
3 from ignite.engine import Events
4 import torch
5
6
7 class Metric(with_metaclass(ABCMeta, object)):
8 """
9 Base class for all Metrics.
10
11 Args:
12 output_transform (callable, optional): a callable that is used to transform the
13 :class:`ignite.engine.Engine`'s `process_function`'s output into the
14 form expected by the metric. This can be useful if, for example, you have a multi-output model and
15 you want to compute the metric with respect to one of the outputs.
16
17 """
18
19 def __init__(self, output_transform=lambda x: x):
20 self._output_transform = output_transform
21 self.reset()
22
23 @abstractmethod
24 def reset(self):
25 """
26 Resets the metric to to it's initial state.
27
28 This is called at the start of each epoch.
29 """
30 pass
31
32 @abstractmethod
33 def update(self, output):
34 """
35 Updates the metric's state using the passed batch output.
36
37 This is called once for each batch.
38
39 Args:
40 output: the is the output from the engine's process function
41 """
42 pass
43
44 @abstractmethod
45 def compute(self):
46 """
47 Computes the metric based on it's accumulated state.
48
49 This is called at the end of each epoch.
50
51 Returns:
52 Any: the actual quantity of interest
53
54 Raises:
55 NotComputableError: raised when the metric cannot be computed
56 """
57 pass
58
59 def started(self, engine):
60 self.reset()
61
62 @torch.no_grad()
63 def iteration_completed(self, engine):
64 output = self._output_transform(engine.state.output)
65 self.update(output)
66
67 def completed(self, engine, name):
68 engine.state.metrics[name] = self.compute()
69
70 def attach(self, engine, name):
71 engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
72 if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):
73 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
74 if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
75 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
76
77 def __add__(self, other):
78 from ignite.metrics import MetricsLambda
79 return MetricsLambda(lambda x, y: x + y, self, other)
80
81 def __sub__(self, other):
82 from ignite.metrics import MetricsLambda
83 return MetricsLambda(lambda x, y: x - y, self, other)
84
85 def __mul__(self, other):
86 from ignite.metrics import MetricsLambda
87 return MetricsLambda(lambda x, y: x * y, self, other)
88
89 def __pow__(self, other):
90 from ignite.metrics import MetricsLambda
91 return MetricsLambda(lambda x, y: x ** y, self, other)
92
93 def __mod__(self, other):
94 from ignite.metrics import MetricsLambda
95 return MetricsLambda(lambda x, y: x % y, self, other)
96
97 def __div__(self, other):
98 from ignite.metrics import MetricsLambda
99 return MetricsLambda(lambda x, y: x.__div__(y), self, other)
100
101 def __truediv__(self, other):
102 from ignite.metrics import MetricsLambda
103 return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
104
105 def __floordiv__(self, other):
106 from ignite.metrics import MetricsLambda
107 return MetricsLambda(lambda x, y: x // y, self, other)
108
[end of ignite/metrics/metric.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -78,18 +78,34 @@
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x + y, self, other)
+ def __radd__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x + y, other, self)
+
def __sub__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x - y, self, other)
+ def __rsub__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x - y, other, self)
+
def __mul__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x * y, self, other)
+ def __rmul__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x * y, other, self)
+
def __pow__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x ** y, self, other)
+ def __rpow__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x ** y, other, self)
+
def __mod__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x % y, self, other)
@@ -98,10 +114,18 @@
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x.__div__(y), self, other)
+ def __rdiv__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x.__div__(y), other, self)
+
def __truediv__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
+ def __rtruediv__(self, other):
+ from ignite.metrics import MetricsLambda
+ return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)
+
def __floordiv__(self, other):
from ignite.metrics import MetricsLambda
return MetricsLambda(lambda x, y: x // y, self, other)
| {"golden_diff": "diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py\n--- a/ignite/metrics/metric.py\n+++ b/ignite/metrics/metric.py\n@@ -78,18 +78,34 @@\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n \n+ def __radd__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x + y, other, self)\n+\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n \n+ def __rsub__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x - y, other, self)\n+\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n \n+ def __rmul__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x * y, other, self)\n+\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n \n+ def __rpow__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x ** y, other, self)\n+\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n@@ -98,10 +114,18 @@\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n \n+ def __rdiv__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n+\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n \n+ def __rtruediv__(self, other):\n+ from ignite.metrics import MetricsLambda\n+ return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n+\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n", "issue": "Issue with metric arithmetics\nI'm trying to define my metric as \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = 1.0 - accuracy\r\n```\r\nand I got the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-70-c4c69e70a6d5> in <module>()\r\n 2 \r\n 3 accuracy = Accuracy()\r\n----> 4 error_metric = 1.0 - accuracy\r\n\r\nTypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'\r\n```\r\nBut I can define \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = (accuracy - 1.0) * -1.0\r\n```\r\n\r\ncc @zasdfgbnm \nIssue with metric arithmetics\nI'm trying to define my metric as \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = 1.0 - accuracy\r\n```\r\nand I got the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-70-c4c69e70a6d5> in <module>()\r\n 2 \r\n 3 accuracy = Accuracy()\r\n----> 4 error_metric = 1.0 - accuracy\r\n\r\nTypeError: unsupported operand type(s) for -: 'float' and 'Accuracy'\r\n```\r\nBut I can define \r\n```python\r\nfrom ignite.metrics import Accuracy\r\n\r\naccuracy = Accuracy()\r\nerror_metric = (accuracy - 1.0) * -1.0\r\n```\r\n\r\ncc @zasdfgbnm \n", "before_files": [{"content": "from abc 
import ABCMeta, abstractmethod\nfrom ignite._six import with_metaclass\nfrom ignite.engine import Events\nimport torch\n\n\nclass Metric(with_metaclass(ABCMeta, object)):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x):\n self._output_transform = output_transform\n self.reset()\n\n @abstractmethod\n def reset(self):\n \"\"\"\n Resets the metric to to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output):\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self):\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest\n\n Raises:\n NotComputableError: raised when the metric cannot be computed\n \"\"\"\n pass\n\n def started(self, engine):\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine):\n output = self._output_transform(engine.state.output)\n self.update(output)\n\n def completed(self, engine, name):\n engine.state.metrics[name] = self.compute()\n\n def attach(self, engine, name):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n return MetricsLambda(lambda x, y: x // y, self, other)\n", "path": "ignite/metrics/metric.py"}]} | 1,875 | 581 |
gh_patches_debug_30714 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mess in integrations setup
Currently we have two issues with integrations setup:
1. 'setup' is called for all integrations, regardless of the 'publish' flag.
2. If an integration exists in the db and in the config file with the same name, then only the integration from the db will be set up. That's bad, because any integration from the config file will be copied to the db right after mindsdb starts, and any changes in the config file after that will have no effect.
</issue>
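
As a rough sketch of the behaviour the issue is asking for (not the actual patch), the startup logic could gate `setup_integration` on each record's 'publish' flag and only copy a config-file integration into the db when no record with that name already exists. The helper names below mirror the imports in `mindsdb/__main__.py` in the listing further down; treating each integration record as a dict with a 'publish' key is an assumption made for illustration.

```python
# Hypothetical sketch of the desired startup flow; helper names mirror the
# imports used in mindsdb/__main__.py, and the dict-like 'publish' access is
# an assumption for illustration only.
from mindsdb.interfaces.database.integrations import (
    add_db_integration, get_db_integration, get_db_integrations)


def setup_integrations(config, dbw, model_data_arr, company_id, is_cloud):
    if not is_cloud:
        for name in get_db_integrations(company_id, sensitive_info=True):
            record = get_db_integration(name, company_id)
            if record and record.get('publish'):
                # only publish-enabled integrations get set up and registered
                dbw.setup_integration(name)
                dbw.register_predictors(model_data_arr, integration_name=name)

    for name, cfg in config.get('integrations', {}).items():
        if get_db_integration(name, None) is not None:
            # a db record with the same name already exists; leave it alone
            continue
        add_db_integration(name, cfg, None)
        if cfg.get('publish', False) and not is_cloud:
            dbw.setup_integration(name)
            dbw.register_predictors(model_data_arr, integration_name=name)
```
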
<code>
[start of mindsdb/__main__.py]
1 import atexit
2 import traceback
3 import sys
4 import os
5 import time
6 import asyncio
7 import signal
8
9 import torch.multiprocessing as mp
10
11 from mindsdb.utilities.config import Config, STOP_THREADS_EVENT
12 from mindsdb.interfaces.model.model_interface import ray_based, ModelInterface
13 from mindsdb.api.http.start import start as start_http
14 from mindsdb.api.mysql.start import start as start_mysql
15 from mindsdb.api.mongo.start import start as start_mongo
16 from mindsdb.utilities.ps import is_pid_listen_port, get_child_pids
17 from mindsdb.utilities.functions import args_parse
18 from mindsdb.interfaces.database.database import DatabaseWrapper
19 from mindsdb.utilities.log import log
20
21 from mindsdb.interfaces.database.integrations import get_db_integrations
22
23 COMPANY_ID = os.environ.get('MINDSDB_COMPANY_ID', None)
24
25
26 def close_api_gracefully(apis):
27 try:
28 for api in apis.values():
29 process = api['process']
30 childs = get_child_pids(process.pid)
31 for p in childs:
32 try:
33 os.kill(p, signal.SIGTERM)
34 except Exception:
35 p.kill()
36 sys.stdout.flush()
37 process.terminate()
38 process.join()
39 sys.stdout.flush()
40 if ray_based:
41 os.system('ray stop --force')
42 except KeyboardInterrupt:
43 sys.exit(0)
44
45
46 if __name__ == '__main__':
47 mp.freeze_support()
48 args = args_parse()
49 config = Config()
50
51 if args.verbose is True:
52 # Figure this one out later
53 pass
54
55 os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']
56 os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']
57
58 # Switch to this once the native interface has it's own thread :/
59 ctx = mp.get_context('spawn')
60
61 from mindsdb.__about__ import __version__ as mindsdb_version
62 print(f'Version {mindsdb_version}')
63
64 print(f'Configuration file:\n {config.config_path}')
65 print(f"Storage path:\n {config['paths']['root']}")
66
67 # @TODO Backwards compatibiltiy for tests, remove later
68 from mindsdb.interfaces.database.integrations import add_db_integration, get_db_integration
69 dbw = DatabaseWrapper(COMPANY_ID)
70 model_interface = ModelInterface()
71 raw_model_data_arr = model_interface.get_models()
72 model_data_arr = []
73 for model in raw_model_data_arr:
74 if model['status'] == 'complete':
75 x = model_interface.get_model_data(model['name'])
76 try:
77 model_data_arr.append(model_interface.get_model_data(model['name']))
78 except Exception:
79 pass
80
81 is_cloud = config.get('cloud', False)
82 if not is_cloud:
83 for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):
84 print(f"Setting up integration: {integration_name}")
85 dbw.setup_integration(integration_name)
86
87 for integration_name in config.get('integrations', {}):
88 print(f'Adding: {integration_name}')
89 try:
90 it = get_db_integration(integration_name, None)
91 if it is None:
92 add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud
93 if config['integrations'][integration_name].get('publish', False) and not is_cloud:
94 dbw.setup_integration(integration_name)
95 dbw.register_predictors(model_data_arr, integration_name=integration_name)
96 except Exception as e:
97 log.error(f'\n\nError: {e} adding database integration {integration_name}\n\n')
98
99 del model_interface
100 del dbw
101 # @TODO Backwards compatibiltiy for tests, remove later
102
103 if args.api is None:
104 api_arr = ['http', 'mysql']
105 else:
106 api_arr = args.api.split(',')
107
108 apis = {
109 api: {
110 'port': config['api'][api]['port'],
111 'process': None,
112 'started': False
113 } for api in api_arr
114 }
115
116 start_functions = {
117 'http': start_http,
118 'mysql': start_mysql,
119 'mongodb': start_mongo
120 }
121
122 for api_name, api_data in apis.items():
123 if api_data['started']:
124 continue
125 print(f'{api_name} API: starting...')
126 try:
127 if api_name == 'http':
128 p = ctx.Process(target=start_functions[api_name], args=(args.verbose, args.no_studio))
129 else:
130 p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))
131 p.start()
132 api_data['process'] = p
133 except Exception as e:
134 log.error(f'Failed to start {api_name} API with exception {e}\n{traceback.format_exc()}')
135 close_api_gracefully(apis)
136 raise e
137
138 atexit.register(close_api_gracefully, apis=apis)
139
140 async def wait_api_start(api_name, pid, port):
141 timeout = 60
142 start_time = time.time()
143 started = is_pid_listen_port(pid, port)
144 while (time.time() - start_time) < timeout and started is False:
145 await asyncio.sleep(0.5)
146 started = is_pid_listen_port(pid, port)
147 return api_name, port, started
148
149 async def wait_apis_start():
150 futures = [
151 wait_api_start(api_name, api_data['process'].pid, api_data['port'])
152 for api_name, api_data in apis.items() if 'port' in api_data
153 ]
154 for i, future in enumerate(asyncio.as_completed(futures)):
155 api_name, port, started = await future
156 if started:
157 print(f"{api_name} API: started on {port}")
158 else:
159 log.error(f"ERROR: {api_name} API cant start on {port}")
160
161 ioloop = asyncio.get_event_loop()
162 ioloop.run_until_complete(wait_apis_start())
163 ioloop.close()
164
165 try:
166 for api_data in apis.values():
167 api_data['process'].join()
168 except KeyboardInterrupt:
169 print('Stopping stream integrations...')
170 STOP_THREADS_EVENT.set()
171 print('Closing app...')
172
[end of mindsdb/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py
--- a/mindsdb/__main__.py
+++ b/mindsdb/__main__.py
@@ -82,17 +82,20 @@
if not is_cloud:
for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):
print(f"Setting up integration: {integration_name}")
- dbw.setup_integration(integration_name)
+ if get_db_integration(integration_name, COMPANY_ID)['publish']:
+ # do setup and register only if it is 'publish' integration
+ dbw.setup_integration(integration_name)
+ dbw.register_predictors(model_data_arr, integration_name=integration_name)
for integration_name in config.get('integrations', {}):
print(f'Adding: {integration_name}')
try:
it = get_db_integration(integration_name, None)
- if it is None:
+ if it is None: # register and setup it only if it doesn't conflict with records in db
add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud
- if config['integrations'][integration_name].get('publish', False) and not is_cloud:
- dbw.setup_integration(integration_name)
- dbw.register_predictors(model_data_arr, integration_name=integration_name)
+ if config['integrations'][integration_name].get('publish', False) and not is_cloud:
+ dbw.setup_integration(integration_name)
+ dbw.register_predictors(model_data_arr, integration_name=integration_name)
except Exception as e:
log.error(f'\n\nError: {e} adding database integration {integration_name}\n\n')
| {"golden_diff": "diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py\n--- a/mindsdb/__main__.py\n+++ b/mindsdb/__main__.py\n@@ -82,17 +82,20 @@\n if not is_cloud:\n for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):\n print(f\"Setting up integration: {integration_name}\")\n- dbw.setup_integration(integration_name)\n+ if get_db_integration(integration_name, COMPANY_ID)['publish']:\n+ # do setup and register only if it is 'publish' integration\n+ dbw.setup_integration(integration_name)\n+ dbw.register_predictors(model_data_arr, integration_name=integration_name)\n \n for integration_name in config.get('integrations', {}):\n print(f'Adding: {integration_name}')\n try:\n it = get_db_integration(integration_name, None)\n- if it is None:\n+ if it is None: # register and setup it only if it doesn't conflict with records in db\n add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud\n- if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n- dbw.setup_integration(integration_name)\n- dbw.register_predictors(model_data_arr, integration_name=integration_name)\n+ if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n+ dbw.setup_integration(integration_name)\n+ dbw.register_predictors(model_data_arr, integration_name=integration_name)\n except Exception as e:\n log.error(f'\\n\\nError: {e} adding database integration {integration_name}\\n\\n')\n", "issue": "Mess in integrations setup\nCurrently we have two issues with integrations setup:\r\n1. 'setup' is call for all integrations, regardless 'publish' flag.\r\n2. if integration exists in db and in config file with same name, then only integration from db will be setup. 
That bad, because any integration from config file will be copied to db right after mindsdb start, and any changes in config file after that will have no effect.\n", "before_files": [{"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\nimport asyncio\nimport signal\n\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config, STOP_THREADS_EVENT\nfrom mindsdb.interfaces.model.model_interface import ray_based, ModelInterface\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.ps import is_pid_listen_port, get_child_pids\nfrom mindsdb.utilities.functions import args_parse\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.log import log\n\nfrom mindsdb.interfaces.database.integrations import get_db_integrations\n\nCOMPANY_ID = os.environ.get('MINDSDB_COMPANY_ID', None)\n\n\ndef close_api_gracefully(apis):\n try:\n for api in apis.values():\n process = api['process']\n childs = get_child_pids(process.pid)\n for p in childs:\n try:\n os.kill(p, signal.SIGTERM)\n except Exception:\n p.kill()\n sys.stdout.flush()\n process.terminate()\n process.join()\n sys.stdout.flush()\n if ray_based:\n os.system('ray stop --force')\n except KeyboardInterrupt:\n sys.exit(0)\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n args = args_parse()\n config = Config()\n\n if args.verbose is True:\n # Figure this one out later\n pass\n\n os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']\n os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']\n\n # Switch to this once the native interface has it's own thread :/\n ctx = mp.get_context('spawn')\n\n from mindsdb.__about__ import __version__ as mindsdb_version\n print(f'Version {mindsdb_version}')\n\n print(f'Configuration file:\\n {config.config_path}')\n print(f\"Storage path:\\n {config['paths']['root']}\")\n\n # @TODO Backwards compatibiltiy for tests, remove later\n from mindsdb.interfaces.database.integrations import add_db_integration, get_db_integration\n dbw = DatabaseWrapper(COMPANY_ID)\n model_interface = ModelInterface()\n raw_model_data_arr = model_interface.get_models()\n model_data_arr = []\n for model in raw_model_data_arr:\n if model['status'] == 'complete':\n x = model_interface.get_model_data(model['name'])\n try:\n model_data_arr.append(model_interface.get_model_data(model['name']))\n except Exception:\n pass\n\n is_cloud = config.get('cloud', False)\n if not is_cloud:\n for integration_name in get_db_integrations(COMPANY_ID, sensitive_info=True):\n print(f\"Setting up integration: {integration_name}\")\n dbw.setup_integration(integration_name)\n\n for integration_name in config.get('integrations', {}):\n print(f'Adding: {integration_name}')\n try:\n it = get_db_integration(integration_name, None)\n if it is None:\n add_db_integration(integration_name, config['integrations'][integration_name], None) # Setup for user `None`, since we don't need this for cloud\n if config['integrations'][integration_name].get('publish', False) and not is_cloud:\n dbw.setup_integration(integration_name)\n dbw.register_predictors(model_data_arr, integration_name=integration_name)\n except Exception as e:\n log.error(f'\\n\\nError: {e} adding database integration {integration_name}\\n\\n')\n\n del model_interface\n del dbw\n # @TODO Backwards compatibiltiy for tests, remove later\n\n if args.api is 
None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n apis = {\n api: {\n 'port': config['api'][api]['port'],\n 'process': None,\n 'started': False\n } for api in api_arr\n }\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n for api_name, api_data in apis.items():\n if api_data['started']:\n continue\n print(f'{api_name} API: starting...')\n try:\n if api_name == 'http':\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose, args.no_studio))\n else:\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))\n p.start()\n api_data['process'] = p\n except Exception as e:\n log.error(f'Failed to start {api_name} API with exception {e}\\n{traceback.format_exc()}')\n close_api_gracefully(apis)\n raise e\n\n atexit.register(close_api_gracefully, apis=apis)\n\n async def wait_api_start(api_name, pid, port):\n timeout = 60\n start_time = time.time()\n started = is_pid_listen_port(pid, port)\n while (time.time() - start_time) < timeout and started is False:\n await asyncio.sleep(0.5)\n started = is_pid_listen_port(pid, port)\n return api_name, port, started\n\n async def wait_apis_start():\n futures = [\n wait_api_start(api_name, api_data['process'].pid, api_data['port'])\n for api_name, api_data in apis.items() if 'port' in api_data\n ]\n for i, future in enumerate(asyncio.as_completed(futures)):\n api_name, port, started = await future\n if started:\n print(f\"{api_name} API: started on {port}\")\n else:\n log.error(f\"ERROR: {api_name} API cant start on {port}\")\n\n ioloop = asyncio.get_event_loop()\n ioloop.run_until_complete(wait_apis_start())\n ioloop.close()\n\n try:\n for api_data in apis.values():\n api_data['process'].join()\n except KeyboardInterrupt:\n print('Stopping stream integrations...')\n STOP_THREADS_EVENT.set()\n print('Closing app...')\n", "path": "mindsdb/__main__.py"}]} | 2,382 | 390 |
gh_patches_debug_11137 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3035 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tuple index out of range of threading.py
After upgrading from ddtrace==0.46.0 to ddtrace==0.55.4, my service crashes with an IndexError.
```
Traceback (most recent call last):
File "/my_service/services/base_service.py", line 105, in run
futures.append(executor.submit(fn=self._single_entry_point_run, entry_point=entry_point))
File "/my_service/venv/lib/python3.7/site-packages/ddtrace/contrib/futures/threading.py", line 26, in _wrap_submit
fn = args[0]
IndexError: tuple index out of range
```
I'm facing this issue even when setting futures=False.
`patch_all(celery=True, django=True, psycopg2=True, redis=True, futures=True)`
</issue>
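
The traceback shows the wrapped `submit` assuming the target callable is always the first positional argument (`args[0]`); in the failing call it is passed as the keyword `fn=`, so `args` is empty and indexing it raises. Below is a minimal, hypothetical sketch of extraction logic that tolerates both calling styles — it is not ddtrace's actual code, just an illustration of the idea.

```python
# Hypothetical sketch: find the submitted callable whether it was passed
# positionally or as the keyword argument "fn".
def _extract_target(args, kwargs):
    if "fn" in kwargs:
        fn = kwargs.pop("fn")
        fn_args = args
    else:
        fn, fn_args = args[0], args[1:]
    return fn, fn_args, kwargs


# both call shapes resolve to the same target
assert _extract_target((len, [1, 2, 3]), {})[0] is len

fn, fn_args, fn_kwargs = _extract_target((), {"fn": len, "entry_point": [1, 2]})
assert fn is len and fn_args == () and fn_kwargs == {"entry_point": [1, 2]}
```
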
<code>
[start of ddtrace/contrib/futures/threading.py]
1 import ddtrace
2
3
4 def _wrap_submit(func, instance, args, kwargs):
5 """
6 Wrap `Executor` method used to submit a work executed in another
7 thread. This wrapper ensures that a new `Context` is created and
8 properly propagated using an intermediate function.
9 """
10 # If there isn't a currently active context, then do not create one
11 # DEV: Calling `.active()` when there isn't an active context will create a new context
12 # DEV: We need to do this in case they are either:
13 # - Starting nested futures
14 # - Starting futures from outside of an existing context
15 #
16 # In either of these cases we essentially will propagate the wrong context between futures
17 #
18 # The resolution is to not create/propagate a new context if one does not exist, but let the
19 # future's thread create the context instead.
20 current_ctx = None
21 if ddtrace.tracer.context_provider._has_active_context():
22 current_ctx = ddtrace.tracer.context_provider.active()
23
24 # extract the target function that must be executed in
25 # a new thread and the `target` arguments
26 fn = args[0]
27 fn_args = args[1:]
28 return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)
29
30
31 def _wrap_execution(ctx, fn, args, kwargs):
32 """
33 Intermediate target function that is executed in a new thread;
34 it receives the original function with arguments and keyword
35 arguments, including our tracing `Context`. The current context
36 provider sets the Active context in a thread local storage
37 variable because it's outside the asynchronous loop.
38 """
39 if ctx is not None:
40 ddtrace.tracer.context_provider.activate(ctx)
41 return fn(*args, **kwargs)
42
[end of ddtrace/contrib/futures/threading.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/futures/threading.py b/ddtrace/contrib/futures/threading.py
--- a/ddtrace/contrib/futures/threading.py
+++ b/ddtrace/contrib/futures/threading.py
@@ -21,10 +21,12 @@
if ddtrace.tracer.context_provider._has_active_context():
current_ctx = ddtrace.tracer.context_provider.active()
- # extract the target function that must be executed in
- # a new thread and the `target` arguments
- fn = args[0]
- fn_args = args[1:]
+ # The target function can be provided as a kwarg argument "fn" or the first positional argument
+ if "fn" in kwargs:
+ fn = kwargs.pop("fn")
+ fn_args = args
+ else:
+ fn, fn_args = args[0], args[1:]
return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)
| {"golden_diff": "diff --git a/ddtrace/contrib/futures/threading.py b/ddtrace/contrib/futures/threading.py\n--- a/ddtrace/contrib/futures/threading.py\n+++ b/ddtrace/contrib/futures/threading.py\n@@ -21,10 +21,12 @@\n if ddtrace.tracer.context_provider._has_active_context():\n current_ctx = ddtrace.tracer.context_provider.active()\n \n- # extract the target function that must be executed in\n- # a new thread and the `target` arguments\n- fn = args[0]\n- fn_args = args[1:]\n+ # The target function can be provided as a kwarg argument \"fn\" or the first positional argument\n+ if \"fn\" in kwargs:\n+ fn = kwargs.pop(\"fn\")\n+ fn_args = args\n+ else:\n+ fn, fn_args = args[0], args[1:]\n return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)\n", "issue": "tuple index out of range of threading.py\nAfter upgrading from ddtrace==0.46.0 to version ddtrace==0.55.4 my service crash with IndexError.\r\n```\r\nTraceback (most recent call last):\r\n File \"/my_service/services/base_service.py\", line 105, in run\r\n futures.append(executor.submit(fn=self._single_entry_point_run, entry_point=entry_point))\r\n File \"/my_service/venv/lib/python3.7/site-packages/ddtrace/contrib/futures/threading.py\", line 26, in _wrap_submit\r\n fn = args[0]\r\nIndexError: tuple index out of range\r\n```\r\n\r\nI'm facing this issue even when setting futures=False.\r\n`patch_all(celery=True, django=True, psycopg2=True, redis=True, futures=True)`\r\n\n", "before_files": [{"content": "import ddtrace\n\n\ndef _wrap_submit(func, instance, args, kwargs):\n \"\"\"\n Wrap `Executor` method used to submit a work executed in another\n thread. This wrapper ensures that a new `Context` is created and\n properly propagated using an intermediate function.\n \"\"\"\n # If there isn't a currently active context, then do not create one\n # DEV: Calling `.active()` when there isn't an active context will create a new context\n # DEV: We need to do this in case they are either:\n # - Starting nested futures\n # - Starting futures from outside of an existing context\n #\n # In either of these cases we essentially will propagate the wrong context between futures\n #\n # The resolution is to not create/propagate a new context if one does not exist, but let the\n # future's thread create the context instead.\n current_ctx = None\n if ddtrace.tracer.context_provider._has_active_context():\n current_ctx = ddtrace.tracer.context_provider.active()\n\n # extract the target function that must be executed in\n # a new thread and the `target` arguments\n fn = args[0]\n fn_args = args[1:]\n return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)\n\n\ndef _wrap_execution(ctx, fn, args, kwargs):\n \"\"\"\n Intermediate target function that is executed in a new thread;\n it receives the original function with arguments and keyword\n arguments, including our tracing `Context`. The current context\n provider sets the Active context in a thread local storage\n variable because it's outside the asynchronous loop.\n \"\"\"\n if ctx is not None:\n ddtrace.tracer.context_provider.activate(ctx)\n return fn(*args, **kwargs)\n", "path": "ddtrace/contrib/futures/threading.py"}]} | 1,175 | 215 |
gh_patches_debug_20162 | rasdani/github-patches | git_diff | Kinto__kinto-120 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default bucket UUID doesn't have dashes
I've seen that default record IDs have dashes whereas the default bucket ID doesn't.
Does it make sense to try to be consistent here?
```
$ http GET http://localhost:8888/v1/buckets/e93a0bb5b7d16d4f9bfd81b6d737271c -v --auth 'mary:marypassword'
{
"data": {
"id": "e93a0bb5b7d16d4f9bfd81b6d737271c",
"last_modified": 1436191171386
},
[...]
}
```
</issue>
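
For reference, the 32-character value in the example is a valid UUID with the hyphens stripped; the standard library can render it in the canonical dashed form, which is what making the bucket id look like the record ids would amount to. This is a sketch only, not Kinto code.

```python
# Sketch: format a 32-char hex digest as a canonical dashed UUID string.
from uuid import UUID

digest = "e93a0bb5b7d16d4f9bfd81b6d737271c"   # bucket id from the example above
print(str(UUID(digest)))                      # e93a0bb5-b7d1-6d4f-9bfd-81b6d737271c
```
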
<code>
[start of kinto/views/buckets.py]
1 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
2 from pyramid.security import NO_PERMISSION_REQUIRED
3 from pyramid.view import view_config
4
5 from cliquet import resource
6 from cliquet.utils import hmac_digest, build_request
7
8 from kinto.views import NameGenerator
9
10
11 def create_bucket(request, bucket_id):
12 """Create a bucket if it doesn't exists."""
13 bucket_put = (request.method.lower() == 'put' and
14 request.path.endswith('buckets/default'))
15
16 if not bucket_put:
17 subrequest = build_request(request, {
18 'method': 'PUT',
19 'path': '/buckets/%s' % bucket_id,
20 'body': {"data": {}},
21 'headers': {'If-None-Match': '*'.encode('utf-8')}
22 })
23
24 try:
25 request.invoke_subrequest(subrequest)
26 except HTTPPreconditionFailed:
27 # The bucket already exists
28 pass
29
30
31 def create_collection(request, bucket_id):
32 subpath = request.matchdict['subpath']
33 if subpath.startswith('/collections/'):
34 collection_id = subpath.split('/')[2]
35 collection_put = (request.method.lower() == 'put' and
36 request.path.endswith(collection_id))
37 if not collection_put:
38 subrequest = build_request(request, {
39 'method': 'PUT',
40 'path': '/buckets/%s/collections/%s' % (
41 bucket_id, collection_id),
42 'body': {"data": {}},
43 'headers': {'If-None-Match': '*'.encode('utf-8')}
44 })
45 try:
46 request.invoke_subrequest(subrequest)
47 except HTTPPreconditionFailed:
48 # The collection already exists
49 pass
50
51
52 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
53 def default_bucket(request):
54 if getattr(request, 'prefixed_userid', None) is None:
55 raise HTTPForbidden # Pass through the forbidden_view_config
56
57 settings = request.registry.settings
58 hmac_secret = settings['cliquet.userid_hmac_secret']
59 # Build the user unguessable bucket_id UUID from its user_id
60 bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
61 path = request.path.replace('default', bucket_id)
62 querystring = request.url[(request.url.index(request.path) +
63 len(request.path)):]
64
65 # Make sure bucket exists
66 create_bucket(request, bucket_id)
67
68 # Make sure the collection exists
69 create_collection(request, bucket_id)
70
71 subrequest = build_request(request, {
72 'method': request.method,
73 'path': path + querystring,
74 'body': request.body
75 })
76
77 return request.invoke_subrequest(subrequest)
78
79
80 @resource.register(name='bucket',
81 collection_methods=('GET',),
82 collection_path='/buckets',
83 record_path='/buckets/{{id}}')
84 class Bucket(resource.ProtectedResource):
85 permissions = ('read', 'write', 'collection:create', 'group:create')
86
87 def __init__(self, *args, **kwargs):
88 super(Bucket, self).__init__(*args, **kwargs)
89 self.collection.id_generator = NameGenerator()
90
91 def get_parent_id(self, request):
92 # Buckets are not isolated by user, unlike Cliquet resources.
93 return ''
94
95 def delete(self):
96 result = super(Bucket, self).delete()
97
98 # Delete groups.
99 storage = self.collection.storage
100 parent_id = '/buckets/%s' % self.record_id
101 storage.delete_all(collection_id='group', parent_id=parent_id)
102
103 # Delete collections.
104 deleted = storage.delete_all(collection_id='collection',
105 parent_id=parent_id)
106
107 # Delete records.
108 id_field = self.collection.id_field
109 for collection in deleted:
110 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
111 collection[id_field])
112 storage.delete_all(collection_id='record', parent_id=parent_id)
113
114 return result
115
[end of kinto/views/buckets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -1,3 +1,6 @@
+from six import text_type
+from uuid import UUID
+
from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
from pyramid.security import NO_PERMISSION_REQUIRED
from pyramid.view import view_config
@@ -57,7 +60,8 @@
settings = request.registry.settings
hmac_secret = settings['cliquet.userid_hmac_secret']
# Build the user unguessable bucket_id UUID from its user_id
- bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
+ digest = hmac_digest(hmac_secret, request.prefixed_userid)
+ bucket_id = text_type(UUID(digest[:32]))
path = request.path.replace('default', bucket_id)
querystring = request.url[(request.url.index(request.path) +
len(request.path)):]
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -1,3 +1,6 @@\n+from six import text_type\n+from uuid import UUID\n+\n from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\n from pyramid.security import NO_PERMISSION_REQUIRED\n from pyramid.view import view_config\n@@ -57,7 +60,8 @@\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n- bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n+ digest = hmac_digest(hmac_secret, request.prefixed_userid)\n+ bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n", "issue": "Default bucket UUID doesn't have dashes\nI've seen that default record ID's have got dashes whereas default bucket id doesn't.\n\nDoes it makes sense to try to be consistent here?\n\n```\n$ http GET http://localhost:8888/v1/buckets/e93a0bb5b7d16d4f9bfd81b6d737271c -v --auth 'mary:marypassword'\n{\n \"data\": {\n \"id\": \"e93a0bb5b7d16d4f9bfd81b6d737271c\", \n \"last_modified\": 1436191171386\n }, \n [...]\n}\n```\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 
'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,780 | 217 |
gh_patches_debug_16866 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1528 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSS bundles generation breaks background images relative urls
This is a bug related to PR #1300.
</issue>
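
The issue gives no detail beyond the PR reference, but the `write_css` helper in the listing below prefixes every relative `url(...)` in a merged CSS file with a fixed `../`, which does not account for where the source stylesheet actually lives. The snippet here is a minimal, hypothetical sketch of rebasing each relative URL against the source file's own directory instead — not the actual Plone code, and it reuses the regular expression from the listing as an assumption about what counts as a relative URL.

```python
# Hypothetical sketch: rewrite relative url(...) references in a CSS string so
# they stay valid after the file is concatenated into a bundle elsewhere.
# Absolute paths, scheme-prefixed URLs and data: URIs are left untouched.
import re


def rebase_css_urls(css, source_path):
    directory = source_path.rpartition('/')[0]
    return re.sub(
        r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",   # same pattern as in combine.py
        r'\1%s/' % directory,
        css)


css = '.hero { background: url("images/bg.png"); }'
print(rebase_css_urls(css, '++plone++mytheme/css/main.css'))
# .hero { background: url("++plone++mytheme/css/images/bg.png"); }
```
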
<code>
[start of Products/CMFPlone/resources/browser/combine.py]
1 import re
2 from zExceptions import NotFound
3 from Acquisition import aq_base
4 from datetime import datetime
5 from plone.registry.interfaces import IRegistry
6 from plone.resource.file import FilesystemFile
7 from plone.resource.interfaces import IResourceDirectory
8 from Products.CMFPlone.interfaces import IBundleRegistry
9 from Products.CMFPlone.interfaces.resources import (
10 OVERRIDE_RESOURCE_DIRECTORY_NAME,
11 )
12 from StringIO import StringIO
13 from zope.component import getUtility
14 from zope.component import queryUtility
15
16 PRODUCTION_RESOURCE_DIRECTORY = "production"
17
18
19 def get_production_resource_directory():
20 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
21 if persistent_directory is None:
22 return ''
23 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
24 try:
25 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
26 except NotFound:
27 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
28 timestamp = production_folder.readFile('timestamp.txt')
29 return "%s/++unique++%s" % (
30 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
31
32
33 def get_resource(context, path):
34 if path.startswith('++plone++'):
35 # ++plone++ resources can be customized, we return their override
36 # value if any
37 overrides = get_override_directory(context)
38 filepath = path[9:]
39 if overrides.isFile(filepath):
40 return overrides.readFile(filepath)
41
42 resource = context.unrestrictedTraverse(path)
43 if isinstance(resource, FilesystemFile):
44 (directory, sep, filename) = path.rpartition('/')
45 return context.unrestrictedTraverse(directory).readFile(filename)
46 else:
47 if hasattr(aq_base(resource), 'GET'):
48 # for FileResource
49 return resource.GET()
50 else:
51 # any BrowserView
52 return resource()
53
54
55 def write_js(context, folder, meta_bundle):
56 registry = getUtility(IRegistry)
57 resources = []
58
59 # default resources
60 if meta_bundle == 'default' and registry.records.get(
61 'plone.resources/jquery.js'
62 ):
63 resources.append(get_resource(context,
64 registry.records['plone.resources/jquery.js'].value))
65 resources.append(get_resource(context,
66 registry.records['plone.resources.requirejs'].value))
67 resources.append(get_resource(context,
68 registry.records['plone.resources.configjs'].value))
69
70 # bundles
71 bundles = registry.collectionOfInterface(
72 IBundleRegistry, prefix="plone.bundles", check=False)
73 for bundle in bundles.values():
74 if bundle.merge_with == meta_bundle and bundle.jscompilation:
75 resources.append(get_resource(context, bundle.jscompilation))
76
77 fi = StringIO()
78 for script in resources:
79 fi.write(script + '\n')
80 folder.writeFile(meta_bundle + ".js", fi)
81
82
83 def write_css(context, folder, meta_bundle):
84 registry = getUtility(IRegistry)
85 resources = []
86
87 bundles = registry.collectionOfInterface(
88 IBundleRegistry, prefix="plone.bundles", check=False)
89 for bundle in bundles.values():
90 if bundle.merge_with == meta_bundle and bundle.csscompilation:
91 css = get_resource(context, bundle.csscompilation)
92 # Preserve relative urls:
93 # we prefix with '../'' any url not starting with '/'
94 # or http: or data:
95 css = re.sub(
96 r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
97 r'\1../',
98 css)
99 resources.append(css)
100
101 fi = StringIO()
102 for script in resources:
103 fi.write(script + '\n')
104 folder.writeFile(meta_bundle + ".css", fi)
105
106
107 def get_override_directory(context):
108 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
109 if persistent_directory is None:
110 return
111 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
112 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
113 return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
114
115
116 def combine_bundles(context):
117 container = get_override_directory(context)
118 if PRODUCTION_RESOURCE_DIRECTORY not in container:
119 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
120 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
121
122 # store timestamp
123 fi = StringIO()
124 fi.write(datetime.now().isoformat())
125 production_folder.writeFile("timestamp.txt", fi)
126
127 # generate new combined bundles
128 write_js(context, production_folder, 'default')
129 write_js(context, production_folder, 'logged-in')
130 write_css(context, production_folder, 'default')
131 write_css(context, production_folder, 'logged-in')
132
[end of Products/CMFPlone/resources/browser/combine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -89,12 +89,13 @@
for bundle in bundles.values():
if bundle.merge_with == meta_bundle and bundle.csscompilation:
css = get_resource(context, bundle.csscompilation)
- # Preserve relative urls:
- # we prefix with '../'' any url not starting with '/'
- # or http: or data:
+ (path, sep, filename) = bundle.csscompilation.rpartition('/')
+ # Process relative urls:
+ # we prefix with current resource path any url not starting with
+ # '/' or http: or data:
css = re.sub(
r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
- r'\1../',
+ r'\1%s/' % path,
css)
resources.append(css)
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -89,12 +89,13 @@\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n css = get_resource(context, bundle.csscompilation)\n- # Preserve relative urls:\n- # we prefix with '../'' any url not starting with '/'\n- # or http: or data:\n+ (path, sep, filename) = bundle.csscompilation.rpartition('/')\n+ # Process relative urls:\n+ # we prefix with current resource path any url not starting with\n+ # '/' or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n- r'\\1../',\n+ r'\\1%s/' % path,\n css)\n resources.append(css)\n", "issue": "CSS bundles generation breaks background images relative urls\nThis is a bug related to PR #1300.\n\n", "before_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.jscompilation:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for 
script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle and bundle.csscompilation:\n css = get_resource(context, bundle.csscompilation)\n # Preserve relative urls:\n # we prefix with '../'' any url not starting with '/'\n # or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1../',\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]} | 1,805 | 239 |
gh_patches_debug_67317 | rasdani/github-patches | git_diff | qutip__qutip-2305 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
QuTiP 4.7.4: incompatibility with the latest scipy (1.12.0)
https://github.com/qutip/qutip/blob/f5149616a4071a273e7e48a63d956836739c4569/qutip/parallel.py#L7
When the latest scipy version (1.12.0) is used, QuTiP (4.7.4) cannot be imported, since `from scipy import array` is no longer supported in scipy.
Code to reproduce the bug:
`import qutip`
Output:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import qutip
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/__init__.py:133
131 # graphics
132 from qutip.bloch import *
--> 133 from qutip.visualization import *
134 from qutip.orbital import *
135 from qutip.bloch3d import *
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/visualization.py:24
22 from qutip.qobj import Qobj, isket
23 from qutip.states import ket2dm
---> 24 from qutip.wigner import wigner
25 from qutip.tensor import tensor
26 from qutip.matplotlib_utilities import complex_phase_cmap
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/wigner.py:19
17 import qutip
18 from qutip import Qobj, ket2dm, jmat
---> 19 from qutip.parallel import parfor
20 from qutip.cy.sparse_utils import _csr_get_diag
21 from qutip.sparse import eigh
File ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/parallel.py:7
1 """
2 This function provides functions for parallel execution of loops and function
3 mappings, using the builtin Python module multiprocessing.
4 """
5 __all__ = ['parfor', 'parallel_map', 'serial_map']
----> 7 from scipy import array
8 import multiprocessing
9 from functools import partial
ImportError: cannot import name 'array' from 'scipy' (/Users/konstantin/anaconda3/envs/test/lib/python3.11/site-packages/scipy/__init__.py)
```
</issue>
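Before the full listing below, a minimal sketch of the workaround direction: it assumes the only symbol QuTiP needs from the removed re-export is the plain `array` constructor, which NumPy still provides.

```python
# Sketch only: scipy >= 1.12 dropped the `array` re-export, but numpy keeps it.
# from scipy import array   # raises ImportError on scipy 1.12.0
from numpy import array     # drop-in replacement for the removed import

values = array([1.0, 2.0, 3.0])
print(values)  # [1. 2. 3.]
```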
<code>
[start of qutip/parallel.py]
1 """
2 This function provides functions for parallel execution of loops and function
3 mappings, using the builtin Python module multiprocessing.
4 """
5 __all__ = ['parfor', 'parallel_map', 'serial_map']
6
7 from scipy import array
8 import multiprocessing
9 from functools import partial
10 import os
11 import sys
12 import signal
13 import qutip.settings as qset
14 from qutip.ui.progressbar import BaseProgressBar, TextProgressBar
15
16
17 if sys.platform == 'darwin':
18 Pool = multiprocessing.get_context('fork').Pool
19 else:
20 Pool = multiprocessing.Pool
21
22
23 def _task_wrapper(args):
24 try:
25 return args[0](*args[1])
26 except KeyboardInterrupt:
27 os.kill(args[2], signal.SIGINT)
28 sys.exit(1)
29
30
31 def _task_wrapper_with_args(args, user_args):
32 try:
33 return args[0](*args[1], **user_args)
34 except KeyboardInterrupt:
35 os.kill(args[2], signal.SIGINT)
36 sys.exit(1)
37
38
39 def parfor(func, *args, **kwargs):
40 """Executes a multi-variable function in parallel on the local machine.
41
42 Parallel execution of a for-loop over function `func` for multiple input
43 arguments and keyword arguments.
44
45 .. note::
46
47 From QuTiP 3.1, we recommend to use :func:`qutip.parallel.parallel_map`
48 instead of this function.
49
50 Parameters
51 ----------
52 func : function_type
53 A function to run in parallel on the local machine. The function 'func'
54 accepts a series of arguments that are passed to the function as
55 variables. In general, the function can have multiple input variables,
56 and these arguments must be passed in the same order as they are
57 defined in the function definition. In addition, the user can pass
58 multiple keyword arguments to the function.
59
60 The following keyword argument is reserved:
61
62 num_cpus : int
63 Number of CPU's to use. Default uses maximum number of CPU's.
64 Performance degrades if num_cpus is larger than the physical CPU
65 count of your machine.
66
67 Returns
68 -------
69 result : list
70 A ``list`` with length equal to number of input parameters
71 containing the output from `func`.
72
73 """
74 os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'
75 kw = _default_kwargs()
76 if 'num_cpus' in kwargs.keys():
77 kw['num_cpus'] = kwargs['num_cpus']
78 del kwargs['num_cpus']
79 if len(kwargs) != 0:
80 task_func = partial(_task_wrapper_with_args, user_args=kwargs)
81 else:
82 task_func = _task_wrapper
83
84 if kw['num_cpus'] > qset.num_cpus:
85 print("Requested number of CPUs (%s) " % kw['num_cpus'] +
86 "is larger than physical number (%s)." % qset.num_cpus)
87 print("Reduce 'num_cpus' for greater performance.")
88
89 pool = Pool(processes=kw['num_cpus'])
90 args = [list(arg) for arg in args]
91 var = [[args[j][i] for j in range(len(args))]
92 for i in range(len(list(args[0])))]
93 try:
94 map_args = ((func, v, os.getpid()) for v in var)
95 par_return = list(pool.map(task_func, map_args))
96
97 pool.terminate()
98 pool.join()
99 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
100 if isinstance(par_return[0], tuple):
101 par_return = [elem for elem in par_return]
102 num_elems = len(par_return[0])
103 dt = [type(ii) for ii in par_return[0]]
104 return [array([elem[ii] for elem in par_return], dtype=dt[ii])
105 for ii in range(num_elems)]
106 else:
107 return par_return
108
109 except KeyboardInterrupt:
110 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
111 pool.terminate()
112
113
114 def serial_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):
115 """
116 Serial mapping function with the same call signature as parallel_map, for
117 easy switching between serial and parallel execution. This
118 is functionally equivalent to::
119
120 result = [task(value, *task_args, **task_kwargs) for value in values]
121
122 This function work as a drop-in replacement of
123 :func:`qutip.parallel.parallel_map`.
124
125 Parameters
126 ----------
127 task : a Python function
128 The function that is to be called for each value in ``task_vec``.
129 values : array / list
130 The list or array of values for which the ``task`` function is to be
131 evaluated.
132 task_args : list / dictionary
133 The optional additional argument to the ``task`` function.
134 task_kwargs : list / dictionary
135 The optional additional keyword argument to the ``task`` function.
136 progress_bar : ProgressBar
137 Progress bar class instance for showing progress.
138
139 Returns
140 --------
141 result : list
142 The result list contains the value of
143 ``task(value, *task_args, **task_kwargs)`` for each
144 value in ``values``.
145
146 """
147 try:
148 progress_bar = kwargs['progress_bar']
149 if progress_bar is True:
150 progress_bar = TextProgressBar()
151 except:
152 progress_bar = BaseProgressBar()
153
154 progress_bar.start(len(values))
155 results = []
156 for n, value in enumerate(values):
157 progress_bar.update(n)
158 result = task(value, *task_args, **task_kwargs)
159 results.append(result)
160 progress_bar.finished()
161
162 return results
163
164
165 def parallel_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):
166 """
167 Parallel execution of a mapping of `values` to the function `task`. This
168 is functionally equivalent to::
169
170 result = [task(value, *task_args, **task_kwargs) for value in values]
171
172 Parameters
173 ----------
174 task : a Python function
175 The function that is to be called for each value in ``task_vec``.
176 values : array / list
177 The list or array of values for which the ``task`` function is to be
178 evaluated.
179 task_args : list / dictionary
180 The optional additional argument to the ``task`` function.
181 task_kwargs : list / dictionary
182 The optional additional keyword argument to the ``task`` function.
183 progress_bar : ProgressBar
184 Progress bar class instance for showing progress.
185
186 Returns
187 --------
188 result : list
189 The result list contains the value of
190 ``task(value, *task_args, **task_kwargs)`` for
191 each value in ``values``.
192
193 """
194 os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'
195 kw = _default_kwargs()
196 if 'num_cpus' in kwargs:
197 kw['num_cpus'] = kwargs['num_cpus']
198
199 try:
200 progress_bar = kwargs['progress_bar']
201 if progress_bar is True:
202 progress_bar = TextProgressBar()
203 except:
204 progress_bar = BaseProgressBar()
205
206 progress_bar.start(len(values))
207 nfinished = [0]
208
209 def _update_progress_bar(x):
210 nfinished[0] += 1
211 progress_bar.update(nfinished[0])
212
213 try:
214 pool = Pool(processes=kw['num_cpus'])
215
216 async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,
217 _update_progress_bar)
218 for value in values]
219
220 while not all([ar.ready() for ar in async_res]):
221 for ar in async_res:
222 ar.wait(timeout=0.1)
223
224 pool.terminate()
225 pool.join()
226
227 except KeyboardInterrupt as e:
228 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
229 pool.terminate()
230 pool.join()
231 raise e
232
233 progress_bar.finished()
234 os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'
235 return [ar.get() for ar in async_res]
236
237
238 def _default_kwargs():
239 settings = {'num_cpus': qset.num_cpus}
240 return settings
241
[end of qutip/parallel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutip/parallel.py b/qutip/parallel.py
--- a/qutip/parallel.py
+++ b/qutip/parallel.py
@@ -4,7 +4,7 @@
"""
__all__ = ['parfor', 'parallel_map', 'serial_map']
-from scipy import array
+from numpy import array
import multiprocessing
from functools import partial
import os
| {"golden_diff": "diff --git a/qutip/parallel.py b/qutip/parallel.py\n--- a/qutip/parallel.py\n+++ b/qutip/parallel.py\n@@ -4,7 +4,7 @@\n \"\"\"\n __all__ = ['parfor', 'parallel_map', 'serial_map']\n \n-from scipy import array\n+from numpy import array\n import multiprocessing\n from functools import partial\n import os\n", "issue": "QuTiP 4.7.4: incompatibility with the latest scipy (1.12.0)\nhttps://github.com/qutip/qutip/blob/f5149616a4071a273e7e48a63d956836739c4569/qutip/parallel.py#L7\r\n\r\nWhen the latest scipy version is used (1.12.0), QuTiP (4.7.4) cannot be imported since `from scipy import array` is no longer supported in scipy.\r\n\r\nCode to reproduce the bug: \r\n`import qutip`\r\n\r\nOutput:\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\nCell In[1], line 1\r\n----> 1 import qutip\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/__init__.py:133\r\n 131 # graphics\r\n 132 from qutip.bloch import *\r\n--> 133 from qutip.visualization import *\r\n 134 from qutip.orbital import *\r\n 135 from qutip.bloch3d import *\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/visualization.py:24\r\n 22 from qutip.qobj import Qobj, isket\r\n 23 from qutip.states import ket2dm\r\n---> 24 from qutip.wigner import wigner\r\n 25 from qutip.tensor import tensor\r\n 26 from qutip.matplotlib_utilities import complex_phase_cmap\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/wigner.py:19\r\n 17 import qutip\r\n 18 from qutip import Qobj, ket2dm, jmat\r\n---> 19 from qutip.parallel import parfor\r\n 20 from qutip.cy.sparse_utils import _csr_get_diag\r\n 21 from qutip.sparse import eigh\r\n\r\nFile ~/anaconda3/envs/test/lib/python3.11/site-packages/qutip/parallel.py:7\r\n 1 \"\"\"\r\n 2 This function provides functions for parallel execution of loops and function\r\n 3 mappings, using the builtin Python module multiprocessing.\r\n 4 \"\"\"\r\n 5 __all__ = ['parfor', 'parallel_map', 'serial_map']\r\n----> 7 from scipy import array\r\n 8 import multiprocessing\r\n 9 from functools import partial\r\n\r\nImportError: cannot import name 'array' from 'scipy' (/Users/konstantin/anaconda3/envs/test/lib/python3.11/site-packages/scipy/__init__.py)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nThis function provides functions for parallel execution of loops and function\nmappings, using the builtin Python module multiprocessing.\n\"\"\"\n__all__ = ['parfor', 'parallel_map', 'serial_map']\n\nfrom scipy import array\nimport multiprocessing\nfrom functools import partial\nimport os\nimport sys\nimport signal\nimport qutip.settings as qset\nfrom qutip.ui.progressbar import BaseProgressBar, TextProgressBar\n\n\nif sys.platform == 'darwin':\n Pool = multiprocessing.get_context('fork').Pool\nelse:\n Pool = multiprocessing.Pool\n\n\ndef _task_wrapper(args):\n try:\n return args[0](*args[1])\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef _task_wrapper_with_args(args, user_args):\n try:\n return args[0](*args[1], **user_args)\n except KeyboardInterrupt:\n os.kill(args[2], signal.SIGINT)\n sys.exit(1)\n\n\ndef parfor(func, *args, **kwargs):\n \"\"\"Executes a multi-variable function in parallel on the local machine.\n\n Parallel execution of a for-loop over function `func` for multiple input\n arguments and keyword arguments.\n\n .. 
note::\n\n From QuTiP 3.1, we recommend to use :func:`qutip.parallel.parallel_map`\n instead of this function.\n\n Parameters\n ----------\n func : function_type\n A function to run in parallel on the local machine. The function 'func'\n accepts a series of arguments that are passed to the function as\n variables. In general, the function can have multiple input variables,\n and these arguments must be passed in the same order as they are\n defined in the function definition. In addition, the user can pass\n multiple keyword arguments to the function.\n\n The following keyword argument is reserved:\n\n num_cpus : int\n Number of CPU's to use. Default uses maximum number of CPU's.\n Performance degrades if num_cpus is larger than the physical CPU\n count of your machine.\n\n Returns\n -------\n result : list\n A ``list`` with length equal to number of input parameters\n containing the output from `func`.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs.keys():\n kw['num_cpus'] = kwargs['num_cpus']\n del kwargs['num_cpus']\n if len(kwargs) != 0:\n task_func = partial(_task_wrapper_with_args, user_args=kwargs)\n else:\n task_func = _task_wrapper\n\n if kw['num_cpus'] > qset.num_cpus:\n print(\"Requested number of CPUs (%s) \" % kw['num_cpus'] +\n \"is larger than physical number (%s).\" % qset.num_cpus)\n print(\"Reduce 'num_cpus' for greater performance.\")\n\n pool = Pool(processes=kw['num_cpus'])\n args = [list(arg) for arg in args]\n var = [[args[j][i] for j in range(len(args))]\n for i in range(len(list(args[0])))]\n try:\n map_args = ((func, v, os.getpid()) for v in var)\n par_return = list(pool.map(task_func, map_args))\n\n pool.terminate()\n pool.join()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n if isinstance(par_return[0], tuple):\n par_return = [elem for elem in par_return]\n num_elems = len(par_return[0])\n dt = [type(ii) for ii in par_return[0]]\n return [array([elem[ii] for elem in par_return], dtype=dt[ii])\n for ii in range(num_elems)]\n else:\n return par_return\n\n except KeyboardInterrupt:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n\n\ndef serial_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Serial mapping function with the same call signature as parallel_map, for\n easy switching between serial and parallel execution. 
This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n This function work as a drop-in replacement of\n :func:`qutip.parallel.parallel_map`.\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for each\n value in ``values``.\n\n \"\"\"\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n results = []\n for n, value in enumerate(values):\n progress_bar.update(n)\n result = task(value, *task_args, **task_kwargs)\n results.append(result)\n progress_bar.finished()\n\n return results\n\n\ndef parallel_map(task, values, task_args=tuple(), task_kwargs={}, **kwargs):\n \"\"\"\n Parallel execution of a mapping of `values` to the function `task`. This\n is functionally equivalent to::\n\n result = [task(value, *task_args, **task_kwargs) for value in values]\n\n Parameters\n ----------\n task : a Python function\n The function that is to be called for each value in ``task_vec``.\n values : array / list\n The list or array of values for which the ``task`` function is to be\n evaluated.\n task_args : list / dictionary\n The optional additional argument to the ``task`` function.\n task_kwargs : list / dictionary\n The optional additional keyword argument to the ``task`` function.\n progress_bar : ProgressBar\n Progress bar class instance for showing progress.\n\n Returns\n --------\n result : list\n The result list contains the value of\n ``task(value, *task_args, **task_kwargs)`` for\n each value in ``values``.\n\n \"\"\"\n os.environ['QUTIP_IN_PARALLEL'] = 'TRUE'\n kw = _default_kwargs()\n if 'num_cpus' in kwargs:\n kw['num_cpus'] = kwargs['num_cpus']\n\n try:\n progress_bar = kwargs['progress_bar']\n if progress_bar is True:\n progress_bar = TextProgressBar()\n except:\n progress_bar = BaseProgressBar()\n\n progress_bar.start(len(values))\n nfinished = [0]\n\n def _update_progress_bar(x):\n nfinished[0] += 1\n progress_bar.update(nfinished[0])\n\n try:\n pool = Pool(processes=kw['num_cpus'])\n\n async_res = [pool.apply_async(task, (value,) + task_args, task_kwargs,\n _update_progress_bar)\n for value in values]\n\n while not all([ar.ready() for ar in async_res]):\n for ar in async_res:\n ar.wait(timeout=0.1)\n\n pool.terminate()\n pool.join()\n\n except KeyboardInterrupt as e:\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n pool.terminate()\n pool.join()\n raise e\n\n progress_bar.finished()\n os.environ['QUTIP_IN_PARALLEL'] = 'FALSE'\n return [ar.get() for ar in async_res]\n\n\ndef _default_kwargs():\n settings = {'num_cpus': qset.num_cpus}\n return settings\n", "path": "qutip/parallel.py"}]} | 3,495 | 86 |
gh_patches_debug_8379 | rasdani/github-patches | git_diff | kedro-org__kedro-3013 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document the LIFO order in which hooks are executed in `settings.py`
### Description
We mention that hook implementations registered in `settings.py` run in LIFO order and that auto-discovered hooks run before hooks in `settings.py`.
- [ ] We also need to document the order in which auto-discovered hooks run. Add this to: https://kedro.readthedocs.io/en/stable/hooks/introduction.html. To verify the run order, create a project and install several plugins with hooks to test.
- [ ] Add a comment in the `settings.py` template file to explain the run order of hooks
</issue>
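To make the LIFO ordering concrete, here is a small illustrative registration; the hook classes are hypothetical and only stand in for real project hooks:

```python
# Illustrative sketch with hypothetical hook classes (not from the issue).
class LoggingHooks: ...
class ValidationHooks: ...

# In settings.py, registered hooks run in Last-In-First-Out order, so
# ValidationHooks' implementations fire before LoggingHooks' for each hook spec.
HOOKS = (LoggingHooks(), ValidationHooks())
```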
<code>
[start of kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py]
1 """Project settings. There is no need to edit this file unless you want to change values
2 from the Kedro defaults. For further information, including these default values, see
3 https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html."""
4
5 # Instantiated project hooks.
6 # For example, after creating a hooks.py and defining a ProjectHooks class there, do
7 # from {{cookiecutter.python_package}}.hooks import ProjectHooks
8 # HOOKS = (ProjectHooks(),)
9
10 # Installed plugins for which to disable hook auto-registration.
11 # DISABLE_HOOKS_FOR_PLUGINS = ("kedro-viz",)
12
13 # Class that manages storing KedroSession data.
14 # from kedro.framework.session.store import BaseSessionStore
15 # SESSION_STORE_CLASS = BaseSessionStore
16 # Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.
17 # SESSION_STORE_ARGS = {
18 # "path": "./sessions"
19 # }
20
21 # Directory that holds configuration.
22 # CONF_SOURCE = "conf"
23
24 # Class that manages how configuration is loaded.
25 from kedro.config import OmegaConfigLoader # noqa: import-outside-toplevel
26
27 CONFIG_LOADER_CLASS = OmegaConfigLoader
28 # Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.
29 # CONFIG_LOADER_ARGS = {
30 # "config_patterns": {
31 # "spark" : ["spark*/"],
32 # "parameters": ["parameters*", "parameters*/**", "**/parameters*"],
33 # }
34 # }
35
36 # Class that manages Kedro's library components.
37 # from kedro.framework.context import KedroContext
38 # CONTEXT_CLASS = KedroContext
39
40 # Class that manages the Data Catalog.
41 # from kedro.io import DataCatalog
42 # DATA_CATALOG_CLASS = DataCatalog
43
[end of kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py
@@ -5,6 +5,7 @@
# Instantiated project hooks.
# For example, after creating a hooks.py and defining a ProjectHooks class there, do
# from {{cookiecutter.python_package}}.hooks import ProjectHooks
+# Hooks are executed in a Last-In-First-Out (LIFO) order.
# HOOKS = (ProjectHooks(),)
# Installed plugins for which to disable hook auto-registration.
| {"golden_diff": "diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\n--- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n+++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py\t\n@@ -5,6 +5,7 @@\n # Instantiated project hooks.\n # For example, after creating a hooks.py and defining a ProjectHooks class there, do\n # from {{cookiecutter.python_package}}.hooks import ProjectHooks\n+# Hooks are executed in a Last-In-First-Out (LIFO) order.\n # HOOKS = (ProjectHooks(),)\n \n # Installed plugins for which to disable hook auto-registration.\n", "issue": "Document the LIFO order in which hooks are executed in `settings.py`\n### Description\r\n\r\nWe mention that hook implementations registered in `settings.py` run in LIFO order and that auto discovered hooks run before hooks in `settings.py`. \r\n\r\n- [ ] We need to also document what the order is in which auto-discovered hooks run. Add this to: https://kedro.readthedocs.io/en/stable/hooks/introduction.html To verify the run order, create a project and install several plugins with hooks to test.\r\n- [ ] Add a comment in the `settings.py` template file to explain the run order of hooks\n", "before_files": [{"content": "\"\"\"Project settings. There is no need to edit this file unless you want to change values\nfrom the Kedro defaults. For further information, including these default values, see\nhttps://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html.\"\"\"\n\n# Instantiated project hooks.\n# For example, after creating a hooks.py and defining a ProjectHooks class there, do\n# from {{cookiecutter.python_package}}.hooks import ProjectHooks\n# HOOKS = (ProjectHooks(),)\n\n# Installed plugins for which to disable hook auto-registration.\n# DISABLE_HOOKS_FOR_PLUGINS = (\"kedro-viz\",)\n\n# Class that manages storing KedroSession data.\n# from kedro.framework.session.store import BaseSessionStore\n# SESSION_STORE_CLASS = BaseSessionStore\n# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.\n# SESSION_STORE_ARGS = {\n# \"path\": \"./sessions\"\n# }\n\n# Directory that holds configuration.\n# CONF_SOURCE = \"conf\"\n\n# Class that manages how configuration is loaded.\nfrom kedro.config import OmegaConfigLoader # noqa: import-outside-toplevel\n\nCONFIG_LOADER_CLASS = OmegaConfigLoader\n# Keyword arguments to pass to the `CONFIG_LOADER_CLASS` constructor.\n# CONFIG_LOADER_ARGS = {\n# \"config_patterns\": {\n# \"spark\" : [\"spark*/\"],\n# \"parameters\": [\"parameters*\", \"parameters*/**\", \"**/parameters*\"],\n# }\n# }\n\n# Class that manages Kedro's library components.\n# from kedro.framework.context import KedroContext\n# CONTEXT_CLASS = KedroContext\n\n# Class that manages the Data Catalog.\n# from kedro.io import DataCatalog\n# DATA_CATALOG_CLASS = DataCatalog\n", "path": "kedro/templates/project/{{ cookiecutter.repo_name }}/src/{{ cookiecutter.python_package }}/settings.py"}]} | 1,142 | 189 |
gh_patches_debug_12933 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-1186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`--use-default-kwarg` breaks codegen with default_factory: `Field(default=default_factory=...`
**Describe the bug**
When using `--use-default-kwarg` on models with a `default_factory`, codegen breaks with `Field(default=default_factory=...`:
```
File "src\black\parsing.py", line 127, in lib2to3_parse
black.parsing.InvalidInput: Cannot parse: 17:54: foo: Optional[Foo] = Field(default=default_factory=lambda :Foo.parse_obj({'foo': 'foo'}), title='Foo')
```
**To Reproduce**
Example schema:
```json
{
"title": "Requests",
"definitions": {
"Foo": {
"title": "Foo",
"type": "object",
"properties": {
"foo": {
"title": "Foo",
"default": "foo",
"type": "string"
}
}
},
"Bar": {
"title": "Bar",
"type": "object",
"properties": {
"foo": {
"title": "Foo",
"default": {
"foo": "foo"
},
"allOf": [
{
"$ref": "#/definitions/Foo"
}
]
}
}
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input schema.json --output model.py --use-default-kwarg
```
**Expected behavior**
A model like the following should be produced:
``` python
class Foo(BaseModel):
foo: Optional[str] = Field(default='foo', title='Foo')
class Bar(BaseModel):
foo: Optional[Foo] = Field(default_factory=lambda: Foo.parse_obj({'foo': 'foo'}), title='Foo')
```
**Version:**
✅ works in 0.16.1
❌ breaks in 0.17.0 (probably introduced by https://github.com/koxudaxi/datamodel-code-generator/pull/1047 )
- python 3.11.1
- windows 11
</issue>
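To show where the malformed `Field(default=default_factory=...` text can come from, the snippet below replays, in isolation, the naive string rewrite performed for `--use-default-kwarg` (compare the `field` property in `base_model.py` further down); it is a standalone sketch, not the library's actual call path:

```python
# A default_factory field as rendered by the code generator:
result = "Field(default_factory=lambda: Foo.parse_obj({'foo': 'foo'}), title='Foo')"

# The rewrite only skips strings starting with 'Field(...', so fields that use
# default_factory are rewritten too, yielding invalid Python:
if not result.startswith("Field(..."):
    result = result.replace("Field(", "Field(default=")

print(result)  # Field(default=default_factory=lambda: ..., title='Foo')
```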
<code>
[start of datamodel_code_generator/model/pydantic/base_model.py]
1 from __future__ import annotations
2
3 from pathlib import Path
4 from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple, Union
5
6 from pydantic import Field
7
8 from datamodel_code_generator import cached_property
9 from datamodel_code_generator.imports import Import
10 from datamodel_code_generator.model import (
11 ConstraintsBase,
12 DataModel,
13 DataModelFieldBase,
14 )
15 from datamodel_code_generator.model.base import UNDEFINED
16 from datamodel_code_generator.model.pydantic.imports import IMPORT_EXTRA, IMPORT_FIELD
17 from datamodel_code_generator.reference import Reference
18 from datamodel_code_generator.types import chain_as_tuple
19
20
21 class Constraints(ConstraintsBase):
22 gt: Optional[Union[float, int]] = Field(None, alias='exclusiveMinimum')
23 ge: Optional[Union[float, int]] = Field(None, alias='minimum')
24 lt: Optional[Union[float, int]] = Field(None, alias='exclusiveMaximum')
25 le: Optional[Union[float, int]] = Field(None, alias='maximum')
26 multiple_of: Optional[float] = Field(None, alias='multipleOf')
27 min_items: Optional[int] = Field(None, alias='minItems')
28 max_items: Optional[int] = Field(None, alias='maxItems')
29 min_length: Optional[int] = Field(None, alias='minLength')
30 max_length: Optional[int] = Field(None, alias='maxLength')
31 regex: Optional[str] = Field(None, alias='pattern')
32 unique_items: Optional[bool] = Field(None, alias='uniqueItems')
33
34
35 class DataModelField(DataModelFieldBase):
36 _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {
37 'alias',
38 'default',
39 'const',
40 'gt',
41 'ge',
42 'lt',
43 'le',
44 'multiple_of',
45 'min_items',
46 'max_items',
47 'min_length',
48 'max_length',
49 'regex',
50 }
51 _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}
52 constraints: Optional[Constraints] = None
53
54 @property
55 def method(self) -> Optional[str]:
56 return self.validator
57
58 @property
59 def validator(self) -> Optional[str]:
60 return None
61 # TODO refactor this method for other validation logic
62 # from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE
63 #
64 # return VALIDATOR_TEMPLATE.render(
65 # field_name=self.name, types=','.join([t.type_hint for t in self.data_types])
66 # )
67
68 @property
69 def field(self) -> Optional[str]:
70 """for backwards compatibility"""
71 result = str(self)
72 if self.use_default_kwarg and not result.startswith('Field(...'):
73 # Use `default=` for fields that have a default value so that type
74 # checkers using @dataclass_transform can infer the field as
75 # optional in __init__.
76 result = result.replace('Field(', 'Field(default=')
77 if result == '':
78 return None
79
80 return result
81
82 def self_reference(self) -> bool:
83 return isinstance(self.parent, BaseModel) and self.parent.reference.path in {
84 d.reference.path for d in self.data_type.all_data_types if d.reference
85 }
86
87 def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:
88 if value is None or constraint not in self._COMPARE_EXPRESSIONS:
89 return value
90
91 for data_type in self.data_type.all_data_types:
92 if data_type.type == 'int':
93 value = int(value)
94 else:
95 value = float(value)
96 break
97 return value
98
99 def _get_default_as_pydantic_model(self) -> Optional[str]:
100 for data_type in self.data_type.data_types or (self.data_type,):
101 # TODO: Check nested data_types
102 if data_type.is_dict or self.data_type.is_union:
103 # TODO: Parse Union and dict model for default
104 continue
105 elif data_type.is_list and len(data_type.data_types) == 1:
106 data_type = data_type.data_types[0]
107 data_type.alias
108 if (
109 data_type.reference
110 and isinstance(data_type.reference.source, BaseModel)
111 and isinstance(self.default, list)
112 ): # pragma: no cover
113 return f'lambda :[{data_type.alias or data_type.reference.source.class_name}.parse_obj(v) for v in {repr(self.default)}]'
114 elif data_type.reference and isinstance(
115 data_type.reference.source, BaseModel
116 ): # pragma: no cover
117 return f'lambda :{data_type.alias or data_type.reference.source.class_name}.parse_obj({repr(self.default)})'
118 return None
119
120 def __str__(self) -> str:
121 data: Dict[str, Any] = {
122 k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS
123 }
124 if self.alias:
125 data['alias'] = self.alias
126 if (
127 self.constraints is not None
128 and not self.self_reference()
129 and not self.data_type.strict
130 ):
131 data = {
132 **data,
133 **{
134 k: self._get_strict_field_constraint_value(k, v)
135 for k, v in self.constraints.dict().items()
136 },
137 }
138
139 if self.use_field_description:
140 data.pop('description', None) # Description is part of field docstring
141
142 if self.const:
143 data['const'] = True
144
145 discriminator = data.pop('discriminator', None)
146 if discriminator:
147 if isinstance(discriminator, str):
148 data['discriminator'] = discriminator
149 elif isinstance(discriminator, dict): # pragma: no cover
150 data['discriminator'] = discriminator['propertyName']
151
152 if self.required:
153 default_factory = None
154 elif self.default and 'default_factory' not in data:
155 default_factory = self._get_default_as_pydantic_model()
156 else:
157 default_factory = data.pop('default_factory', None)
158
159 field_arguments = sorted(
160 f'{k}={repr(v)}' for k, v in data.items() if v is not None
161 )
162
163 if not field_arguments and not default_factory:
164 if self.nullable and self.required:
165 return 'Field(...)' # Field() is for mypy
166 return ''
167
168 if self.use_annotated:
169 pass
170 elif self.required:
171 field_arguments = ['...', *field_arguments]
172 elif default_factory:
173 field_arguments = [f'default_factory={default_factory}', *field_arguments]
174 else:
175 field_arguments = [f'{repr(self.default)}', *field_arguments]
176
177 return f'Field({", ".join(field_arguments)})'
178
179 @property
180 def annotated(self) -> Optional[str]:
181 if not self.use_annotated or not str(self):
182 return None
183 return f'Annotated[{self.type_hint}, {str(self)}]'
184
185
186 class BaseModel(DataModel):
187 TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'
188 BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
189
190 def __init__(
191 self,
192 *,
193 reference: Reference,
194 fields: List[DataModelField],
195 decorators: Optional[List[str]] = None,
196 base_classes: Optional[List[Reference]] = None,
197 custom_base_class: Optional[str] = None,
198 custom_template_dir: Optional[Path] = None,
199 extra_template_data: Optional[DefaultDict[str, Any]] = None,
200 path: Optional[Path] = None,
201 description: Optional[str] = None,
202 default: Any = UNDEFINED,
203 nullable: bool = False,
204 ):
205 methods: List[str] = [field.method for field in fields if field.method]
206
207 super().__init__(
208 fields=fields, # type: ignore
209 reference=reference,
210 decorators=decorators,
211 base_classes=base_classes,
212 custom_base_class=custom_base_class,
213 custom_template_dir=custom_template_dir,
214 extra_template_data=extra_template_data,
215 methods=methods,
216 path=path,
217 description=description,
218 default=default,
219 nullable=nullable,
220 )
221
222 config_parameters: Dict[str, Any] = {}
223
224 additionalProperties = self.extra_template_data.get('additionalProperties')
225 allow_extra_fields = self.extra_template_data.get('allow_extra_fields')
226 if additionalProperties is not None or allow_extra_fields:
227 config_parameters['extra'] = (
228 'Extra.allow'
229 if additionalProperties or allow_extra_fields
230 else 'Extra.forbid'
231 )
232 self._additional_imports.append(IMPORT_EXTRA)
233
234 for config_attribute in 'allow_population_by_field_name', 'allow_mutation':
235 if config_attribute in self.extra_template_data:
236 config_parameters[config_attribute] = self.extra_template_data[
237 config_attribute
238 ]
239 for data_type in self.all_data_types:
240 if data_type.is_custom_type:
241 config_parameters['arbitrary_types_allowed'] = True
242 break
243
244 if isinstance(self.extra_template_data.get('config'), dict):
245 for key, value in self.extra_template_data['config'].items():
246 config_parameters[key] = value
247
248 if config_parameters:
249 from datamodel_code_generator.model.pydantic import Config
250
251 self.extra_template_data['config'] = Config.parse_obj(config_parameters)
252
253 @property
254 def imports(self) -> Tuple[Import, ...]:
255 if any(f for f in self.fields if f.field):
256 return chain_as_tuple(super().imports, (IMPORT_FIELD,))
257 return super().imports
258
259 @cached_property
260 def template_file_path(self) -> Path:
261 # This property is for Backward compatibility
262 # Current version supports '{custom_template_dir}/BaseModel.jinja'
263 # But, Future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'
264 if self._custom_template_dir is not None:
265 custom_template_file_path = (
266 self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name
267 )
268 if custom_template_file_path.exists():
269 return custom_template_file_path
270 return super().template_file_path
271
[end of datamodel_code_generator/model/pydantic/base_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datamodel_code_generator/model/pydantic/base_model.py b/datamodel_code_generator/model/pydantic/base_model.py
--- a/datamodel_code_generator/model/pydantic/base_model.py
+++ b/datamodel_code_generator/model/pydantic/base_model.py
@@ -69,7 +69,11 @@
def field(self) -> Optional[str]:
"""for backwards compatibility"""
result = str(self)
- if self.use_default_kwarg and not result.startswith('Field(...'):
+ if (
+ self.use_default_kwarg
+ and not result.startswith('Field(...')
+ and not result.startswith('Field(default_factory=')
+ ):
# Use `default=` for fields that have a default value so that type
# checkers using @dataclass_transform can infer the field as
# optional in __init__.
| {"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic/base_model.py b/datamodel_code_generator/model/pydantic/base_model.py\n--- a/datamodel_code_generator/model/pydantic/base_model.py\n+++ b/datamodel_code_generator/model/pydantic/base_model.py\n@@ -69,7 +69,11 @@\n def field(self) -> Optional[str]:\n \"\"\"for backwards compatibility\"\"\"\n result = str(self)\n- if self.use_default_kwarg and not result.startswith('Field(...'):\n+ if (\n+ self.use_default_kwarg\n+ and not result.startswith('Field(...')\n+ and not result.startswith('Field(default_factory=')\n+ ):\n # Use `default=` for fields that have a default value so that type\n # checkers using @dataclass_transform can infer the field as\n # optional in __init__.\n", "issue": "` --use-default-kwarg` breaks codegen with default_factory `Field(default=default_factory=...`\n**Describe the bug**\r\nWhen using ` --use-default-kwarg` on models with a default_factory codegen breaks: `Field(default=default_factory=...`\r\n\r\n\r\n```\r\nFile \"src\\black\\parsing.py\", line 127, in lib2to3_parse\r\nblack.parsing.InvalidInput: Cannot parse: 17:54: foo: Optional[Foo] = Field(default=default_factory=lambda :Foo.parse_obj({'foo': 'foo'}), title='Foo')\r\n```\r\n**To Reproduce**\r\n\r\nExample schema:\r\n```json\r\n\r\n{\r\n \"title\": \"Requests\",\r\n \"definitions\": {\r\n \"Foo\": {\r\n \"title\": \"Foo\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"foo\": {\r\n \"title\": \"Foo\",\r\n \"default\": \"foo\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n },\r\n \"Bar\": {\r\n \"title\": \"Bar\",\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"foo\": {\r\n \"title\": \"Foo\",\r\n \"default\": {\r\n \"foo\": \"foo\"\r\n },\r\n \"allOf\": [\r\n {\r\n \"$ref\": \"#/definitions/Foo\"\r\n }\r\n ]\r\n }\r\n }\r\n }\r\n }\r\n}\r\n\r\n```\r\n\r\nUsed commandline:\r\n```\r\n$ datamodel-codegen --input schema.json --output model.py --use-default-kwarg\r\n```\r\n\r\n**Expected behavior**\r\nSuch a model should be produced\r\n``` python\r\nclass Foo(BaseModel):\r\n foo: Optional[str] = Field(default='foo', title='Foo')\r\n\r\n\r\nclass Bar(BaseModel):\r\n foo: Optional[Foo] = Field(default_factory=lambda: Foo.parse_obj({'foo': 'foo'}), title='Foo')\r\n\r\n```\r\n\r\n**Version:**\r\n\r\n\u2705 works in 0.16.1\r\n\u274c breaks in 0.17.0 (probably introduced by https://github.com/koxudaxi/datamodel-code-generator/pull/1047 )\r\n- python 3.11.1\r\n- windows 11\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Set, Tuple, Union\n\nfrom pydantic import Field\n\nfrom datamodel_code_generator import cached_property\nfrom datamodel_code_generator.imports import Import\nfrom datamodel_code_generator.model import (\n ConstraintsBase,\n DataModel,\n DataModelFieldBase,\n)\nfrom datamodel_code_generator.model.base import UNDEFINED\nfrom datamodel_code_generator.model.pydantic.imports import IMPORT_EXTRA, IMPORT_FIELD\nfrom datamodel_code_generator.reference import Reference\nfrom datamodel_code_generator.types import chain_as_tuple\n\n\nclass Constraints(ConstraintsBase):\n gt: Optional[Union[float, int]] = Field(None, alias='exclusiveMinimum')\n ge: Optional[Union[float, int]] = Field(None, alias='minimum')\n lt: Optional[Union[float, int]] = Field(None, alias='exclusiveMaximum')\n le: Optional[Union[float, int]] = Field(None, alias='maximum')\n multiple_of: Optional[float] = Field(None, alias='multipleOf')\n min_items: 
Optional[int] = Field(None, alias='minItems')\n max_items: Optional[int] = Field(None, alias='maxItems')\n min_length: Optional[int] = Field(None, alias='minLength')\n max_length: Optional[int] = Field(None, alias='maxLength')\n regex: Optional[str] = Field(None, alias='pattern')\n unique_items: Optional[bool] = Field(None, alias='uniqueItems')\n\n\nclass DataModelField(DataModelFieldBase):\n _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {\n 'alias',\n 'default',\n 'const',\n 'gt',\n 'ge',\n 'lt',\n 'le',\n 'multiple_of',\n 'min_items',\n 'max_items',\n 'min_length',\n 'max_length',\n 'regex',\n }\n _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le'}\n constraints: Optional[Constraints] = None\n\n @property\n def method(self) -> Optional[str]:\n return self.validator\n\n @property\n def validator(self) -> Optional[str]:\n return None\n # TODO refactor this method for other validation logic\n # from datamodel_code_generator.model.pydantic import VALIDATOR_TEMPLATE\n #\n # return VALIDATOR_TEMPLATE.render(\n # field_name=self.name, types=','.join([t.type_hint for t in self.data_types])\n # )\n\n @property\n def field(self) -> Optional[str]:\n \"\"\"for backwards compatibility\"\"\"\n result = str(self)\n if self.use_default_kwarg and not result.startswith('Field(...'):\n # Use `default=` for fields that have a default value so that type\n # checkers using @dataclass_transform can infer the field as\n # optional in __init__.\n result = result.replace('Field(', 'Field(default=')\n if result == '':\n return None\n\n return result\n\n def self_reference(self) -> bool:\n return isinstance(self.parent, BaseModel) and self.parent.reference.path in {\n d.reference.path for d in self.data_type.all_data_types if d.reference\n }\n\n def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:\n if value is None or constraint not in self._COMPARE_EXPRESSIONS:\n return value\n\n for data_type in self.data_type.all_data_types:\n if data_type.type == 'int':\n value = int(value)\n else:\n value = float(value)\n break\n return value\n\n def _get_default_as_pydantic_model(self) -> Optional[str]:\n for data_type in self.data_type.data_types or (self.data_type,):\n # TODO: Check nested data_types\n if data_type.is_dict or self.data_type.is_union:\n # TODO: Parse Union and dict model for default\n continue\n elif data_type.is_list and len(data_type.data_types) == 1:\n data_type = data_type.data_types[0]\n data_type.alias\n if (\n data_type.reference\n and isinstance(data_type.reference.source, BaseModel)\n and isinstance(self.default, list)\n ): # pragma: no cover\n return f'lambda :[{data_type.alias or data_type.reference.source.class_name}.parse_obj(v) for v in {repr(self.default)}]'\n elif data_type.reference and isinstance(\n data_type.reference.source, BaseModel\n ): # pragma: no cover\n return f'lambda :{data_type.alias or data_type.reference.source.class_name}.parse_obj({repr(self.default)})'\n return None\n\n def __str__(self) -> str:\n data: Dict[str, Any] = {\n k: v for k, v in self.extras.items() if k not in self._EXCLUDE_FIELD_KEYS\n }\n if self.alias:\n data['alias'] = self.alias\n if (\n self.constraints is not None\n and not self.self_reference()\n and not self.data_type.strict\n ):\n data = {\n **data,\n **{\n k: self._get_strict_field_constraint_value(k, v)\n for k, v in self.constraints.dict().items()\n },\n }\n\n if self.use_field_description:\n data.pop('description', None) # Description is part of field docstring\n\n if self.const:\n data['const'] = True\n\n 
discriminator = data.pop('discriminator', None)\n if discriminator:\n if isinstance(discriminator, str):\n data['discriminator'] = discriminator\n elif isinstance(discriminator, dict): # pragma: no cover\n data['discriminator'] = discriminator['propertyName']\n\n if self.required:\n default_factory = None\n elif self.default and 'default_factory' not in data:\n default_factory = self._get_default_as_pydantic_model()\n else:\n default_factory = data.pop('default_factory', None)\n\n field_arguments = sorted(\n f'{k}={repr(v)}' for k, v in data.items() if v is not None\n )\n\n if not field_arguments and not default_factory:\n if self.nullable and self.required:\n return 'Field(...)' # Field() is for mypy\n return ''\n\n if self.use_annotated:\n pass\n elif self.required:\n field_arguments = ['...', *field_arguments]\n elif default_factory:\n field_arguments = [f'default_factory={default_factory}', *field_arguments]\n else:\n field_arguments = [f'{repr(self.default)}', *field_arguments]\n\n return f'Field({\", \".join(field_arguments)})'\n\n @property\n def annotated(self) -> Optional[str]:\n if not self.use_annotated or not str(self):\n return None\n return f'Annotated[{self.type_hint}, {str(self)}]'\n\n\nclass BaseModel(DataModel):\n TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic/BaseModel.jinja2'\n BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'\n\n def __init__(\n self,\n *,\n reference: Reference,\n fields: List[DataModelField],\n decorators: Optional[List[str]] = None,\n base_classes: Optional[List[Reference]] = None,\n custom_base_class: Optional[str] = None,\n custom_template_dir: Optional[Path] = None,\n extra_template_data: Optional[DefaultDict[str, Any]] = None,\n path: Optional[Path] = None,\n description: Optional[str] = None,\n default: Any = UNDEFINED,\n nullable: bool = False,\n ):\n methods: List[str] = [field.method for field in fields if field.method]\n\n super().__init__(\n fields=fields, # type: ignore\n reference=reference,\n decorators=decorators,\n base_classes=base_classes,\n custom_base_class=custom_base_class,\n custom_template_dir=custom_template_dir,\n extra_template_data=extra_template_data,\n methods=methods,\n path=path,\n description=description,\n default=default,\n nullable=nullable,\n )\n\n config_parameters: Dict[str, Any] = {}\n\n additionalProperties = self.extra_template_data.get('additionalProperties')\n allow_extra_fields = self.extra_template_data.get('allow_extra_fields')\n if additionalProperties is not None or allow_extra_fields:\n config_parameters['extra'] = (\n 'Extra.allow'\n if additionalProperties or allow_extra_fields\n else 'Extra.forbid'\n )\n self._additional_imports.append(IMPORT_EXTRA)\n\n for config_attribute in 'allow_population_by_field_name', 'allow_mutation':\n if config_attribute in self.extra_template_data:\n config_parameters[config_attribute] = self.extra_template_data[\n config_attribute\n ]\n for data_type in self.all_data_types:\n if data_type.is_custom_type:\n config_parameters['arbitrary_types_allowed'] = True\n break\n\n if isinstance(self.extra_template_data.get('config'), dict):\n for key, value in self.extra_template_data['config'].items():\n config_parameters[key] = value\n\n if config_parameters:\n from datamodel_code_generator.model.pydantic import Config\n\n self.extra_template_data['config'] = Config.parse_obj(config_parameters)\n\n @property\n def imports(self) -> Tuple[Import, ...]:\n if any(f for f in self.fields if f.field):\n return chain_as_tuple(super().imports, (IMPORT_FIELD,))\n return super().imports\n\n 
@cached_property\n def template_file_path(self) -> Path:\n # This property is for Backward compatibility\n # Current version supports '{custom_template_dir}/BaseModel.jinja'\n # But, Future version will support only '{custom_template_dir}/pydantic/BaseModel.jinja'\n if self._custom_template_dir is not None:\n custom_template_file_path = (\n self._custom_template_dir / Path(self.TEMPLATE_FILE_PATH).name\n )\n if custom_template_file_path.exists():\n return custom_template_file_path\n return super().template_file_path\n", "path": "datamodel_code_generator/model/pydantic/base_model.py"}]} | 3,892 | 183 |
gh_patches_debug_5910 | rasdani/github-patches | git_diff | pantsbuild__pants-18673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JVM resource JAR creation is broken on macOS by a variation in the `touch` command.
BSD `touch` uses `-t` to set the timestamp, whereas GNU `touch` uses `-d`. We use / assume the GNU binary, as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330
As discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U
Where the error looks like:
```
12:24:56.74 [ERROR] 1 Exception encountered:
Engine traceback:
in select
in pants.core.goals.check.check
in pants.backend.scala.goals.check.scalac_check (scalac)
in pants.backend.scala.compile.scalac.compile_scala_source
in pants.jvm.compile.compile_classpath_entries
in pants.jvm.resources.assemble_resources_jar
in pants.engine.process.fallible_to_exec_result_or_raise
Traceback (most recent call last):
File "/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py", line 275, in fallible_to_exec_result_or_raise
raise ProcessExecutionFailure(
pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.
stdout:
stderr:
/usr/bin/touch: illegal option -- d
usage:
touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
```
It appears #16950 was cherry-picked back to 2.13.1 and 2.14.0, so Pants has been broken for JVM resource JARs since 2.13.1.
</issue>
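As a quick illustration of the portability gap, the sketch below stamps a throwaway file with the fixed 1980-01-01 timestamp, picking whichever flag the local `touch` flavour accepts; the platform check and temp file are illustrative only, not Pants code:

```python
import os
import platform
import subprocess
import tempfile

# Throwaway file to stamp, so the sketch runs anywhere.
path = tempfile.NamedTemporaryFile(delete=False).name

if platform.system() == "Darwin":
    # BSD touch (macOS /usr/bin/touch): no -d; uses -t [[CC]YY]MMDDhhmm[.SS]
    cmd = ["touch", "-t", "198001010000.00", path]
else:
    # GNU coreutils touch (Linux): accepts an ISO-8601 date via -d
    cmd = ["touch", "-d", "1980-01-01T00:00:00Z", path]

subprocess.run(cmd, check=True, env={**os.environ, "TZ": "UTC"})
print(os.path.getmtime(path))  # ~315532800.0, i.e. 1980-01-01T00:00:00Z
```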
<code>
[start of src/python/pants/jvm/resources.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import itertools
5 import logging
6 import shlex
7 from itertools import chain
8 from pathlib import Path
9
10 from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet
11 from pants.core.util_rules import stripped_source_files
12 from pants.core.util_rules.source_files import SourceFilesRequest
13 from pants.core.util_rules.stripped_source_files import StrippedSourceFiles
14 from pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary
15 from pants.engine.fs import Digest, MergeDigests
16 from pants.engine.internals.selectors import MultiGet
17 from pants.engine.process import Process, ProcessResult
18 from pants.engine.rules import Get, collect_rules, rule
19 from pants.engine.target import SourcesField
20 from pants.engine.unions import UnionRule
21 from pants.jvm import compile
22 from pants.jvm.compile import (
23 ClasspathDependenciesRequest,
24 ClasspathEntry,
25 ClasspathEntryRequest,
26 ClasspathEntryRequests,
27 CompileResult,
28 FallibleClasspathEntries,
29 FallibleClasspathEntry,
30 )
31 from pants.jvm.subsystems import JvmSubsystem
32 from pants.util.logging import LogLevel
33
34 logger = logging.getLogger(__name__)
35
36
37 class JvmResourcesRequest(ClasspathEntryRequest):
38 field_sets = (
39 ResourcesFieldSet,
40 ResourcesGeneratorFieldSet,
41 )
42
43
44 @rule(desc="Assemble resources")
45 async def assemble_resources_jar(
46 zip: ZipBinary,
47 bash: BashBinary,
48 touch: TouchBinary,
49 jvm: JvmSubsystem,
50 request: JvmResourcesRequest,
51 ) -> FallibleClasspathEntry:
52 # Request the component's direct dependency classpath, and additionally any prerequisite.
53 # Filter out any dependencies that are generated by our current target so that each resource
54 # only appears in a single input JAR.
55 # NOTE: Generated dependencies will have the same dependencies as the current target, so we
56 # don't need to inspect those dependencies.
57 optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]
58 fallibles = await MultiGet(
59 Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),
60 Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),
61 )
62 direct_dependency_classpath_entries = FallibleClasspathEntries(
63 itertools.chain(*fallibles)
64 ).if_all_succeeded()
65
66 if direct_dependency_classpath_entries is None:
67 return FallibleClasspathEntry(
68 description=str(request.component),
69 result=CompileResult.DEPENDENCY_FAILED,
70 output=None,
71 exit_code=1,
72 )
73
74 source_files = await Get(
75 StrippedSourceFiles,
76 SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),
77 )
78
79 output_filename = f"{request.component.representative.address.path_safe_spec}.resources.jar"
80 output_files = [output_filename]
81
82 # #16231: Valid JAR files need the directories of each resource file as well as the files
83 # themselves.
84
85 paths = {Path(filename) for filename in source_files.snapshot.files}
86 directories = {parent for path in paths for parent in path.parents}
87 input_files = {str(path) for path in chain(paths, directories)}
88
89 resources_jar_input_digest = source_files.snapshot.digest
90
91 input_filenames = " ".join(shlex.quote(file) for file in sorted(input_files))
92
93 resources_jar_result = await Get(
94 ProcessResult,
95 Process(
96 argv=[
97 bash.path,
98 "-c",
99 " ".join(
100 [
101 touch.path,
102 "-d 1980-01-01T00:00:00Z",
103 input_filenames,
104 "&&",
105 "TZ=UTC",
106 zip.path,
107 "-oX",
108 output_filename,
109 input_filenames,
110 ]
111 ),
112 ],
113 description=f"Build resources JAR for {request.component}",
114 input_digest=resources_jar_input_digest,
115 output_files=output_files,
116 level=LogLevel.DEBUG,
117 ),
118 )
119
120 output_digest = resources_jar_result.output_digest
121 cpe = ClasspathEntry(output_digest, output_files, [])
122
123 merged_cpe_digest = await Get(
124 Digest,
125 MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),
126 )
127
128 merged_cpe = ClasspathEntry.merge(
129 digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]
130 )
131
132 return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)
133
134
135 def rules():
136 return [
137 *collect_rules(),
138 *compile.rules(),
139 *stripped_source_files.rules(),
140 UnionRule(ClasspathEntryRequest, JvmResourcesRequest),
141 ]
142
[end of src/python/pants/jvm/resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py
--- a/src/python/pants/jvm/resources.py
+++ b/src/python/pants/jvm/resources.py
@@ -98,8 +98,9 @@
"-c",
" ".join(
[
+ "TZ=UTC",
touch.path,
- "-d 1980-01-01T00:00:00Z",
+ "-t 198001010000.00",
input_filenames,
"&&",
"TZ=UTC",
| {"golden_diff": "diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py\n--- a/src/python/pants/jvm/resources.py\n+++ b/src/python/pants/jvm/resources.py\n@@ -98,8 +98,9 @@\n \"-c\",\n \" \".join(\n [\n+ \"TZ=UTC\",\n touch.path,\n- \"-d 1980-01-01T00:00:00Z\",\n+ \"-t 198001010000.00\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n", "issue": "JVM resource jar creation is broken on Mac by variation in `touch` command.\nBSD touch uses `-t` to set the timestamp whereas GNU touch uses `-d`. We use / assume the GNU binary as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330\n\nAs discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U\n\nWhere the error looks like:\n```\n 12:24:56.74 [ERROR] 1 Exception encountered:\n \n Engine traceback:\n in select\n in pants.core.goals.check.check\n in pants.backend.scala.goals.check.scalac_check (scalac)\n in pants.backend.scala.compile.scalac.compile_scala_source\n in pants.jvm.compile.compile_classpath_entries\n in pants.jvm.resources.assemble_resources_jar\n in pants.engine.process.fallible_to_exec_result_or_raise\n Traceback (most recent call last):\n File \"/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py\", line 275, in fallible_to_exec_result_or_raise\n raise ProcessExecutionFailure(\n pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.\n stdout:\n \n stderr:\n /usr/bin/touch: illegal option -- d\n usage:\n touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...\n```\n\nIt appears #16950 was cherry picked back to 2.13.1 and 2.14.0; so Pants has been broken for JVM resource jars since 2.13.1.\n\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport itertools\nimport logging\nimport shlex\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary\nfrom pants.engine.fs import Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\nfrom pants.jvm.subsystems import JvmSubsystem\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n bash: BashBinary,\n touch: TouchBinary,\n jvm: JvmSubsystem,\n request: JvmResourcesRequest,\n) -> 
FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n # #16231: Valid JAR files need the directories of each resource file as well as the files\n # themselves.\n\n paths = {Path(filename) for filename in source_files.snapshot.files}\n directories = {parent for path in paths for parent in path.parents}\n input_files = {str(path) for path in chain(paths, directories)}\n\n resources_jar_input_digest = source_files.snapshot.digest\n\n input_filenames = \" \".join(shlex.quote(file) for file in sorted(input_files))\n\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n bash.path,\n \"-c\",\n \" \".join(\n [\n touch.path,\n \"-d 1980-01-01T00:00:00Z\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n zip.path,\n \"-oX\",\n output_filename,\n input_filenames,\n ]\n ),\n ],\n description=f\"Build resources JAR for {request.component}\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n level=LogLevel.DEBUG,\n ),\n )\n\n output_digest = resources_jar_result.output_digest\n cpe = ClasspathEntry(output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}]} | 2,374 | 139 |
gh_patches_debug_3096 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-834 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump mypy from 0.720 to 0.730
Bumps [mypy](https://github.com/python/mypy) from 0.720 to 0.730.
<details>
<summary>Commits</summary>
- [`7ad7f8b`](https://github.com/python/mypy/commit/7ad7f8bbe61e5e67aa7fd6f2efe280931dd2f620) Bump version to 0.730
- [`90776b8`](https://github.com/python/mypy/commit/90776b8b49dee8b5c84a7e90c1b563f2fd88f4f1) Document error codes ([#7451](https://github-redirect.dependabot.com/python/mypy/issues/7451))
- [`99475b2`](https://github.com/python/mypy/commit/99475b21705816a48a5f2cc0380907d21a93442f) Don't serialize redefined symbol nodes ([#7499](https://github-redirect.dependabot.com/python/mypy/issues/7499))
- [`8c17dd8`](https://github.com/python/mypy/commit/8c17dd863121138b20d92184786ed3777d4c574c) Don't compile mypyc/lib-rt/setup.py ([#7497](https://github-redirect.dependabot.com/python/mypy/issues/7497))
- [`41db9a0`](https://github.com/python/mypy/commit/41db9a0c570a3e190f3749cf0b681a31823dc0f7) Pass is_classmethod to bind_self() also for superype ([#7491](https://github-redirect.dependabot.com/python/mypy/issues/7491))
- [`2bdbacf`](https://github.com/python/mypy/commit/2bdbacf32a2b5201200dc2ed8ef5c7175b8de739) Attempt to fix travis on Python 3.8 beta ([#7492](https://github-redirect.dependabot.com/python/mypy/issues/7492))
- [`09c243d`](https://github.com/python/mypy/commit/09c243dcc12935b989367f31d1d25d7fd0ec634c) Point error to incompatible argument instead of call expression ([#7470](https://github-redirect.dependabot.com/python/mypy/issues/7470))
- [`88e2b67`](https://github.com/python/mypy/commit/88e2b67c4c2e8590dbee4aec272b3727b9566f0b) Support pickling of extension classes ([#7481](https://github-redirect.dependabot.com/python/mypy/issues/7481))
- [`9f1b8e9`](https://github.com/python/mypy/commit/9f1b8e930b812385fc866b3145785f7bb59361ef) Fix missing quotes in sample python snippet ([#7487](https://github-redirect.dependabot.com/python/mypy/issues/7487))
- [`37e5be1`](https://github.com/python/mypy/commit/37e5be10c845be3c036721c9462ef9cd90469236) Add http:// in front of the docs url for strict-optional ([#7485](https://github-redirect.dependabot.com/python/mypy/issues/7485))
- Additional commits viewable in [compare view](https://github.com/python/mypy/compare/v0.720...v0.730)
</details>
<br />
[](https://dependabot.com/compatibility-score.html?dependency-name=mypy&package-manager=pip&previous-version=0.720&new-version=0.730)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
- `@dependabot badge me` will comment on this PR with code to add a "Dependabot enabled" badge to your readme
Additionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):
- Update frequency (including time of day and day of week)
- Pull request limits (per update run and/or open at any time)
- Automerge options (never/patch/minor, and dev/runtime dependencies)
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)
Finally, you can contact us by mentioning @dependabot.
</details>
</issue>
<code>
[start of wemake_python_styleguide/compat/nodes.py]
1 # -*- coding: utf-8 -*-
2
3 import ast
4
5 try: # pragma: no cover
6 from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113
7 except ImportError: # pragma: no cover
8 class Constant(ast.AST): # type: ignore # noqa: WPS440
9 """
10 Fallback for pythons that do not have ``ast.Constant``.
11
12 In this case ``Constant`` is replaced with:
13
14 - ``ast.Num``
15 - ``ast.Str`` and ``ast.Bytes``
16 - ``ast.NameConstant``
17
18 Only ``python3.8+`` has this node.
19 """
20
[end of wemake_python_styleguide/compat/nodes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wemake_python_styleguide/compat/nodes.py b/wemake_python_styleguide/compat/nodes.py
--- a/wemake_python_styleguide/compat/nodes.py
+++ b/wemake_python_styleguide/compat/nodes.py
@@ -3,7 +3,7 @@
import ast
try: # pragma: no cover
- from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113
+ from ast import Constant as Constant # noqa: WPS433, WPS113
except ImportError: # pragma: no cover
class Constant(ast.AST): # type: ignore # noqa: WPS440
"""
| {"golden_diff": "diff --git a/wemake_python_styleguide/compat/nodes.py b/wemake_python_styleguide/compat/nodes.py\n--- a/wemake_python_styleguide/compat/nodes.py\n+++ b/wemake_python_styleguide/compat/nodes.py\n@@ -3,7 +3,7 @@\n import ast\n \n try: # pragma: no cover\n- from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113\n+ from ast import Constant as Constant # noqa: WPS433, WPS113\n except ImportError: # pragma: no cover\n class Constant(ast.AST): # type: ignore # noqa: WPS440\n \"\"\"\n", "issue": "Bump mypy from 0.720 to 0.730\nBumps [mypy](https://github.com/python/mypy) from 0.720 to 0.730.\n<details>\n<summary>Commits</summary>\n\n- [`7ad7f8b`](https://github.com/python/mypy/commit/7ad7f8bbe61e5e67aa7fd6f2efe280931dd2f620) Bump version to 0.730\n- [`90776b8`](https://github.com/python/mypy/commit/90776b8b49dee8b5c84a7e90c1b563f2fd88f4f1) Document error codes ([#7451](https://github-redirect.dependabot.com/python/mypy/issues/7451))\n- [`99475b2`](https://github.com/python/mypy/commit/99475b21705816a48a5f2cc0380907d21a93442f) Don't serialize redefined symbol nodes ([#7499](https://github-redirect.dependabot.com/python/mypy/issues/7499))\n- [`8c17dd8`](https://github.com/python/mypy/commit/8c17dd863121138b20d92184786ed3777d4c574c) Don't compile mypyc/lib-rt/setup.py ([#7497](https://github-redirect.dependabot.com/python/mypy/issues/7497))\n- [`41db9a0`](https://github.com/python/mypy/commit/41db9a0c570a3e190f3749cf0b681a31823dc0f7) Pass is_classmethod to bind_self() also for superype ([#7491](https://github-redirect.dependabot.com/python/mypy/issues/7491))\n- [`2bdbacf`](https://github.com/python/mypy/commit/2bdbacf32a2b5201200dc2ed8ef5c7175b8de739) Attempt to fix travis on Python 3.8 beta ([#7492](https://github-redirect.dependabot.com/python/mypy/issues/7492))\n- [`09c243d`](https://github.com/python/mypy/commit/09c243dcc12935b989367f31d1d25d7fd0ec634c) Point error to incompatible argument instead of call expression ([#7470](https://github-redirect.dependabot.com/python/mypy/issues/7470))\n- [`88e2b67`](https://github.com/python/mypy/commit/88e2b67c4c2e8590dbee4aec272b3727b9566f0b) Support pickling of extension classes ([#7481](https://github-redirect.dependabot.com/python/mypy/issues/7481))\n- [`9f1b8e9`](https://github.com/python/mypy/commit/9f1b8e930b812385fc866b3145785f7bb59361ef) Fix missing quotes in sample python snippet ([#7487](https://github-redirect.dependabot.com/python/mypy/issues/7487))\n- [`37e5be1`](https://github.com/python/mypy/commit/37e5be10c845be3c036721c9462ef9cd90469236) Add http:// in front of the docs url for strict-optional ([#7485](https://github-redirect.dependabot.com/python/mypy/issues/7485))\n- Additional commits viewable in [compare view](https://github.com/python/mypy/compare/v0.720...v0.730)\n</details>\n<br />\n\n[](https://dependabot.com/compatibility-score.html?dependency-name=mypy&package-manager=pip&previous-version=0.720&new-version=0.730)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n<details>\n<summary>Dependabot commands and options</summary>\n<br />\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language\n- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language\n- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language\n- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language\n- `@dependabot badge me` will comment on this PR with code to add a \"Dependabot enabled\" badge to your readme\n\nAdditionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):\n- Update frequency (including time of day and day of week)\n- Pull request limits (per update run and/or open at any time)\n- Automerge options (never/patch/minor, and dev/runtime dependencies)\n- Out-of-range updates (receive only lockfile updates, if desired)\n- Security updates (receive only security updates, if desired)\n\nFinally, you can contact us by mentioning @dependabot.\n\n</details>\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\n\ntry: # pragma: no cover\n from ast import Constant as Constant # type: ignore # noqa: WPS433, WPS113\nexcept ImportError: # pragma: no cover\n class Constant(ast.AST): # type: ignore # noqa: WPS440\n \"\"\"\n Fallback for pythons that do not have ``ast.Constant``.\n\n In this case ``Constant`` is replaced with:\n\n - ``ast.Num``\n - ``ast.Str`` and ``ast.Bytes``\n - ``ast.NameConstant``\n\n Only ``python3.8+`` has this node.\n \"\"\"\n", "path": "wemake_python_styleguide/compat/nodes.py"}]} | 2,424 | 164 |
gh_patches_debug_35493 | rasdani/github-patches | git_diff | rasterio__rasterio-287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reprojection Example/Documentation
I was having some trouble following the [reprojection](https://github.com/mapbox/rasterio/blob/master/examples/reproject.py) example; the Affine parameters for `dst_transform` aren't referenced anywhere before they are applied:
https://github.com/mapbox/rasterio/blob/master/examples/reproject.py#L29
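For reference, here is what I worked out those numbers to mean (my own reading of the GDAL parameter order, so double-check): they are the geotransform of the output grid, i.e. an upper-left corner at (-237481.5, 237536.4) in EPSG:3857 with 425 m square pixels. Building `dst_transform` the same way the source transform is built a few lines earlier would make that obvious:

```python
from rasterio import Affine as A

# upper-left corner (west, north) and pixel size in metres for the output grid
west, north, pixel = -237481.5, 237536.4, 425.0
dst_transform = A.translation(west, north) * A.scale(pixel, -pixel)

# ...which is the same matrix as the opaque call currently in the example:
# A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)
```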
</issue>
<code>
[start of rasterio/transform.py]
1 import warnings
2
3 from affine import Affine
4
5 IDENTITY = Affine.identity()
6
7
8 def tastes_like_gdal(seq):
9 """Return True if `seq` matches the GDAL geotransform pattern."""
10 return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0
11
12
13 def guard_transform(transform):
14 """Return an Affine transformation instance"""
15 if not isinstance(transform, Affine):
16 if tastes_like_gdal(transform):
17 warnings.warn(
18 "GDAL-style transforms are deprecated and will not "
19 "be supported in Rasterio 1.0.",
20 FutureWarning,
21 stacklevel=2)
22 transform = Affine.from_gdal(*transform)
23 else:
24 transform = Affine(*transform)
25 return transform
26
[end of rasterio/transform.py]
[start of examples/reproject.py]
1 import os
2 import shutil
3 import subprocess
4 import tempfile
5
6 import numpy
7 import rasterio
8 from rasterio import Affine as A
9 from rasterio.warp import reproject, RESAMPLING
10
11 tempdir = '/tmp'
12 tiffname = os.path.join(tempdir, 'example.tif')
13
14 with rasterio.drivers():
15
16 # Consider a 512 x 512 raster centered on 0 degrees E and 0 degrees N
17 # with each pixel covering 15".
18 rows, cols = src_shape = (512, 512)
19 dpp = 1.0/240 # decimal degrees per pixel
20 # The following is equivalent to
21 # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).
22 src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)
23 src_crs = {'init': 'EPSG:4326'}
24 source = numpy.ones(src_shape, numpy.uint8)*255
25
26 # Prepare to reproject this rasters to a 1024 x 1024 dataset in
27 # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.
28 dst_shape = (1024, 1024)
29 dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)
30 dst_transform = dst_transform.to_gdal()
31 dst_crs = {'init': 'EPSG:3857'}
32 destination = numpy.zeros(dst_shape, numpy.uint8)
33
34 reproject(
35 source,
36 destination,
37 src_transform=src_transform,
38 src_crs=src_crs,
39 dst_transform=dst_transform,
40 dst_crs=dst_crs,
41 resampling=RESAMPLING.nearest)
42
43 # Assert that the destination is only partly filled.
44 assert destination.any()
45 assert not destination.all()
46
47 # Write it out to a file.
48 with rasterio.open(
49 tiffname,
50 'w',
51 driver='GTiff',
52 width=dst_shape[1],
53 height=dst_shape[0],
54 count=1,
55 dtype=numpy.uint8,
56 nodata=0,
57 transform=dst_transform,
58 crs=dst_crs) as dst:
59 dst.write_band(1, destination)
60
61 info = subprocess.call(['open', tiffname])
62
63
[end of examples/reproject.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/reproject.py b/examples/reproject.py
--- a/examples/reproject.py
+++ b/examples/reproject.py
@@ -5,7 +5,7 @@
import numpy
import rasterio
-from rasterio import Affine as A
+from rasterio import transform
from rasterio.warp import reproject, RESAMPLING
tempdir = '/tmp'
@@ -17,17 +17,15 @@
# with each pixel covering 15".
rows, cols = src_shape = (512, 512)
dpp = 1.0/240 # decimal degrees per pixel
- # The following is equivalent to
- # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).
- src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)
+ west, south, east, north = -cols*dpp/2, -rows*dpp/2, cols*dpp/2, rows*dpp/2
+ src_transform = transform.from_bounds(west, south, east, north, cols, rows)
src_crs = {'init': 'EPSG:4326'}
source = numpy.ones(src_shape, numpy.uint8)*255
# Prepare to reproject this rasters to a 1024 x 1024 dataset in
- # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.
+ # Web Mercator (EPSG:3857) with origin at -237481.5, 237536.4.
dst_shape = (1024, 1024)
- dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)
- dst_transform = dst_transform.to_gdal()
+ dst_transform = transform.from_origin(-237481.5, 237536.4, 425.0, 425.0)
dst_crs = {'init': 'EPSG:3857'}
destination = numpy.zeros(dst_shape, numpy.uint8)
@@ -59,4 +57,3 @@
dst.write_band(1, destination)
info = subprocess.call(['open', tiffname])
-
diff --git a/rasterio/transform.py b/rasterio/transform.py
--- a/rasterio/transform.py
+++ b/rasterio/transform.py
@@ -23,3 +23,18 @@
else:
transform = Affine(*transform)
return transform
+
+
+def from_origin(west, north, xsize, ysize):
+ """Return an Affine transformation for a georeferenced raster given
+ the coordinates of its upper left corner `west`, `north` and pixel
+ sizes `xsize`, `ysize`."""
+ return Affine.translation(west, north) * Affine.scale(xsize, -ysize)
+
+
+def from_bounds(west, south, east, north, width, height):
+ """Return an Affine transformation for a georeferenced raster given
+ its bounds `west`, `south`, `east`, `north` and its `width` and
+ `height` in number of pixels."""
+ return Affine.translation(west, north) * Affine.scale(
+ (east - west)/width, (south - north)/height)
| {"golden_diff": "diff --git a/examples/reproject.py b/examples/reproject.py\n--- a/examples/reproject.py\n+++ b/examples/reproject.py\n@@ -5,7 +5,7 @@\n \n import numpy\n import rasterio\n-from rasterio import Affine as A\n+from rasterio import transform\n from rasterio.warp import reproject, RESAMPLING\n \n tempdir = '/tmp'\n@@ -17,17 +17,15 @@\n # with each pixel covering 15\".\n rows, cols = src_shape = (512, 512)\n dpp = 1.0/240 # decimal degrees per pixel\n- # The following is equivalent to \n- # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).\n- src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)\n+ west, south, east, north = -cols*dpp/2, -rows*dpp/2, cols*dpp/2, rows*dpp/2\n+ src_transform = transform.from_bounds(west, south, east, north, cols, rows)\n src_crs = {'init': 'EPSG:4326'}\n source = numpy.ones(src_shape, numpy.uint8)*255\n \n # Prepare to reproject this rasters to a 1024 x 1024 dataset in\n- # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.\n+ # Web Mercator (EPSG:3857) with origin at -237481.5, 237536.4.\n dst_shape = (1024, 1024)\n- dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)\n- dst_transform = dst_transform.to_gdal()\n+ dst_transform = transform.from_origin(-237481.5, 237536.4, 425.0, 425.0)\n dst_crs = {'init': 'EPSG:3857'}\n destination = numpy.zeros(dst_shape, numpy.uint8)\n \n@@ -59,4 +57,3 @@\n dst.write_band(1, destination)\n \n info = subprocess.call(['open', tiffname])\n-\ndiff --git a/rasterio/transform.py b/rasterio/transform.py\n--- a/rasterio/transform.py\n+++ b/rasterio/transform.py\n@@ -23,3 +23,18 @@\n else:\n transform = Affine(*transform)\n return transform\n+\n+\n+def from_origin(west, north, xsize, ysize):\n+ \"\"\"Return an Affine transformation for a georeferenced raster given\n+ the coordinates of its upper left corner `west`, `north` and pixel\n+ sizes `xsize`, `ysize`.\"\"\"\n+ return Affine.translation(west, north) * Affine.scale(xsize, -ysize)\n+\n+\n+def from_bounds(west, south, east, north, width, height):\n+ \"\"\"Return an Affine transformation for a georeferenced raster given\n+ its bounds `west`, `south`, `east`, `north` and its `width` and\n+ `height` in number of pixels.\"\"\"\n+ return Affine.translation(west, north) * Affine.scale(\n+ (east - west)/width, (south - north)/height)\n", "issue": "Reprojection Example/Documentation\nI was having some trouble following the [reprojection](https://github.com/mapbox/rasterio/blob/master/examples/reproject.py) example; the Affine parameters for `dst_transform` aren't referenced anywhere before they are applied:\n\nhttps://github.com/mapbox/rasterio/blob/master/examples/reproject.py#L29\n\n", "before_files": [{"content": "import warnings\n\nfrom affine import Affine\n\nIDENTITY = Affine.identity()\n\n\ndef tastes_like_gdal(seq):\n \"\"\"Return True if `seq` matches the GDAL geotransform pattern.\"\"\"\n return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0\n\n\ndef guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n if not isinstance(transform, Affine):\n if tastes_like_gdal(transform):\n warnings.warn(\n \"GDAL-style transforms are deprecated and will not \"\n \"be supported in Rasterio 1.0.\",\n FutureWarning,\n stacklevel=2)\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n return transform\n", "path": "rasterio/transform.py"}, {"content": "import os\nimport shutil\nimport subprocess\nimport tempfile\n\nimport numpy\nimport rasterio\nfrom rasterio 
import Affine as A\nfrom rasterio.warp import reproject, RESAMPLING\n\ntempdir = '/tmp'\ntiffname = os.path.join(tempdir, 'example.tif')\n\nwith rasterio.drivers():\n\n # Consider a 512 x 512 raster centered on 0 degrees E and 0 degrees N\n # with each pixel covering 15\".\n rows, cols = src_shape = (512, 512)\n dpp = 1.0/240 # decimal degrees per pixel\n # The following is equivalent to \n # A(dpp, 0, -cols*dpp/2, 0, -dpp, rows*dpp/2).\n src_transform = A.translation(-cols*dpp/2, rows*dpp/2) * A.scale(dpp, -dpp)\n src_crs = {'init': 'EPSG:4326'}\n source = numpy.ones(src_shape, numpy.uint8)*255\n\n # Prepare to reproject this rasters to a 1024 x 1024 dataset in\n # Web Mercator (EPSG:3857) with origin at -8928592, 2999585.\n dst_shape = (1024, 1024)\n dst_transform = A.from_gdal(-237481.5, 425.0, 0.0, 237536.4, 0.0, -425.0)\n dst_transform = dst_transform.to_gdal()\n dst_crs = {'init': 'EPSG:3857'}\n destination = numpy.zeros(dst_shape, numpy.uint8)\n\n reproject(\n source, \n destination, \n src_transform=src_transform,\n src_crs=src_crs,\n dst_transform=dst_transform,\n dst_crs=dst_crs,\n resampling=RESAMPLING.nearest)\n\n # Assert that the destination is only partly filled.\n assert destination.any()\n assert not destination.all()\n\n # Write it out to a file.\n with rasterio.open(\n tiffname, \n 'w',\n driver='GTiff',\n width=dst_shape[1],\n height=dst_shape[0],\n count=1,\n dtype=numpy.uint8,\n nodata=0,\n transform=dst_transform,\n crs=dst_crs) as dst:\n dst.write_band(1, destination)\n\ninfo = subprocess.call(['open', tiffname])\n\n", "path": "examples/reproject.py"}]} | 1,552 | 834 |
gh_patches_debug_1604 | rasdani/github-patches | git_diff | swcarpentry__python-novice-inflammation-946 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Code provided for students contains Python code not compatible with Python 3
At least one file in the `code` directory, e.g. `gen_inflammation.py`, fails when run with Python 3. The [problem is the "division" not giving an integer](https://github.com/swcarpentry/python-novice-inflammation/blob/11643f14d31726f2f60873c4ca1230fff0bbf108/code/gen_inflammation.py#L19). It needs to be changed to
```diff
- upper / 4
+ upper // 4
```
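For anyone checking this quickly, here is a minimal illustration of the difference (my own snippet, not part of the lesson code):

```python
import random

upper = 10
print(upper / 4)   # 2.5 -> in Python 3, "/" is true division and always returns a float
print(upper // 4)  # 2   -> "//" is floor division and keeps an int

# random.randint() requires integer bounds, so the float produced by "/" makes
# gen_inflammation.py fail under Python 3 (under Python 2, 10 / 4 == 2, so it worked).
print(random.randint(upper // 4, upper))  # fine
# random.randint(upper / 4, upper)        # raises (ValueError/TypeError, depending on version)
```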
This was spotted by a student trying to check their installation and running different files.
Other files may have similar errors. I'd suggest running and testing via CI everything we provide to the students.
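Something as simple as the following, wired into the repository's CI, would have caught this. Rough sketch only (untested; scripts that expect command-line arguments would need to be skipped or given fixtures):

```python
# smoke_test_code.py -- try to run every provided script under the current Python
import pathlib
import subprocess
import sys

failures = []
for script in sorted(pathlib.Path("code").glob("*.py")):
    result = subprocess.run([sys.executable, str(script)], capture_output=True, text=True)
    if result.returncode != 0:
        failures.append(script.name)

if failures:
    sys.exit("scripts failing under %s: %s" % (sys.version.split()[0], ", ".join(failures)))
```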
</issue>
<code>
[start of code/gen_inflammation.py]
1 #!/usr/bin/env python
2
3 """
4 Generate pseudo-random patient inflammation data for use in Python lessons.
5 """
6
7 import random
8
9 n_patients = 60
10 n_days = 40
11 n_range = 20
12
13 middle = n_days / 2
14
15 for p in range(n_patients):
16 vals = []
17 for d in range(n_days):
18 upper = max(n_range - abs(d - middle), 0)
19 vals.append(random.randint(upper/4, upper))
20 print(','.join([str(v) for v in vals]))
21
[end of code/gen_inflammation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/code/gen_inflammation.py b/code/gen_inflammation.py
--- a/code/gen_inflammation.py
+++ b/code/gen_inflammation.py
@@ -16,5 +16,5 @@
vals = []
for d in range(n_days):
upper = max(n_range - abs(d - middle), 0)
- vals.append(random.randint(upper/4, upper))
+ vals.append(random.randint(upper//4, upper))
print(','.join([str(v) for v in vals]))
| {"golden_diff": "diff --git a/code/gen_inflammation.py b/code/gen_inflammation.py\n--- a/code/gen_inflammation.py\n+++ b/code/gen_inflammation.py\n@@ -16,5 +16,5 @@\n vals = []\n for d in range(n_days):\n upper = max(n_range - abs(d - middle), 0)\n- vals.append(random.randint(upper/4, upper))\n+ vals.append(random.randint(upper//4, upper))\n print(','.join([str(v) for v in vals]))\n", "issue": "Code provided for students contain python code not compatible with python 3\nAt least one file in the `code` directory, e.g., `gen_inflammation.py` fails when running it with python 3. The [problem is the \"division\" not giving an integer](https://github.com/swcarpentry/python-novice-inflammation/blob/11643f14d31726f2f60873c4ca1230fff0bbf108/code/gen_inflammation.py#L19). It needs to be changed to\r\n```diff\r\n- upper / 4\r\n+ upper // 4\r\n```\r\n\r\nThis was spotted by a student trying to check their installation and running different files.\r\nOther files may have similar errors. I'd suggest running and testing via CI everything we provide to the students.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nGenerate pseudo-random patient inflammation data for use in Python lessons.\n\"\"\"\n\nimport random\n\nn_patients = 60\nn_days = 40\nn_range = 20\n\nmiddle = n_days / 2\n\nfor p in range(n_patients):\n vals = []\n for d in range(n_days):\n upper = max(n_range - abs(d - middle), 0)\n vals.append(random.randint(upper/4, upper))\n print(','.join([str(v) for v in vals]))\n", "path": "code/gen_inflammation.py"}]} | 875 | 115 |
gh_patches_debug_19894 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2891 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show OrganisationIndicatorLabels belonging to all partners
The organisation with indicator labels defined may not always be the primary organisation. Currently, as an interim solution, we display labels defined by all organisations.
When there are multiple organisations with lots of labels (and possibly some of them very similar), this can turn out to be a problem.
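One way to express that, as a rough, untested sketch reusing the names already present in the project editor's `choices()` filter: collect the labels of every partner organisation of the project and de-duplicate them, instead of restricting the queryset to `project.primary_organisation`:

```python
# inside the IndicatorLabel branch of choices() in project_editor.py
organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(
    organisation__in=project.all_partners()  # assumes Project.all_partners() returns a queryset
).distinct()
return choices_and_ids(organisation_indicator_labels, 'id', 'label')
```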
</issue>
<code>
[start of akvo/rsr/templatetags/project_editor.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the
6 Akvo RSR module. For additional details on the GNU license please
7 see < http://www.gnu.org/licenses/agpl.html >.
8 """
9
10 from django import template
11 from django.db import models
12 from django.db.models import get_model, QuerySet
13
14 from akvo.rsr.models import ProjectEditorValidation
15
16 register = template.Library()
17 VALIDATION_CACHE = {}
18
19
20 def retrieve_model(obj):
21 """
22 Retrieves the model from 'obj', which can be either a Django Object or a string.
23 """
24 return get_model('rsr', obj.split('.')[0]) if isinstance(obj, basestring) else type(obj)
25
26
27 def retrieve_id(obj):
28 """
29 Retrieves the id from 'obj', which can be either a Django Object or a string.
30 """
31 if not isinstance(obj, basestring):
32 try:
33 return obj.id
34 except AttributeError:
35 return obj.project.id
36 else:
37 return "{0}_{1}".format(obj.split('.')[1], "new-0")
38
39
40 def get_validations():
41 """ Populate the VALIDATION_CACHE and return it."""
42
43 if VALIDATION_CACHE.get('CACHE_VALID', False):
44 return VALIDATION_CACHE
45
46 fields = ('validation', 'action', 'validation_set__pk')
47 for name, action, validation_set in ProjectEditorValidation.objects.values_list(*fields):
48 if action == ProjectEditorValidation.MANDATORY_ACTION:
49 action = 'mandatory'
50 elif action == ProjectEditorValidation.HIDDEN_ACTION:
51 action = 'hidden'
52
53 names = name.split('||')
54 for name in names:
55 indication = VALIDATION_CACHE.get(name, '')
56 indication += ' {0}-{1} '.format(action, validation_set)
57
58 if action == 'mandatory' and len(names) > 1:
59 other_names = set(names) - set([name])
60 for or_name in other_names:
61 indication += 'mandatory-{0}-or-{1} '.format(
62 validation_set, or_name.split('.')[1]
63 )
64
65 VALIDATION_CACHE[name] = indication.strip()
66
67 VALIDATION_CACHE['CACHE_VALID'] = True
68
69 return VALIDATION_CACHE
70
71
72 def invalidate_validation_cache():
73 global VALIDATION_CACHE
74 VALIDATION_CACHE = {}
75
76 return VALIDATION_CACHE
77
78
79 @register.filter
80 def obj_id(obj):
81 """
82 Generates the field's ID for a given model's field.
83
84 :returns "1234" or "1234_new-0"
85 """
86 return "{0}".format(retrieve_id(obj))
87
88
89 @register.filter
90 def field_id(obj, field):
91 """
92 Generates the field's ID for a given model's field.
93
94 :returns "rsr_project.title.1234"
95 """
96 return "{0}.{1}.{2}".format(
97 retrieve_model(obj)._meta.db_table,
98 field,
99 retrieve_id(obj)
100 )
101
102
103 @register.filter
104 def field_class(obj, field):
105 """
106 Similar to field_id(), but without the ID and a - to separate the fields. This is needed to
107 identify the class of a typeahead field.
108
109 :returns "rsr_relatedproject-related_project"
110 """
111 return "{0}-{1}".format(
112 retrieve_model(obj)._meta.db_table,
113 field
114 )
115
116
117 @register.filter
118 def field_class_id(obj, field):
119 """
120 Similar to field_id(), but without the ID and a - to separate the fields. This is needed to
121 identify the class of a typeahead field.
122
123 :returns "rsr_relatedproject-related_project"
124 """
125 return "{0}-{1}-{2}".format(
126 retrieve_model(obj)._meta.db_table,
127 field,
128 retrieve_id(obj)
129 )
130
131
132 @register.filter
133 def field_name(obj, field):
134 """
135 Retrieves the field's name for a given model's field, and makes first character uppercase. Also
136 taking into account that 'IATI' should always be in uppercase.
137
138 :returns "Project title"
139 """
140 def check_iati_in_name(name):
141 """Checks whether IATI is in the field name and capitalises that part."""
142 return name.replace('iati', 'IATI').replace('Iati', 'IATI')
143
144 return check_iati_in_name(retrieve_model(obj)._meta.get_field(field).verbose_name.capitalize())
145
146
147 @register.filter
148 def field_model_name(obj):
149 """
150 Retrieves the field's model name, and makes first character uppercase.
151
152 :returns "Related project"
153 """
154 return retrieve_model(obj)._meta.verbose_name.capitalize()
155
156
157 @register.filter
158 def help_text(obj, field):
159 """
160 Retrieves the help text for a given model's field.
161
162 :returns "If you are reporting multiple levels of projects in RSR, you can specify whether
163 this is a core, sub, or lower sub activity here."
164 """
165 return retrieve_model(obj)._meta.get_field(field).help_text
166
167
168 @register.filter
169 def max_length(obj, field):
170 """
171 Retrieves the max length of a given model's field.
172
173 :returns 100
174 """
175 return retrieve_model(obj)._meta.get_field(field).max_length
176
177
178 @register.filter
179 def value(obj, field):
180 """
181 Retrieves the value of a given object's field.
182
183 In case the object is a string, the supplied model and field are retrieved, and
184 the default value of the field returned, or an empty string if no default is specified.
185
186 In case the object is a Django object, the value of that object is retrieved.
187 If the object is a related object (e.g. ForeignKey), the primary key of the related object
188 is returned.
189
190 :returns "Project title"
191 :returns 1234 (in case of related object)
192 """
193 if isinstance(obj, basestring):
194 return ''
195 else:
196 field_value = getattr(obj, field)
197 if hasattr(field_value, 'pk'):
198 return field_value.pk
199 elif hasattr(field_value, 'url'):
200 return field_value.url
201 elif field_value is True:
202 return '1'
203 elif field_value is False:
204 return '2'
205 elif field_value in [0, 0.]:
206 return '0'
207 else:
208 return field_value or ''
209
210
211 @register.filter
212 def choices(obj, field):
213 """
214 Retrieves the choices of a given object's field and the IDs of the choices
215
216 :returns [((1, "Core Activity"), (2, "Sub Activity"), (3, "Lower Sub Activity")), [1, 2, 3]]
217 """
218
219 def first_items_list(iterable):
220 return [item[0] for item in iterable]
221
222 def values_list_of(model, *fields):
223 if isinstance(model, QuerySet):
224 objects = model
225 else:
226 objects = get_model('rsr', model).objects.all()
227 return objects.values_list(*fields)
228
229 def choices_and_ids(model, *fields):
230 choices_list = values_list_of(model, *fields)
231 return [
232 choices_list,
233 first_items_list(choices_list)
234 ]
235
236 model = retrieve_model(obj)
237 model_field = model._meta.get_field(field)
238
239 if not isinstance(model_field, models.ForeignKey):
240 return [model_field.choices, first_items_list(model_field.choices)]
241
242 elif isinstance(obj, get_model('rsr', 'BudgetItem')) or \
243 (isinstance(obj, basestring) and 'BudgetItem' in obj):
244 # The ForeignKey field on budget items is the budget item labels
245 return choices_and_ids('budgetitemlabel', 'id', 'label')
246
247 elif isinstance(obj, get_model('rsr', 'ProjectLocation')) or \
248 (isinstance(obj, basestring) and 'ProjectLocation' in obj):
249 # The ForeignKey field on locations is the countries
250 return choices_and_ids('country', 'id', 'name')
251
252 elif isinstance(obj, get_model('rsr', 'IndicatorLabel')) or \
253 (isinstance(obj, basestring) and 'IndicatorLabel' in obj):
254
255 if isinstance(obj, basestring) and 'IndicatorLabel' in obj:
256 # String looking like: u'IndicatorLabel.5577_22634_19197', 5577 is the project ID
257 project_pk = obj.split('.')[1].split('_')[0]
258 project = get_model('rsr', 'Project').objects.get(pk=project_pk)
259 else:
260 project = obj.indicator.result.project
261 organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(
262 organisation=project.primary_organisation
263 )
264 return choices_and_ids(organisation_indicator_labels, 'id', 'label')
265
266
267 @register.filter
268 def manytomany_value(obj):
269 """
270 Retrieves the id of a given object's field.
271
272 :returns ((1, "Akvo/Chum"), (2, "Yep"))
273 """
274 return '' if isinstance(obj, basestring) else obj.pk
275
276
277 @register.filter
278 def manytomany_choices(obj, field):
279 """
280 Retrieves the choices of a given object's Model. All objects of that Model should be displayed.
281
282 :returns ((1, "Akvo/Chum"), (2, "Yep"))
283 """
284 return retrieve_model(obj).objects.all().values_list('id', field)
285
286
287 @register.filter
288 def mandatory_or_hidden(validations, field):
289 """
290 Retrieves the mandatory and hidden fields for project editor validations.
291
292 :returns A string of the form mandatory-{validation ID} and/or hidden-{validation_ID}
293 """
294
295 if '.' in field:
296 # Model fields like 'rsr_relatedproject.12.relation'
297 field_name_list = field.split('.')
298 new_field_name = '.'.join([field_name_list[0], field_name_list[1]])
299
300 else:
301 # Full models like 'rsr_relatedproject'
302 new_field_name = field
303
304 indication = get_validations().get(new_field_name, '')
305
306 # XXX: Not sure why exactly 'rsr_project.current_image' needs to be special
307 # cased, but this code just retains the behavior of the older code.
308 if field == 'rsr_project.current_image':
309 indication = ' '.join(
310 filter(lambda x: not x.startswith('mandatory'), indication.split())
311 )
312
313 return indication
314
[end of akvo/rsr/templatetags/project_editor.py]
[start of akvo/rsr/models/organisation_indicator_label.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo Reporting is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.db import models
9 from django.utils.translation import ugettext_lazy as _
10
11 from ..fields import ValidXMLCharField
12
13
14 class OrganisationIndicatorLabel(models.Model):
15 from .organisation import Organisation
16
17 organisation = models.ForeignKey(Organisation, verbose_name=_(u'organisation'),
18 related_name='indicator_labels')
19 label = ValidXMLCharField(_(u'label'), max_length=100)
20
21 class Meta:
22 app_label = 'rsr'
23 verbose_name = _(u'organisation indicator label')
24 verbose_name_plural = _(u'organisation indicator labels')
25 unique_together = ('organisation', 'label')
26
27 def __unicode__(self):
28 return self.label
29
[end of akvo/rsr/models/organisation_indicator_label.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/models/organisation_indicator_label.py b/akvo/rsr/models/organisation_indicator_label.py
--- a/akvo/rsr/models/organisation_indicator_label.py
+++ b/akvo/rsr/models/organisation_indicator_label.py
@@ -23,6 +23,7 @@
verbose_name = _(u'organisation indicator label')
verbose_name_plural = _(u'organisation indicator labels')
unique_together = ('organisation', 'label')
+ ordering = ('organisation', 'label')
def __unicode__(self):
return self.label
diff --git a/akvo/rsr/templatetags/project_editor.py b/akvo/rsr/templatetags/project_editor.py
--- a/akvo/rsr/templatetags/project_editor.py
+++ b/akvo/rsr/templatetags/project_editor.py
@@ -259,8 +259,8 @@
else:
project = obj.indicator.result.project
organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(
- organisation=project.primary_organisation
- )
+ organisation=project.all_partners()
+ ).distinct()
return choices_and_ids(organisation_indicator_labels, 'id', 'label')
| {"golden_diff": "diff --git a/akvo/rsr/models/organisation_indicator_label.py b/akvo/rsr/models/organisation_indicator_label.py\n--- a/akvo/rsr/models/organisation_indicator_label.py\n+++ b/akvo/rsr/models/organisation_indicator_label.py\n@@ -23,6 +23,7 @@\n verbose_name = _(u'organisation indicator label')\n verbose_name_plural = _(u'organisation indicator labels')\n unique_together = ('organisation', 'label')\n+ ordering = ('organisation', 'label')\n \n def __unicode__(self):\n return self.label\ndiff --git a/akvo/rsr/templatetags/project_editor.py b/akvo/rsr/templatetags/project_editor.py\n--- a/akvo/rsr/templatetags/project_editor.py\n+++ b/akvo/rsr/templatetags/project_editor.py\n@@ -259,8 +259,8 @@\n else:\n project = obj.indicator.result.project\n organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(\n- organisation=project.primary_organisation\n- )\n+ organisation=project.all_partners()\n+ ).distinct()\n return choices_and_ids(organisation_indicator_labels, 'id', 'label')\n", "issue": "Show OrganisationIndicatorLabels belonging to all partners\nThe organisation with indicator labels defined, may not always be the primary organisation. Currently, as an interim solution we display labels defined by all organisations. \r\n\r\nWhen there are multiple organizations with lots of labels (and possibly some of them very similar), this can turn out to be a problem. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django import template\nfrom django.db import models\nfrom django.db.models import get_model, QuerySet\n\nfrom akvo.rsr.models import ProjectEditorValidation\n\nregister = template.Library()\nVALIDATION_CACHE = {}\n\n\ndef retrieve_model(obj):\n \"\"\"\n Retrieves the model from 'obj', which can be either a Django Object or a string.\n \"\"\"\n return get_model('rsr', obj.split('.')[0]) if isinstance(obj, basestring) else type(obj)\n\n\ndef retrieve_id(obj):\n \"\"\"\n Retrieves the id from 'obj', which can be either a Django Object or a string.\n \"\"\"\n if not isinstance(obj, basestring):\n try:\n return obj.id\n except AttributeError:\n return obj.project.id\n else:\n return \"{0}_{1}\".format(obj.split('.')[1], \"new-0\")\n\n\ndef get_validations():\n \"\"\" Populate the VALIDATION_CACHE and return it.\"\"\"\n\n if VALIDATION_CACHE.get('CACHE_VALID', False):\n return VALIDATION_CACHE\n\n fields = ('validation', 'action', 'validation_set__pk')\n for name, action, validation_set in ProjectEditorValidation.objects.values_list(*fields):\n if action == ProjectEditorValidation.MANDATORY_ACTION:\n action = 'mandatory'\n elif action == ProjectEditorValidation.HIDDEN_ACTION:\n action = 'hidden'\n\n names = name.split('||')\n for name in names:\n indication = VALIDATION_CACHE.get(name, '')\n indication += ' {0}-{1} '.format(action, validation_set)\n\n if action == 'mandatory' and len(names) > 1:\n other_names = set(names) - set([name])\n for or_name in other_names:\n indication += 'mandatory-{0}-or-{1} '.format(\n validation_set, or_name.split('.')[1]\n )\n\n VALIDATION_CACHE[name] = indication.strip()\n\n VALIDATION_CACHE['CACHE_VALID'] = True\n\n return VALIDATION_CACHE\n\n\ndef invalidate_validation_cache():\n global VALIDATION_CACHE\n VALIDATION_CACHE = 
{}\n\n return VALIDATION_CACHE\n\n\[email protected]\ndef obj_id(obj):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"1234\" or \"1234_new-0\"\n \"\"\"\n return \"{0}\".format(retrieve_id(obj))\n\n\[email protected]\ndef field_id(obj, field):\n \"\"\"\n Generates the field's ID for a given model's field.\n\n :returns \"rsr_project.title.1234\"\n \"\"\"\n return \"{0}.{1}.{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_class(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}\".format(\n retrieve_model(obj)._meta.db_table,\n field\n )\n\n\[email protected]\ndef field_class_id(obj, field):\n \"\"\"\n Similar to field_id(), but without the ID and a - to separate the fields. This is needed to\n identify the class of a typeahead field.\n\n :returns \"rsr_relatedproject-related_project\"\n \"\"\"\n return \"{0}-{1}-{2}\".format(\n retrieve_model(obj)._meta.db_table,\n field,\n retrieve_id(obj)\n )\n\n\[email protected]\ndef field_name(obj, field):\n \"\"\"\n Retrieves the field's name for a given model's field, and makes first character uppercase. Also\n taking into account that 'IATI' should always be in uppercase.\n\n :returns \"Project title\"\n \"\"\"\n def check_iati_in_name(name):\n \"\"\"Checks whether IATI is in the field name and capitalises that part.\"\"\"\n return name.replace('iati', 'IATI').replace('Iati', 'IATI')\n\n return check_iati_in_name(retrieve_model(obj)._meta.get_field(field).verbose_name.capitalize())\n\n\[email protected]\ndef field_model_name(obj):\n \"\"\"\n Retrieves the field's model name, and makes first character uppercase.\n\n :returns \"Related project\"\n \"\"\"\n return retrieve_model(obj)._meta.verbose_name.capitalize()\n\n\[email protected]\ndef help_text(obj, field):\n \"\"\"\n Retrieves the help text for a given model's field.\n\n :returns \"If you are reporting multiple levels of projects in RSR, you can specify whether\n this is a core, sub, or lower sub activity here.\"\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).help_text\n\n\[email protected]\ndef max_length(obj, field):\n \"\"\"\n Retrieves the max length of a given model's field.\n\n :returns 100\n \"\"\"\n return retrieve_model(obj)._meta.get_field(field).max_length\n\n\[email protected]\ndef value(obj, field):\n \"\"\"\n Retrieves the value of a given object's field.\n\n In case the object is a string, the supplied model and field are retrieved, and\n the default value of the field returned, or an empty string if no default is specified.\n\n In case the object is a Django object, the value of that object is retrieved.\n If the object is a related object (e.g. 
ForeignKey), the primary key of the related object\n is returned.\n\n :returns \"Project title\"\n :returns 1234 (in case of related object)\n \"\"\"\n if isinstance(obj, basestring):\n return ''\n else:\n field_value = getattr(obj, field)\n if hasattr(field_value, 'pk'):\n return field_value.pk\n elif hasattr(field_value, 'url'):\n return field_value.url\n elif field_value is True:\n return '1'\n elif field_value is False:\n return '2'\n elif field_value in [0, 0.]:\n return '0'\n else:\n return field_value or ''\n\n\[email protected]\ndef choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's field and the IDs of the choices\n\n :returns [((1, \"Core Activity\"), (2, \"Sub Activity\"), (3, \"Lower Sub Activity\")), [1, 2, 3]]\n \"\"\"\n\n def first_items_list(iterable):\n return [item[0] for item in iterable]\n\n def values_list_of(model, *fields):\n if isinstance(model, QuerySet):\n objects = model\n else:\n objects = get_model('rsr', model).objects.all()\n return objects.values_list(*fields)\n\n def choices_and_ids(model, *fields):\n choices_list = values_list_of(model, *fields)\n return [\n choices_list,\n first_items_list(choices_list)\n ]\n\n model = retrieve_model(obj)\n model_field = model._meta.get_field(field)\n\n if not isinstance(model_field, models.ForeignKey):\n return [model_field.choices, first_items_list(model_field.choices)]\n\n elif isinstance(obj, get_model('rsr', 'BudgetItem')) or \\\n (isinstance(obj, basestring) and 'BudgetItem' in obj):\n # The ForeignKey field on budget items is the budget item labels\n return choices_and_ids('budgetitemlabel', 'id', 'label')\n\n elif isinstance(obj, get_model('rsr', 'ProjectLocation')) or \\\n (isinstance(obj, basestring) and 'ProjectLocation' in obj):\n # The ForeignKey field on locations is the countries\n return choices_and_ids('country', 'id', 'name')\n\n elif isinstance(obj, get_model('rsr', 'IndicatorLabel')) or \\\n (isinstance(obj, basestring) and 'IndicatorLabel' in obj):\n\n if isinstance(obj, basestring) and 'IndicatorLabel' in obj:\n # String looking like: u'IndicatorLabel.5577_22634_19197', 5577 is the project ID\n project_pk = obj.split('.')[1].split('_')[0]\n project = get_model('rsr', 'Project').objects.get(pk=project_pk)\n else:\n project = obj.indicator.result.project\n organisation_indicator_labels = get_model('rsr', 'OrganisationIndicatorLabel').objects.filter(\n organisation=project.primary_organisation\n )\n return choices_and_ids(organisation_indicator_labels, 'id', 'label')\n\n\[email protected]\ndef manytomany_value(obj):\n \"\"\"\n Retrieves the id of a given object's field.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return '' if isinstance(obj, basestring) else obj.pk\n\n\[email protected]\ndef manytomany_choices(obj, field):\n \"\"\"\n Retrieves the choices of a given object's Model. All objects of that Model should be displayed.\n\n :returns ((1, \"Akvo/Chum\"), (2, \"Yep\"))\n \"\"\"\n return retrieve_model(obj).objects.all().values_list('id', field)\n\n\[email protected]\ndef mandatory_or_hidden(validations, field):\n \"\"\"\n Retrieves the mandatory and hidden fields for project editor validations.\n\n :returns A string of the form mandatory-{validation ID} and/or hidden-{validation_ID}\n \"\"\"\n\n if '.' 
in field:\n # Model fields like 'rsr_relatedproject.12.relation'\n field_name_list = field.split('.')\n new_field_name = '.'.join([field_name_list[0], field_name_list[1]])\n\n else:\n # Full models like 'rsr_relatedproject'\n new_field_name = field\n\n indication = get_validations().get(new_field_name, '')\n\n # XXX: Not sure why exactly 'rsr_project.current_image' needs to be special\n # cased, but this code just retains the behavior of the older code.\n if field == 'rsr_project.current_image':\n indication = ' '.join(\n filter(lambda x: not x.startswith('mandatory'), indication.split())\n )\n\n return indication\n", "path": "akvo/rsr/templatetags/project_editor.py"}, {"content": "# -*- coding: utf-8 -*-\n\n# Akvo Reporting is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\n\nclass OrganisationIndicatorLabel(models.Model):\n from .organisation import Organisation\n\n organisation = models.ForeignKey(Organisation, verbose_name=_(u'organisation'),\n related_name='indicator_labels')\n label = ValidXMLCharField(_(u'label'), max_length=100)\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'organisation indicator label')\n verbose_name_plural = _(u'organisation indicator labels')\n unique_together = ('organisation', 'label')\n\n def __unicode__(self):\n return self.label\n", "path": "akvo/rsr/models/organisation_indicator_label.py"}]} | 4,001 | 277 |
gh_patches_debug_8998 | rasdani/github-patches | git_diff | Gallopsled__pwntools-1706 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
socket.socket doesn't have unrecv: bug in UDP server
https://github.com/Gallopsled/pwntools/blob/5e279e7993f1f57cba2ba128f6bd8a27c19ea25f/pwnlib/tubes/server.py#L120
As mentioned above, this is a bug.
The line should be:
```python
self.unrecv(data)
```
if it's necessary at all.
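
For context, `unrecv()` is a method on pwnlib tubes (it pushes data back into the tube's internal buffer), not on plain sockets, which is easy to confirm with a minimal standalone check, independent of the pwntools code below:

```python
import socket

# A raw UDP socket, as used in the accepter's SOCK_DGRAM branch.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(hasattr(s, "unrecv"))  # False -- socket.socket has no unrecv() method
s.close()
```

So the datagram read by `recvfrom()` would have to be buffered on the tube itself (i.e. via `self.unrecv(data)`) if it is to be kept at all.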
</issue>
<code>
[start of pwnlib/tubes/server.py]
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import errno
5 import socket
6 import threading
7
8 from pwnlib.context import context
9 from pwnlib.log import getLogger
10 from pwnlib.tubes.sock import sock
11 from pwnlib.tubes.remote import remote
12
13 log = getLogger(__name__)
14
15 class server(sock):
16 r"""Creates an TCP or UDP-server to listen for connections. It supports
17 both IPv4 and IPv6.
18
19 Arguments:
20 port(int): The port to connect to.
21 Defaults to a port auto-selected by the operating system.
22 bindaddr(str): The address to bind to.
23 Defaults to ``0.0.0.0`` / `::`.
24 fam: The string "any", "ipv4" or "ipv6" or an integer to pass to :func:`socket.getaddrinfo`.
25 typ: The string "tcp" or "udp" or an integer to pass to :func:`socket.getaddrinfo`.
26 callback: A function to be started on incoming connections. It should take a :class:`pwnlib.tubes.remote` as its only argument.
27
28 Examples:
29
30 >>> s = server(8888)
31 >>> client_conn = remote('localhost', s.lport)
32 >>> server_conn = s.next_connection()
33 >>> client_conn.sendline(b'Hello')
34 >>> server_conn.recvline()
35 b'Hello\n'
36 >>> def cb(r):
37 ... client_input = r.readline()
38 ... r.send(client_input[::-1])
39 ...
40 >>> t = server(8889, callback=cb)
41 >>> client_conn = remote('localhost', t.lport)
42 >>> client_conn.sendline(b'callback')
43 >>> client_conn.recv()
44 b'\nkcabllac'
45 """
46
47 #: Local port
48 lport = 0
49
50 #: Local host
51 lhost = None
52
53 #: Socket type (e.g. socket.SOCK_STREAM)
54 type = None
55
56 #: Socket family
57 family = None
58
59 #: Socket protocol
60 protocol = None
61
62 #: Canonical name of the listening interface
63 canonname = None
64
65 #: Sockaddr structure that is being listened on
66 sockaddr = None
67
68 _accepter = None
69
70 def __init__(self, port=0, bindaddr = "0.0.0.0", fam = "any", typ = "tcp",
71 callback = None, blocking = False, *args, **kwargs):
72 super(server, self).__init__(*args, **kwargs)
73
74 port = int(port)
75 fam = {socket.AF_INET: 'ipv4',
76 socket.AF_INET6: 'ipv6'}.get(fam, fam)
77
78 fam = self._get_family(fam)
79 typ = self._get_type(typ)
80
81 if fam == socket.AF_INET6 and bindaddr == '0.0.0.0':
82 bindaddr = '::'
83
84 h = self.waitfor('Trying to bind to %s on port %d' % (bindaddr, port))
85
86 for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):
87 self.family, self.type, self.proto, self.canonname, self.sockaddr = res
88
89 if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:
90 continue
91
92 h.status("Trying %s" % self.sockaddr[0])
93 listen_sock = socket.socket(self.family, self.type, self.proto)
94 listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
95 listen_sock.bind(self.sockaddr)
96 self.lhost, self.lport = listen_sock.getsockname()[:2]
97 if self.type == socket.SOCK_STREAM:
98 listen_sock.listen(1)
99 break
100 else:
101 h.failure()
102 self.error("Could not bind to %s on port %d" % (bindaddr, port))
103
104 h.success()
105
106 self.sock = listen_sock
107 self.connections_waiting = threading.Event()
108 self.connections = []
109 def accepter():
110 while True:
111 h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))
112 while True:
113 try:
114 if self.type == socket.SOCK_STREAM:
115 sock, rhost = listen_sock.accept()
116 else:
117 data, rhost = listen_sock.recvfrom(4096)
118 listen_sock.connect(rhost)
119 sock = listen_sock
120 sock.unrecv(data)
121 sock.settimeout(self.timeout)
122 break
123 except socket.error as e:
124 if e.errno == errno.EINTR:
125 continue
126 h.failure()
127 self.exception("Socket failure while waiting for connection")
128 sock = None
129 return
130
131 self.rhost, self.rport = rhost[:2]
132 r = remote(self.rhost, self.rport, sock = sock)
133 h.success('Got connection from %s on port %d' % (self.rhost, self.rport))
134 if callback:
135 if not blocking:
136 t = context.Thread(target = callback, args = (r,))
137 t.daemon = True
138 t.start()
139 else:
140 callback(r)
141 else:
142 self.connections.append(r)
143 if not self.connections_waiting.is_set():
144 self.connections_waiting.set()
145
146 self._accepter = context.Thread(target = accepter)
147 self._accepter.daemon = True
148 self._accepter.start()
149
150 def next_connection(self):
151 if not self.connections_waiting.is_set():
152 self.connections_waiting.wait()
153 conn = self.connections.pop(0)
154 if not self.connections:
155 self.connections_waiting.clear()
156 return conn
157
158 def close(self):
159 # since `close` is scheduled to run on exit we must check that we got
160 # a connection or the program will hang in the `join` call above
161 if self._accepter and self._accepter.is_alive():
162 return
163 super(server, self).close()
164
[end of pwnlib/tubes/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwnlib/tubes/server.py b/pwnlib/tubes/server.py
--- a/pwnlib/tubes/server.py
+++ b/pwnlib/tubes/server.py
@@ -117,7 +117,7 @@
data, rhost = listen_sock.recvfrom(4096)
listen_sock.connect(rhost)
sock = listen_sock
- sock.unrecv(data)
+ self.unrecv(data)
sock.settimeout(self.timeout)
break
except socket.error as e:
| {"golden_diff": "diff --git a/pwnlib/tubes/server.py b/pwnlib/tubes/server.py\n--- a/pwnlib/tubes/server.py\n+++ b/pwnlib/tubes/server.py\n@@ -117,7 +117,7 @@\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n sock = listen_sock\n- sock.unrecv(data)\n+ self.unrecv(data)\n sock.settimeout(self.timeout)\n break\n except socket.error as e:\n", "issue": "socket.socket doesnt have unrecv. bug in udp server\nhttps://github.com/Gallopsled/pwntools/blob/5e279e7993f1f57cba2ba128f6bd8a27c19ea25f/pwnlib/tubes/server.py#L120\r\n\r\nas mentioned above this is a bug.\r\n\r\nline should be\r\n```python\r\nself.unrecv(data)\r\n```\r\nif its necessary at all\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport socket\nimport threading\n\nfrom pwnlib.context import context\nfrom pwnlib.log import getLogger\nfrom pwnlib.tubes.sock import sock\nfrom pwnlib.tubes.remote import remote\n\nlog = getLogger(__name__)\n\nclass server(sock):\n r\"\"\"Creates an TCP or UDP-server to listen for connections. It supports\n both IPv4 and IPv6.\n\n Arguments:\n port(int): The port to connect to.\n Defaults to a port auto-selected by the operating system.\n bindaddr(str): The address to bind to.\n Defaults to ``0.0.0.0`` / `::`.\n fam: The string \"any\", \"ipv4\" or \"ipv6\" or an integer to pass to :func:`socket.getaddrinfo`.\n typ: The string \"tcp\" or \"udp\" or an integer to pass to :func:`socket.getaddrinfo`.\n callback: A function to be started on incoming connections. It should take a :class:`pwnlib.tubes.remote` as its only argument.\n\n Examples:\n\n >>> s = server(8888)\n >>> client_conn = remote('localhost', s.lport)\n >>> server_conn = s.next_connection()\n >>> client_conn.sendline(b'Hello')\n >>> server_conn.recvline()\n b'Hello\\n'\n >>> def cb(r):\n ... client_input = r.readline()\n ... r.send(client_input[::-1])\n ...\n >>> t = server(8889, callback=cb)\n >>> client_conn = remote('localhost', t.lport)\n >>> client_conn.sendline(b'callback')\n >>> client_conn.recv()\n b'\\nkcabllac'\n \"\"\"\n\n #: Local port\n lport = 0\n\n #: Local host\n lhost = None\n\n #: Socket type (e.g. 
socket.SOCK_STREAM)\n type = None\n\n #: Socket family\n family = None\n\n #: Socket protocol\n protocol = None\n\n #: Canonical name of the listening interface\n canonname = None\n\n #: Sockaddr structure that is being listened on\n sockaddr = None\n\n _accepter = None\n\n def __init__(self, port=0, bindaddr = \"0.0.0.0\", fam = \"any\", typ = \"tcp\",\n callback = None, blocking = False, *args, **kwargs):\n super(server, self).__init__(*args, **kwargs)\n\n port = int(port)\n fam = {socket.AF_INET: 'ipv4',\n socket.AF_INET6: 'ipv6'}.get(fam, fam)\n\n fam = self._get_family(fam)\n typ = self._get_type(typ)\n\n if fam == socket.AF_INET6 and bindaddr == '0.0.0.0':\n bindaddr = '::'\n\n h = self.waitfor('Trying to bind to %s on port %d' % (bindaddr, port))\n\n for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):\n self.family, self.type, self.proto, self.canonname, self.sockaddr = res\n\n if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:\n continue\n\n h.status(\"Trying %s\" % self.sockaddr[0])\n listen_sock = socket.socket(self.family, self.type, self.proto)\n listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n listen_sock.bind(self.sockaddr)\n self.lhost, self.lport = listen_sock.getsockname()[:2]\n if self.type == socket.SOCK_STREAM:\n listen_sock.listen(1)\n break\n else:\n h.failure()\n self.error(\"Could not bind to %s on port %d\" % (bindaddr, port))\n\n h.success()\n\n self.sock = listen_sock\n self.connections_waiting = threading.Event()\n self.connections = []\n def accepter():\n while True:\n h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))\n while True:\n try:\n if self.type == socket.SOCK_STREAM:\n sock, rhost = listen_sock.accept()\n else:\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n sock = listen_sock\n sock.unrecv(data)\n sock.settimeout(self.timeout)\n break\n except socket.error as e:\n if e.errno == errno.EINTR:\n continue\n h.failure()\n self.exception(\"Socket failure while waiting for connection\")\n sock = None\n return\n\n self.rhost, self.rport = rhost[:2]\n r = remote(self.rhost, self.rport, sock = sock)\n h.success('Got connection from %s on port %d' % (self.rhost, self.rport))\n if callback:\n if not blocking:\n t = context.Thread(target = callback, args = (r,))\n t.daemon = True\n t.start()\n else:\n callback(r)\n else:\n self.connections.append(r)\n if not self.connections_waiting.is_set():\n self.connections_waiting.set()\n\n self._accepter = context.Thread(target = accepter)\n self._accepter.daemon = True\n self._accepter.start()\n\n def next_connection(self):\n if not self.connections_waiting.is_set():\n self.connections_waiting.wait()\n conn = self.connections.pop(0)\n if not self.connections:\n self.connections_waiting.clear()\n return conn\n\n def close(self):\n # since `close` is scheduled to run on exit we must check that we got\n # a connection or the program will hang in the `join` call above\n if self._accepter and self._accepter.is_alive():\n return\n super(server, self).close()\n", "path": "pwnlib/tubes/server.py"}]} | 2,320 | 114 |
gh_patches_debug_931 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-7353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: thumbnail cards are not loading the preview image
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
I'm just getting a black image, and if I try to update an image, it goes black too.
It was working before checkpoints were added; I don't know if that's related.
### Steps to reproduce the problem
1. Go to ....
2. Press ....
3. ...
### What should have happened?
The preview images should be visible.
### Commit where the problem happens
0a8515085ef258d4b76fdc000f7ed9d55751d6b8
### What platforms do you use to access the UI ?
_No response_
### What browsers do you use to access the UI ?
_No response_
### Command Line Arguments
```Shell
--api --cors-allow-origins http://localhost:5173 --administrator --no-half-vae --no-half --disable-safe-unpickle --force-cpu --xformers
```
### List of extensions
all of them
### Console logs
```Shell
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 391, in app_encryption_middleware
res: StreamingResponse = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in fetch_file
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in <listcomp>
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\Python\Python310\lib\pathlib.py", line 960, in __new__
self = cls._from_parts(args)
File "D:\Python\Python310\lib\pathlib.py", line 594, in _from_parts
drv, root, parts = self._parse_args(args)
File "D:\Python\Python310\lib\pathlib.py", line 578, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
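
The final `TypeError` points at `ui_extra_networks.py` building `Path(x)` for every entry in `allowed_dirs`, and one of those entries is `None` (most likely `shared.cmd_opts.ckpt_dir`, which stays unset unless `--ckpt-dir` is passed). A minimal sketch of the kind of guard that avoids this — a hypothetical helper for illustration, not the actual project code:

```python
from pathlib import Path

def resolve_allowed_dirs(candidates):
    """Drop unset (None) entries so Path() never receives a NoneType."""
    return [Path(d).resolve() for d in candidates if d is not None]

# cmd_opts.ckpt_dir is typically None when --ckpt-dir is not given on the command line
print(resolve_allowed_dirs([None, "models/Stable-diffusion"]))
```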
### Additional information
_No response_
</issue>
<code>
[start of modules/ui_extra_networks_checkpoints.py]
1 import html
2 import json
3 import os
4 import urllib.parse
5
6 from modules import shared, ui_extra_networks, sd_models
7
8
9 class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):
10 def __init__(self):
11 super().__init__('Checkpoints')
12
13 def refresh(self):
14 shared.refresh_checkpoints()
15
16 def list_items(self):
17 for name, checkpoint in sd_models.checkpoints_list.items():
18 path, ext = os.path.splitext(checkpoint.filename)
19 previews = [path + ".png", path + ".preview.png"]
20
21 preview = None
22 for file in previews:
23 if os.path.isfile(file):
24 preview = self.link_preview(file)
25 break
26
27 yield {
28 "name": checkpoint.name_for_extra,
29 "filename": path,
30 "preview": preview,
31 "search_term": self.search_terms_from_path(checkpoint.filename),
32 "onclick": '"' + html.escape(f"""return selectCheckpoint({json.dumps(name)})""") + '"',
33 "local_preview": path + ".png",
34 }
35
36 def allowed_directories_for_previews(self):
37 return [shared.cmd_opts.ckpt_dir, sd_models.model_path]
38
39
[end of modules/ui_extra_networks_checkpoints.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modules/ui_extra_networks_checkpoints.py b/modules/ui_extra_networks_checkpoints.py
--- a/modules/ui_extra_networks_checkpoints.py
+++ b/modules/ui_extra_networks_checkpoints.py
@@ -34,5 +34,5 @@
}
def allowed_directories_for_previews(self):
- return [shared.cmd_opts.ckpt_dir, sd_models.model_path]
+ return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]
| {"golden_diff": "diff --git a/modules/ui_extra_networks_checkpoints.py b/modules/ui_extra_networks_checkpoints.py\n--- a/modules/ui_extra_networks_checkpoints.py\n+++ b/modules/ui_extra_networks_checkpoints.py\n@@ -34,5 +34,5 @@\n }\r\n \r\n def allowed_directories_for_previews(self):\r\n- return [shared.cmd_opts.ckpt_dir, sd_models.model_path]\r\n+ return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]\n", "issue": "[Bug]: thumbnail cards are not loading the preview image\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\njust getting black image, and if I try to update an image, it goes black too.\r\n\r\nIt was working before checkpoints were added, I don't know if that's related.\n\n### Steps to reproduce the problem\n\n1. Go to .... \r\n2. Press ....\r\n3. ...\r\n\n\n### What should have happened?\n\nshould see the preview images\n\n### Commit where the problem happens\n\n0a8515085ef258d4b76fdc000f7ed9d55751d6b8\n\n### What platforms do you use to access the UI ?\n\n_No response_\n\n### What browsers do you use to access the UI ?\n\n_No response_\n\n### Command Line Arguments\n\n```Shell\n--api --cors-allow-origins http://localhost:5173 --administrator --no-half-vae --no-half --disable-safe-unpickle --force-cpu --xformers\n```\n\n\n### List of extensions\n\nall of them\n\n### Console logs\n\n```Shell\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 94, in receive\r\n return self.receive_nowait()\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 89, in receive_nowait\r\n raise WouldBlock\r\nanyio.WouldBlock\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 77, in call_next\r\n message = await recv_stream.receive()\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\streams\\memory.py\", line 114, in receive\r\n raise EndOfStream\r\nanyio.EndOfStream\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 407, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 184, in __call__\r\n raise exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 106, in __call__\r\n response = await 
self.dispatch_func(request, call_next)\r\n File \"D:\\stable-diffusion-webui\\extensions\\auto-sd-paint-ext\\backend\\app.py\", line 391, in app_encryption_middleware\r\n res: StreamingResponse = await call_next(req)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 80, in call_next\r\n raise app_exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 69, in coro\r\n await self.app(scope, receive_or_disconnect, send_no_error)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 106, in __call__\r\n response = await self.dispatch_func(request, call_next)\r\n File \"D:\\stable-diffusion-webui\\modules\\api\\api.py\", line 96, in log_and_time\r\n res: Response = await call_next(req)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 80, in call_next\r\n raise app_exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\base.py\", line 69, in coro\r\n await self.app(scope, receive_or_disconnect, send_no_error)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\gzip.py\", line 24, in __call__\r\n await responder(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\gzip.py\", line 43, in __call__\r\n await self.app(scope, receive, self.send_with_gzip)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\cors.py\", line 84, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 706, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\routing.py\", line 235, in app\r\n raw_response = await run_endpoint_function(\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\fastapi\\routing.py\", line 163, in run_endpoint_function\r\n return await run_in_threadpool(dependant.call, **values)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\starlette\\concurrency.py\", line 41, in run_in_threadpool\r\n return await anyio.to_thread.run_sync(func, *args)\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\to_thread.py\", line 31, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 937, in 
run_sync_in_worker_thread\r\n return await future\r\n File \"D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 867, in run\r\n result = context.run(func, *args)\r\n File \"D:\\stable-diffusion-webui\\modules\\ui_extra_networks.py\", line 28, in fetch_file\r\n if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):\r\n File \"D:\\stable-diffusion-webui\\modules\\ui_extra_networks.py\", line 28, in <listcomp>\r\n if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 960, in __new__\r\n self = cls._from_parts(args)\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 594, in _from_parts\r\n drv, root, parts = self._parse_args(args)\r\n File \"D:\\Python\\Python310\\lib\\pathlib.py\", line 578, in _parse_args\r\n a = os.fspath(a)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\n```\n\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import html\r\nimport json\r\nimport os\r\nimport urllib.parse\r\n\r\nfrom modules import shared, ui_extra_networks, sd_models\r\n\r\n\r\nclass ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):\r\n def __init__(self):\r\n super().__init__('Checkpoints')\r\n\r\n def refresh(self):\r\n shared.refresh_checkpoints()\r\n\r\n def list_items(self):\r\n for name, checkpoint in sd_models.checkpoints_list.items():\r\n path, ext = os.path.splitext(checkpoint.filename)\r\n previews = [path + \".png\", path + \".preview.png\"]\r\n\r\n preview = None\r\n for file in previews:\r\n if os.path.isfile(file):\r\n preview = self.link_preview(file)\r\n break\r\n\r\n yield {\r\n \"name\": checkpoint.name_for_extra,\r\n \"filename\": path,\r\n \"preview\": preview,\r\n \"search_term\": self.search_terms_from_path(checkpoint.filename),\r\n \"onclick\": '\"' + html.escape(f\"\"\"return selectCheckpoint({json.dumps(name)})\"\"\") + '\"',\r\n \"local_preview\": path + \".png\",\r\n }\r\n\r\n def allowed_directories_for_previews(self):\r\n return [shared.cmd_opts.ckpt_dir, sd_models.model_path]\r\n\r\n", "path": "modules/ui_extra_networks_checkpoints.py"}]} | 3,014 | 112 |
gh_patches_debug_24699 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-285 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
django_polymorphic breaks model generation
`django_polymorphic` adds mandatory fields (named `*_ptr`) to models, but gives them values when the model is created. Hypothesis sees these as normal non-nullable fields, which trigger the relevant health check. However, explicitly providing a value for one of these fields causes an exception to be thrown in the model's constructor.
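
One possible direction (sketched here purely as an illustration — the sentinel and helper below are hypothetical, not an existing Hypothesis API) is to let callers mark such auto-populated fields so they are neither generated nor passed to the model constructor:

```python
# Hypothetical sketch: a sentinel meaning "do not generate or pass this field".
default_value = object()

def drop_defaulted(field_strategies):
    """Remove fields marked with the sentinel before building the model strategy."""
    return {name: strat for name, strat in field_strategies.items()
            if strat is not default_value}

print(drop_defaulted({"name": "st.text()", "child_ptr": default_value}))
# {'name': 'st.text()'}
```

With something like this, a `*_ptr` field could be declared as "leave it to the model" instead of triggering the mandatory-field check or an explicit (and constructor-breaking) value.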
</issue>
<code>
[start of src/hypothesis/extra/django/models.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
4 #
5 # Most of this work is copyright (C) 2013-2015 David R. MacIver
6 # ([email protected]), but it contains contributions by others. See
7 # https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
8 # full list of people who may hold copyright, and consult the git log if you
9 # need to determine who owns an individual contribution.
10 #
11 # This Source Code Form is subject to the terms of the Mozilla Public License,
12 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
13 # obtain one at http://mozilla.org/MPL/2.0/.
14 #
15 # END HEADER
16
17 from __future__ import division, print_function, absolute_import
18
19 import django.db.models as dm
20 from django.db import IntegrityError
21
22 import hypothesis.strategies as st
23 import hypothesis.extra.fakefactory as ff
24 from hypothesis.errors import InvalidArgument
25 from hypothesis.extra.datetime import datetimes
26 from hypothesis.searchstrategy.strategies import SearchStrategy
27
28
29 class ModelNotSupported(Exception):
30 pass
31
32
33 def referenced_models(model, seen=None):
34 if seen is None:
35 seen = set()
36 for f in model._meta.concrete_fields:
37 if isinstance(f, dm.ForeignKey):
38 t = f.rel.to
39 if t not in seen:
40 seen.add(t)
41 referenced_models(t, seen)
42 return seen
43
44
45 __default_field_mappings = None
46
47
48 def field_mappings():
49 global __default_field_mappings
50
51 if __default_field_mappings is None:
52 __default_field_mappings = {
53 dm.SmallIntegerField: st.integers(-32768, 32767),
54 dm.IntegerField: st.integers(-2147483648, 2147483647),
55 dm.BigIntegerField:
56 st.integers(-9223372036854775808, 9223372036854775807),
57 dm.PositiveIntegerField: st.integers(0, 2147483647),
58 dm.PositiveSmallIntegerField: st.integers(0, 32767),
59 dm.BinaryField: st.binary(),
60 dm.BooleanField: st.booleans(),
61 dm.CharField: st.text(),
62 dm.TextField: st.text(),
63 dm.DateTimeField: datetimes(allow_naive=False),
64 dm.EmailField: ff.fake_factory(u'email'),
65 dm.FloatField: st.floats(),
66 dm.NullBooleanField: st.one_of(st.none(), st.booleans()),
67 }
68 return __default_field_mappings
69
70
71 def add_default_field_mapping(field_type, strategy):
72 field_mappings()[field_type] = strategy
73
74
75 def models(model, **extra):
76 result = {}
77 mappings = field_mappings()
78 mandatory = set()
79 for f in model._meta.concrete_fields:
80 if isinstance(f, dm.AutoField):
81 continue
82 try:
83 mapped = mappings[type(f)]
84 except KeyError:
85 if not f.null:
86 mandatory.add(f.name)
87 continue
88 if f.null:
89 mapped = st.one_of(st.none(), mapped)
90 result[f.name] = mapped
91 missed = {x for x in mandatory if x not in extra}
92 if missed:
93 raise InvalidArgument((
94 u'Missing arguments for mandatory field%s %s for model %s' % (
95 u's' if len(missed) > 1 else u'',
96 u', '.join(missed),
97 model.__name__,
98 )))
99 for k, v in extra.items():
100 if isinstance(v, SearchStrategy):
101 result[k] = v
102 else:
103 result[k] = st.just(v)
104 result.update(extra)
105 return ModelStrategy(model, result)
106
107
108 class ModelStrategy(SearchStrategy):
109
110 def __init__(self, model, mappings):
111 super(ModelStrategy, self).__init__()
112 self.model = model
113 self.arg_strategy = st.fixed_dictionaries(mappings)
114
115 def __repr__(self):
116 return u'ModelStrategy(%s)' % (self.model.__name__,)
117
118 def do_draw(self, data):
119 try:
120 result, _ = self.model.objects.get_or_create(
121 **self.arg_strategy.do_draw(data)
122 )
123 return result
124 except IntegrityError:
125 data.mark_invalid()
126
[end of src/hypothesis/extra/django/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/hypothesis/extra/django/models.py b/src/hypothesis/extra/django/models.py
--- a/src/hypothesis/extra/django/models.py
+++ b/src/hypothesis/extra/django/models.py
@@ -23,6 +23,7 @@
import hypothesis.extra.fakefactory as ff
from hypothesis.errors import InvalidArgument
from hypothesis.extra.datetime import datetimes
+from hypothesis.utils.conventions import UniqueIdentifier
from hypothesis.searchstrategy.strategies import SearchStrategy
@@ -72,6 +73,9 @@
field_mappings()[field_type] = strategy
+default_value = UniqueIdentifier(u'default_value')
+
+
def models(model, **extra):
result = {}
mappings = field_mappings()
@@ -96,12 +100,9 @@
u', '.join(missed),
model.__name__,
)))
- for k, v in extra.items():
- if isinstance(v, SearchStrategy):
- result[k] = v
- else:
- result[k] = st.just(v)
result.update(extra)
+ # Remove default_values so we don't try to generate anything for those.
+ result = {k: v for k, v in result.items() if v is not default_value}
return ModelStrategy(model, result)
| {"golden_diff": "diff --git a/src/hypothesis/extra/django/models.py b/src/hypothesis/extra/django/models.py\n--- a/src/hypothesis/extra/django/models.py\n+++ b/src/hypothesis/extra/django/models.py\n@@ -23,6 +23,7 @@\n import hypothesis.extra.fakefactory as ff\n from hypothesis.errors import InvalidArgument\n from hypothesis.extra.datetime import datetimes\n+from hypothesis.utils.conventions import UniqueIdentifier\n from hypothesis.searchstrategy.strategies import SearchStrategy\n \n \n@@ -72,6 +73,9 @@\n field_mappings()[field_type] = strategy\n \n \n+default_value = UniqueIdentifier(u'default_value')\n+\n+\n def models(model, **extra):\n result = {}\n mappings = field_mappings()\n@@ -96,12 +100,9 @@\n u', '.join(missed),\n model.__name__,\n )))\n- for k, v in extra.items():\n- if isinstance(v, SearchStrategy):\n- result[k] = v\n- else:\n- result[k] = st.just(v)\n result.update(extra)\n+ # Remove default_values so we don't try to generate anything for those.\n+ result = {k: v for k, v in result.items() if v is not default_value}\n return ModelStrategy(model, result)\n", "issue": "django_polymorphic breaks model generation\n`django_polymorphic` adds mandatory fields (named `*_ptr`) to models, but gives them values when the model is created. Hypothesis sees these as normal non-nullable fields, which trigger the relevant health check. However, explicitly providing a value for one of these fields causes an exception to be thrown in the model's constructor.\n\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)\n#\n# Most of this work is copyright (C) 2013-2015 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a\n# full list of people who may hold copyright, and consult the git log if you\n# need to determine who owns an individual contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport django.db.models as dm\nfrom django.db import IntegrityError\n\nimport hypothesis.strategies as st\nimport hypothesis.extra.fakefactory as ff\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.extra.datetime import datetimes\nfrom hypothesis.searchstrategy.strategies import SearchStrategy\n\n\nclass ModelNotSupported(Exception):\n pass\n\n\ndef referenced_models(model, seen=None):\n if seen is None:\n seen = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.ForeignKey):\n t = f.rel.to\n if t not in seen:\n seen.add(t)\n referenced_models(t, seen)\n return seen\n\n\n__default_field_mappings = None\n\n\ndef field_mappings():\n global __default_field_mappings\n\n if __default_field_mappings is None:\n __default_field_mappings = {\n dm.SmallIntegerField: st.integers(-32768, 32767),\n dm.IntegerField: st.integers(-2147483648, 2147483647),\n dm.BigIntegerField:\n st.integers(-9223372036854775808, 9223372036854775807),\n dm.PositiveIntegerField: st.integers(0, 2147483647),\n dm.PositiveSmallIntegerField: st.integers(0, 32767),\n dm.BinaryField: st.binary(),\n dm.BooleanField: st.booleans(),\n dm.CharField: st.text(),\n dm.TextField: st.text(),\n dm.DateTimeField: datetimes(allow_naive=False),\n dm.EmailField: ff.fake_factory(u'email'),\n dm.FloatField: st.floats(),\n dm.NullBooleanField: st.one_of(st.none(), st.booleans()),\n }\n return __default_field_mappings\n\n\ndef add_default_field_mapping(field_type, strategy):\n field_mappings()[field_type] = strategy\n\n\ndef models(model, **extra):\n result = {}\n mappings = field_mappings()\n mandatory = set()\n for f in model._meta.concrete_fields:\n if isinstance(f, dm.AutoField):\n continue\n try:\n mapped = mappings[type(f)]\n except KeyError:\n if not f.null:\n mandatory.add(f.name)\n continue\n if f.null:\n mapped = st.one_of(st.none(), mapped)\n result[f.name] = mapped\n missed = {x for x in mandatory if x not in extra}\n if missed:\n raise InvalidArgument((\n u'Missing arguments for mandatory field%s %s for model %s' % (\n u's' if len(missed) > 1 else u'',\n u', '.join(missed),\n model.__name__,\n )))\n for k, v in extra.items():\n if isinstance(v, SearchStrategy):\n result[k] = v\n else:\n result[k] = st.just(v)\n result.update(extra)\n return ModelStrategy(model, result)\n\n\nclass ModelStrategy(SearchStrategy):\n\n def __init__(self, model, mappings):\n super(ModelStrategy, self).__init__()\n self.model = model\n self.arg_strategy = st.fixed_dictionaries(mappings)\n\n def __repr__(self):\n return u'ModelStrategy(%s)' % (self.model.__name__,)\n\n def do_draw(self, data):\n try:\n result, _ = self.model.objects.get_or_create(\n **self.arg_strategy.do_draw(data)\n )\n return result\n except IntegrityError:\n data.mark_invalid()\n", "path": "src/hypothesis/extra/django/models.py"}]} | 1,872 | 286 |
gh_patches_debug_22783 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3992 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Increase WebSocket message timestamp precision
#### Is your feature request related to a problem? Please describe.
Timestamps for WebSocket messages are rounded to whole seconds. For at least some protocols or analyses, a higher precision may be useful or even necessary for productive use; one example is measuring the performance impact of mitmproxy.
#### Describe the solution you'd like
Avoid converting the timestamp to an integer and use a float instead:
https://github.com/mitmproxy/mitmproxy/blob/7fdcbb09e6034ab1f76724965cfdf45f3d775129/mitmproxy/websocket.py#L28
Basic support for higher-precision timestamps seems to be as simple as changing the type of `WebSocketMessage.timestamp` from `int` to `float` and dropping the `int()` call around `time.time()`. I'm not sure if more is needed e.g. to ensure backward compatibility, but I was able to read a previous dump just fine with that modification (which makes sense since integers in the relevant range are strictly a subset of double-precision floats).
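
For illustration, the proposal amounts to keeping the float that `time.time()` already returns instead of truncating it; a minimal sketch (a standalone stand-in for the constructor logic quoted above, not the actual mitmproxy code):

```python
import time
from typing import Optional

def message_timestamp(timestamp: Optional[float] = None) -> float:
    # mirrors `timestamp or time.time()`, but without the int() truncation
    return timestamp or time.time()

print(message_timestamp())        # e.g. 1681300000.123456 -- sub-second precision kept
print(message_timestamp(1234.5))  # explicit (e.g. previously serialized) values pass through
```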
#### Describe alternatives you've considered
If keeping the `timestamp` an `int` is unavoidable, the Linux approach of storing the microseconds in a second integer (between 0 and 999999, inclusive) could be used. I don't think this is a good idea though.
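
For completeness, a tiny sketch of that split (assuming a plain `time.time()` float as input):

```python
import time

t = time.time()
seconds, microseconds = divmod(int(round(t * 1_000_000)), 1_000_000)
print(seconds, microseconds)  # two integers, e.g. 1681300000 and 123456
```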
#### Additional context
None
</issue>
<code>
[start of mitmproxy/websocket.py]
1 import time
2 import queue
3 from typing import List, Optional
4
5 from wsproto.frame_protocol import CloseReason
6 from wsproto.frame_protocol import Opcode
7
8 from mitmproxy import flow
9 from mitmproxy.net import websockets
10 from mitmproxy.coretypes import serializable
11 from mitmproxy.utils import strutils, human
12
13
14 class WebSocketMessage(serializable.Serializable):
15 """
16 A WebSocket message sent from one endpoint to the other.
17 """
18
19 def __init__(
20 self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False
21 ) -> None:
22 self.type = Opcode(type) # type: ignore
23 """indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode)."""
24 self.from_client = from_client
25 """True if this messages was sent by the client."""
26 self.content = content
27 """A byte-string representing the content of this message."""
28 self.timestamp: int = timestamp or int(time.time())
29 """Timestamp of when this message was received or created."""
30 self.killed = killed
31 """True if this messages was killed and should not be sent to the other endpoint."""
32
33 @classmethod
34 def from_state(cls, state):
35 return cls(*state)
36
37 def get_state(self):
38 return int(self.type), self.from_client, self.content, self.timestamp, self.killed
39
40 def set_state(self, state):
41 self.type, self.from_client, self.content, self.timestamp, self.killed = state
42 self.type = Opcode(self.type) # replace enum with bare int
43
44 def __repr__(self):
45 if self.type == Opcode.TEXT:
46 return "text message: {}".format(repr(self.content))
47 else:
48 return "binary message: {}".format(strutils.bytes_to_escaped_str(self.content))
49
50 def kill(self):
51 """
52 Kill this message.
53
54 It will not be sent to the other endpoint. This has no effect in streaming mode.
55 """
56 self.killed = True
57
58
59 class WebSocketFlow(flow.Flow):
60 """
61 A WebSocketFlow is a simplified representation of a Websocket connection.
62 """
63
64 def __init__(self, client_conn, server_conn, handshake_flow, live=None):
65 super().__init__("websocket", client_conn, server_conn, live)
66
67 self.messages: List[WebSocketMessage] = []
68 """A list containing all WebSocketMessage's."""
69 self.close_sender = 'client'
70 """'client' if the client initiated connection closing."""
71 self.close_code = CloseReason.NORMAL_CLOSURE
72 """WebSocket close code."""
73 self.close_message = '(message missing)'
74 """WebSocket close message."""
75 self.close_reason = 'unknown status code'
76 """WebSocket close reason."""
77 self.stream = False
78 """True of this connection is streaming directly to the other endpoint."""
79 self.handshake_flow = handshake_flow
80 """The HTTP flow containing the initial WebSocket handshake."""
81 self.ended = False
82 """True when the WebSocket connection has been closed."""
83
84 self._inject_messages_client = queue.Queue(maxsize=1)
85 self._inject_messages_server = queue.Queue(maxsize=1)
86
87 if handshake_flow:
88 self.client_key = websockets.get_client_key(handshake_flow.request.headers)
89 self.client_protocol = websockets.get_protocol(handshake_flow.request.headers)
90 self.client_extensions = websockets.get_extensions(handshake_flow.request.headers)
91 self.server_accept = websockets.get_server_accept(handshake_flow.response.headers)
92 self.server_protocol = websockets.get_protocol(handshake_flow.response.headers)
93 self.server_extensions = websockets.get_extensions(handshake_flow.response.headers)
94 else:
95 self.client_key = ''
96 self.client_protocol = ''
97 self.client_extensions = ''
98 self.server_accept = ''
99 self.server_protocol = ''
100 self.server_extensions = ''
101
102 _stateobject_attributes = flow.Flow._stateobject_attributes.copy()
103 # mypy doesn't support update with kwargs
104 _stateobject_attributes.update(dict(
105 messages=List[WebSocketMessage],
106 close_sender=str,
107 close_code=int,
108 close_message=str,
109 close_reason=str,
110 client_key=str,
111 client_protocol=str,
112 client_extensions=str,
113 server_accept=str,
114 server_protocol=str,
115 server_extensions=str,
116 # Do not include handshake_flow, to prevent recursive serialization!
117 # Since mitmproxy-console currently only displays HTTPFlows,
118 # dumping the handshake_flow will include the WebSocketFlow too.
119 ))
120
121 def get_state(self):
122 d = super().get_state()
123 d['close_code'] = int(d['close_code']) # replace enum with bare int
124 return d
125
126 @classmethod
127 def from_state(cls, state):
128 f = cls(None, None, None)
129 f.set_state(state)
130 return f
131
132 def __repr__(self):
133 return "<WebSocketFlow ({} messages)>".format(len(self.messages))
134
135 def message_info(self, message: WebSocketMessage) -> str:
136 return "{client} {direction} WebSocket {type} message {direction} {server}{endpoint}".format(
137 type=message.type,
138 client=human.format_address(self.client_conn.address),
139 server=human.format_address(self.server_conn.address),
140 direction="->" if message.from_client else "<-",
141 endpoint=self.handshake_flow.request.path,
142 )
143
144 def inject_message(self, endpoint, payload):
145 """
146 Inject and send a full WebSocket message to the remote endpoint.
147 This might corrupt your WebSocket connection! Be careful!
148
149 The endpoint needs to be either flow.client_conn or flow.server_conn.
150
151 If ``payload`` is of type ``bytes`` then the message is flagged as
152 being binary If it is of type ``str`` encoded as UTF-8 and sent as
153 text.
154
155 :param payload: The message body to send.
156 :type payload: ``bytes`` or ``str``
157 """
158
159 if endpoint == self.client_conn:
160 self._inject_messages_client.put(payload)
161 elif endpoint == self.server_conn:
162 self._inject_messages_server.put(payload)
163 else:
164 raise ValueError('Invalid endpoint')
165
[end of mitmproxy/websocket.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/websocket.py b/mitmproxy/websocket.py
--- a/mitmproxy/websocket.py
+++ b/mitmproxy/websocket.py
@@ -17,7 +17,7 @@
"""
def __init__(
- self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False
+ self, type: int, from_client: bool, content: bytes, timestamp: Optional[float]=None, killed: bool=False
) -> None:
self.type = Opcode(type) # type: ignore
"""indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode)."""
@@ -25,7 +25,7 @@
"""True if this messages was sent by the client."""
self.content = content
"""A byte-string representing the content of this message."""
- self.timestamp: int = timestamp or int(time.time())
+ self.timestamp: float = timestamp or time.time()
"""Timestamp of when this message was received or created."""
self.killed = killed
"""True if this messages was killed and should not be sent to the other endpoint."""
| {"golden_diff": "diff --git a/mitmproxy/websocket.py b/mitmproxy/websocket.py\n--- a/mitmproxy/websocket.py\n+++ b/mitmproxy/websocket.py\n@@ -17,7 +17,7 @@\n \"\"\"\n \n def __init__(\n- self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False\n+ self, type: int, from_client: bool, content: bytes, timestamp: Optional[float]=None, killed: bool=False\n ) -> None:\n self.type = Opcode(type) # type: ignore\n \"\"\"indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode).\"\"\"\n@@ -25,7 +25,7 @@\n \"\"\"True if this messages was sent by the client.\"\"\"\n self.content = content\n \"\"\"A byte-string representing the content of this message.\"\"\"\n- self.timestamp: int = timestamp or int(time.time())\n+ self.timestamp: float = timestamp or time.time()\n \"\"\"Timestamp of when this message was received or created.\"\"\"\n self.killed = killed\n \"\"\"True if this messages was killed and should not be sent to the other endpoint.\"\"\"\n", "issue": "Increase WebSocket message timestamp precision\n#### Is your feature request related to a problem? Please describe.\r\nTimestamps for WebSocket messages are rounded to whole seconds. For at least some protocols or analyses, a higher precision may be useful or even necessary for productive use; one example is measuring the performance impact of mitmproxy.\r\n\r\n#### Describe the solution you'd like\r\nAvoid converting the timestamp to an integer and use a float instead:\r\n\r\nhttps://github.com/mitmproxy/mitmproxy/blob/7fdcbb09e6034ab1f76724965cfdf45f3d775129/mitmproxy/websocket.py#L28\r\n\r\nBasic support for higher-precision timestamps seems to be as simple as changing the type of `WebSocketMessage.timestamp` from `int` to `float` and dropping the `int()` call around `time.time()`. I'm not sure if more is needed e.g. to ensure backward compatibility, but I was able to read a previous dump just fine with that modification (which makes sense since integers in the relevant range are strictly a subset of double-precision floats).\r\n\r\n#### Describe alternatives you've considered\r\nIf keeping the `timestamp` an `int` is unavoidable, the Linux approach of storing the microseconds in a second integer (between 0 and 999999, inclusive) could be used. 
I don't think this is a good idea though.\r\n\r\n#### Additional context\r\nNone\n", "before_files": [{"content": "import time\nimport queue\nfrom typing import List, Optional\n\nfrom wsproto.frame_protocol import CloseReason\nfrom wsproto.frame_protocol import Opcode\n\nfrom mitmproxy import flow\nfrom mitmproxy.net import websockets\nfrom mitmproxy.coretypes import serializable\nfrom mitmproxy.utils import strutils, human\n\n\nclass WebSocketMessage(serializable.Serializable):\n \"\"\"\n A WebSocket message sent from one endpoint to the other.\n \"\"\"\n\n def __init__(\n self, type: int, from_client: bool, content: bytes, timestamp: Optional[int]=None, killed: bool=False\n ) -> None:\n self.type = Opcode(type) # type: ignore\n \"\"\"indicates either TEXT or BINARY (from wsproto.frame_protocol.Opcode).\"\"\"\n self.from_client = from_client\n \"\"\"True if this messages was sent by the client.\"\"\"\n self.content = content\n \"\"\"A byte-string representing the content of this message.\"\"\"\n self.timestamp: int = timestamp or int(time.time())\n \"\"\"Timestamp of when this message was received or created.\"\"\"\n self.killed = killed\n \"\"\"True if this messages was killed and should not be sent to the other endpoint.\"\"\"\n\n @classmethod\n def from_state(cls, state):\n return cls(*state)\n\n def get_state(self):\n return int(self.type), self.from_client, self.content, self.timestamp, self.killed\n\n def set_state(self, state):\n self.type, self.from_client, self.content, self.timestamp, self.killed = state\n self.type = Opcode(self.type) # replace enum with bare int\n\n def __repr__(self):\n if self.type == Opcode.TEXT:\n return \"text message: {}\".format(repr(self.content))\n else:\n return \"binary message: {}\".format(strutils.bytes_to_escaped_str(self.content))\n\n def kill(self):\n \"\"\"\n Kill this message.\n\n It will not be sent to the other endpoint. 
This has no effect in streaming mode.\n \"\"\"\n self.killed = True\n\n\nclass WebSocketFlow(flow.Flow):\n \"\"\"\n A WebSocketFlow is a simplified representation of a Websocket connection.\n \"\"\"\n\n def __init__(self, client_conn, server_conn, handshake_flow, live=None):\n super().__init__(\"websocket\", client_conn, server_conn, live)\n\n self.messages: List[WebSocketMessage] = []\n \"\"\"A list containing all WebSocketMessage's.\"\"\"\n self.close_sender = 'client'\n \"\"\"'client' if the client initiated connection closing.\"\"\"\n self.close_code = CloseReason.NORMAL_CLOSURE\n \"\"\"WebSocket close code.\"\"\"\n self.close_message = '(message missing)'\n \"\"\"WebSocket close message.\"\"\"\n self.close_reason = 'unknown status code'\n \"\"\"WebSocket close reason.\"\"\"\n self.stream = False\n \"\"\"True of this connection is streaming directly to the other endpoint.\"\"\"\n self.handshake_flow = handshake_flow\n \"\"\"The HTTP flow containing the initial WebSocket handshake.\"\"\"\n self.ended = False\n \"\"\"True when the WebSocket connection has been closed.\"\"\"\n\n self._inject_messages_client = queue.Queue(maxsize=1)\n self._inject_messages_server = queue.Queue(maxsize=1)\n\n if handshake_flow:\n self.client_key = websockets.get_client_key(handshake_flow.request.headers)\n self.client_protocol = websockets.get_protocol(handshake_flow.request.headers)\n self.client_extensions = websockets.get_extensions(handshake_flow.request.headers)\n self.server_accept = websockets.get_server_accept(handshake_flow.response.headers)\n self.server_protocol = websockets.get_protocol(handshake_flow.response.headers)\n self.server_extensions = websockets.get_extensions(handshake_flow.response.headers)\n else:\n self.client_key = ''\n self.client_protocol = ''\n self.client_extensions = ''\n self.server_accept = ''\n self.server_protocol = ''\n self.server_extensions = ''\n\n _stateobject_attributes = flow.Flow._stateobject_attributes.copy()\n # mypy doesn't support update with kwargs\n _stateobject_attributes.update(dict(\n messages=List[WebSocketMessage],\n close_sender=str,\n close_code=int,\n close_message=str,\n close_reason=str,\n client_key=str,\n client_protocol=str,\n client_extensions=str,\n server_accept=str,\n server_protocol=str,\n server_extensions=str,\n # Do not include handshake_flow, to prevent recursive serialization!\n # Since mitmproxy-console currently only displays HTTPFlows,\n # dumping the handshake_flow will include the WebSocketFlow too.\n ))\n\n def get_state(self):\n d = super().get_state()\n d['close_code'] = int(d['close_code']) # replace enum with bare int\n return d\n\n @classmethod\n def from_state(cls, state):\n f = cls(None, None, None)\n f.set_state(state)\n return f\n\n def __repr__(self):\n return \"<WebSocketFlow ({} messages)>\".format(len(self.messages))\n\n def message_info(self, message: WebSocketMessage) -> str:\n return \"{client} {direction} WebSocket {type} message {direction} {server}{endpoint}\".format(\n type=message.type,\n client=human.format_address(self.client_conn.address),\n server=human.format_address(self.server_conn.address),\n direction=\"->\" if message.from_client else \"<-\",\n endpoint=self.handshake_flow.request.path,\n )\n\n def inject_message(self, endpoint, payload):\n \"\"\"\n Inject and send a full WebSocket message to the remote endpoint.\n This might corrupt your WebSocket connection! 
Be careful!\n\n The endpoint needs to be either flow.client_conn or flow.server_conn.\n\n If ``payload`` is of type ``bytes`` then the message is flagged as\n being binary If it is of type ``str`` encoded as UTF-8 and sent as\n text.\n\n :param payload: The message body to send.\n :type payload: ``bytes`` or ``str``\n \"\"\"\n\n if endpoint == self.client_conn:\n self._inject_messages_client.put(payload)\n elif endpoint == self.server_conn:\n self._inject_messages_server.put(payload)\n else:\n raise ValueError('Invalid endpoint')\n", "path": "mitmproxy/websocket.py"}]} | 2,522 | 255 |
gh_patches_debug_1180 | rasdani/github-patches | git_diff | encode__httpx-1054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Type-checking our tests
I know this is not a standard thing to do across Encode projects, but I've been wondering if it would be worth starting to type-hint our tests.
I've seen at least two instances of this recently:
- In HTTPX: https://github.com/encode/httpx/pull/648#discussion_r359862603
- In Starlette: https://github.com/encode/starlette/issues/722
My rationale is based on two aspects:
- It improves our upfront knowledge about how users will actually use HTTPX — currently their usage of type hints in the wild is not reflected anywhere.
- It helps us catch type hint inconsistencies we wouldn't see in the core package.
The main counter-argument, I suppose, is that type hinting tests is tedious. I think that's fair, but I believe the two pro's above make it compelling.
Thoughts?
</issue>
<code>
[start of httpx/_types.py]
1 """
2 Type definitions for type checking purposes.
3 """
4
5 import ssl
6 from http.cookiejar import CookieJar
7 from typing import (
8 IO,
9 TYPE_CHECKING,
10 AsyncIterator,
11 Callable,
12 Dict,
13 Iterator,
14 List,
15 Mapping,
16 Optional,
17 Sequence,
18 Tuple,
19 Union,
20 )
21
22 if TYPE_CHECKING: # pragma: no cover
23 from ._auth import Auth # noqa: F401
24 from ._config import Proxy, Timeout # noqa: F401
25 from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401
26
27
28 PrimitiveData = Optional[Union[str, int, float, bool]]
29
30 URLTypes = Union["URL", str]
31
32 QueryParamTypes = Union[
33 "QueryParams",
34 Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],
35 List[Tuple[str, PrimitiveData]],
36 str,
37 ]
38
39 HeaderTypes = Union[
40 "Headers",
41 Dict[str, str],
42 Dict[bytes, bytes],
43 Sequence[Tuple[str, str]],
44 Sequence[Tuple[bytes, bytes]],
45 ]
46
47 CookieTypes = Union["Cookies", CookieJar, Dict[str, str]]
48
49 CertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]
50 VerifyTypes = Union[str, bool, ssl.SSLContext]
51 TimeoutTypes = Union[
52 Optional[float],
53 Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
54 "Timeout",
55 ]
56 ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[URLTypes, "Proxy"]]]
57
58 AuthTypes = Union[
59 Tuple[Union[str, bytes], Union[str, bytes]],
60 Callable[["Request"], "Request"],
61 "Auth",
62 ]
63
64 RequestData = Union[dict, str, bytes, Iterator[bytes], AsyncIterator[bytes]]
65
66 FileContent = Union[IO[str], IO[bytes], str, bytes]
67 FileTypes = Union[
68 # file (or text)
69 FileContent,
70 # (filename, file (or text))
71 Tuple[Optional[str], FileContent],
72 # (filename, file (or text), content_type)
73 Tuple[Optional[str], FileContent, Optional[str]],
74 ]
75 RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]
76
[end of httpx/_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/_types.py b/httpx/_types.py
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -72,4 +72,4 @@
# (filename, file (or text), content_type)
Tuple[Optional[str], FileContent, Optional[str]],
]
-RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]
+RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
| {"golden_diff": "diff --git a/httpx/_types.py b/httpx/_types.py\n--- a/httpx/_types.py\n+++ b/httpx/_types.py\n@@ -72,4 +72,4 @@\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n ]\n-RequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]\n+RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n", "issue": "Type-checking our tests\nI know this is not a standard thing to do across Encode projects, but I've been wondering if it would be worth starting to type-hint our tests.\r\n\r\nI've seen at least two instances of this recently:\r\n\r\n- In HTTPX: https://github.com/encode/httpx/pull/648#discussion_r359862603\r\n- In Starlette: https://github.com/encode/starlette/issues/722\r\n\r\nMy rationale is based on two aspects:\r\n\r\n- It improves our upfront knowledge about how users will actually use HTTPX \u2014 currently their usage of type hints in the wild is not reflected anywhere.\r\n- It helps us catch type hint inconsistencies we wouldn't see in the core package.\r\n\r\nThe main counter-argument, I suppose, is that type hinting tests is tedious. I think that's fair, but I believe the two pro's above make it compelling.\r\n\r\nThoughts?\n", "before_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n AsyncIterator,\n Callable,\n Dict,\n Iterator,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n str,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Dict[str, str],\n Dict[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str]]\n\nCertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestData = Union[dict, str, bytes, Iterator[bytes], AsyncIterator[bytes]]\n\nFileContent = Union[IO[str], IO[bytes], str, bytes]\nFileTypes = Union[\n # file (or text)\n FileContent,\n # (filename, file (or text))\n Tuple[Optional[str], FileContent],\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], List[Tuple[str, FileTypes]]]\n", "path": "httpx/_types.py"}]} | 1,367 | 112 |
gh_patches_debug_12379 | rasdani/github-patches | git_diff | buildbot__buildbot-189 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Added FilterOut class from deleted file master/buildbot/status/web/st…
…atus_json.py
This class was removed in c3e1aaede2fc02507bccb548dd26e694bf32ba6a but still in use in StatusPush
</issue>
<code>
[start of master/buildbot/libvirtbuildslave.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Portions Copyright Buildbot Team Members
15 # Portions Copyright 2010 Isotoma Limited
16
17 import os
18
19 from twisted.internet import defer, utils, reactor, threads
20 from twisted.python import log
21 from buildbot.buildslave import AbstractBuildSlave, AbstractLatentBuildSlave
22
23 import libvirt
24
25
26 class WorkQueue(object):
27 """
28 I am a class that turns parallel access into serial access.
29
30 I exist because we want to run libvirt access in threads as we don't
31     trust calls not to block, but under load libvirt doesn't seem to like
32 this kind of threaded use.
33 """
34
35 def __init__(self):
36 self.queue = []
37
38 def _process(self):
39 log.msg("Looking to start a piece of work now...")
40
41 # Is there anything to do?
42 if not self.queue:
43 log.msg("_process called when there is no work")
44 return
45
46 # Peek at the top of the stack - get a function to call and
47 # a deferred to fire when its all over
48 d, next_operation, args, kwargs = self.queue[0]
49
50 # Start doing some work - expects a deferred
51 try:
52 d2 = next_operation(*args, **kwargs)
53 except:
54 d2 = defer.fail()
55
56 # Whenever a piece of work is done, whether it worked or not
57 # call this to schedule the next piece of work
58 def _work_done(res):
59 log.msg("Completed a piece of work")
60 self.queue.pop(0)
61 if self.queue:
62 log.msg("Preparing next piece of work")
63 reactor.callLater(0, self._process)
64 return res
65 d2.addBoth(_work_done)
66
67 # When the work is done, trigger d
68 d2.chainDeferred(d)
69
70 def execute(self, cb, *args, **kwargs):
71 kickstart_processing = not self.queue
72 d = defer.Deferred()
73 self.queue.append((d, cb, args, kwargs))
74 if kickstart_processing:
75 self._process()
76 return d
77
78 def executeInThread(self, cb, *args, **kwargs):
79 return self.execute(threads.deferToThread, cb, *args, **kwargs)
80
81
82 # A module is effectively a singleton class, so this is OK
83 queue = WorkQueue()
84
85
86 class Domain(object):
87
88 """
89 I am a wrapper around a libvirt Domain object
90 """
91
92 def __init__(self, connection, domain):
93 self.connection = connection
94 self.domain = domain
95
96 def create(self):
97 return queue.executeInThread(self.domain.create)
98
99 def shutdown(self):
100 return queue.executeInThread(self.domain.shutdown)
101
102 def destroy(self):
103 return queue.executeInThread(self.domain.destroy)
104
105
106 class Connection(object):
107
108 """
109 I am a wrapper around a libvirt Connection object.
110 """
111
112 def __init__(self, uri):
113 self.uri = uri
114 self.connection = libvirt.open(uri)
115
116 def lookupByName(self, name):
117         """ I look up an existing predefined domain """
118 d = queue.executeInThread(self.connection.lookupByName, name)
119 def _(res):
120 return Domain(self, res)
121 d.addCallback(_)
122 return d
123
124 def create(self, xml):
125 """ I take libvirt XML and start a new VM """
126 d = queue.executeInThread(self.connection.createXML, xml, 0)
127 def _(res):
128 return Domain(self, res)
129 d.addCallback(_)
130 return d
131
132
133 class LibVirtSlave(AbstractLatentBuildSlave):
134
135 def __init__(self, name, password, connection, hd_image, base_image = None, xml=None, max_builds=None, notify_on_missing=[],
136 missing_timeout=60*20, build_wait_timeout=60*10, properties={}, locks=None):
137 AbstractLatentBuildSlave.__init__(self, name, password, max_builds, notify_on_missing,
138 missing_timeout, build_wait_timeout, properties, locks)
139 self.name = name
140 self.connection = connection
141 self.image = hd_image
142 self.base_image = base_image
143 self.xml = xml
144
145 self.insubstantiate_after_build = True
146 self.cheap_copy = True
147 self.graceful_shutdown = False
148
149 self.domain = None
150
151 def _prepare_base_image(self):
152 """
153 I am a private method for creating (possibly cheap) copies of a
154 base_image for start_instance to boot.
155 """
156 if not self.base_image:
157 return defer.succeed(True)
158
159 if self.cheap_copy:
160 clone_cmd = "qemu-img"
161 clone_args = "create -b %(base)s -f qcow2 %(image)s"
162 else:
163 clone_cmd = "cp"
164 clone_args = "%(base)s %(image)s"
165
166 clone_args = clone_args % {
167 "base": self.base_image,
168 "image": self.image,
169 }
170
171 log.msg("Cloning base image: %s %s'" % (clone_cmd, clone_args))
172
173 def _log_result(res):
174 log.msg("Cloning exit code was: %d" % res)
175 return res
176
177 d = utils.getProcessValue(clone_cmd, clone_args.split())
178 d.addBoth(_log_result)
179 return d
180
181 def start_instance(self, build):
182 """
183 I start a new instance of a VM.
184
185         If a base_image is specified, I will make a clone of that; otherwise I will
186         use the image directly.
187 
188         If I'm not given a libvirt domain definition XML, I will look for my name
189         in the list of defined virtual machines and start that.
190 """
191 if self.domain is not None:
192 raise ValueError('domain active')
193
194 d = self._prepare_base_image()
195
196 def _start(res):
197 if self.xml:
198 return self.connection.create(self.xml)
199 d = self.connection.lookupByName(self.name)
200 def _really_start(res):
201 return res.create()
202 d.addCallback(_really_start)
203 return d
204 d.addCallback(_start)
205
206 def _started(res):
207 self.domain = res
208 return True
209 d.addCallback(_started)
210
211 def _start_failed(failure):
212 log.msg("Cannot start a VM (%s), failing gracefully and triggering a new build check" % self.name)
213 log.err(failure)
214 self.domain = None
215 return False
216 d.addErrback(_start_failed)
217
218 return d
219
220 def stop_instance(self, fast=False):
221 """
222 I attempt to stop a running VM.
223 I make sure any connection to the slave is removed.
224 If the VM was using a cloned image, I remove the clone
225 When everything is tidied up, I ask that bbot looks for work to do
226 """
227 log.msg("Attempting to stop '%s'" % self.name)
228 if self.domain is None:
229             log.msg("I don't think that domain is even running, aborting")
230 return defer.succeed(None)
231
232 domain = self.domain
233 self.domain = None
234
235 if self.graceful_shutdown and not fast:
236 log.msg("Graceful shutdown chosen for %s" % self.name)
237 d = domain.shutdown()
238 else:
239 d = domain.destroy()
240
241 def _disconnect(res):
242 log.msg("VM destroyed (%s): Forcing its connection closed." % self.name)
243 return AbstractBuildSlave.disconnect(self)
244 d.addCallback(_disconnect)
245
246 def _disconnected(res):
247 log.msg("We forced disconnection (%s), cleaning up and triggering new build" % self.name)
248 if self.base_image:
249 os.remove(self.image)
250 self.botmaster.maybeStartBuildsForSlave(self.name)
251 return res
252 d.addBoth(_disconnected)
253
254 return d
255
256 def buildFinished(self, *args, **kwargs):
257 """
258 I insubstantiate a slave after it has done a build, if that is
259 desired behaviour.
260 """
261 AbstractLatentBuildSlave.buildFinished(self, *args, **kwargs)
262 if self.insubstantiate_after_build:
263 log.msg("Got buildFinished notification - attempting to insubstantiate")
264 self.insubstantiate()
265
266
267
[end of master/buildbot/libvirtbuildslave.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/libvirtbuildslave.py b/master/buildbot/libvirtbuildslave.py
--- a/master/buildbot/libvirtbuildslave.py
+++ b/master/buildbot/libvirtbuildslave.py
@@ -198,13 +198,13 @@
return self.connection.create(self.xml)
d = self.connection.lookupByName(self.name)
def _really_start(res):
- return res.create()
+ self.domain = res
+ return self.domain.create()
d.addCallback(_really_start)
return d
d.addCallback(_start)
def _started(res):
- self.domain = res
return True
d.addCallback(_started)
| {"golden_diff": "diff --git a/master/buildbot/libvirtbuildslave.py b/master/buildbot/libvirtbuildslave.py\n--- a/master/buildbot/libvirtbuildslave.py\n+++ b/master/buildbot/libvirtbuildslave.py\n@@ -198,13 +198,13 @@\n return self.connection.create(self.xml)\n d = self.connection.lookupByName(self.name)\n def _really_start(res):\n- return res.create()\n+ self.domain = res\n+ return self.domain.create()\n d.addCallback(_really_start)\n return d\n d.addCallback(_start)\n \n def _started(res):\n- self.domain = res\n return True\n d.addCallback(_started)\n", "issue": "Added FilterOut class from deleted file master/buildbot/status/web/st\u2026\n\u2026atus_json.py\n\nThis class was removed in c3e1aaede2fc02507bccb548dd26e694bf32ba6a but still in use in StatusPush\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Portions Copyright Buildbot Team Members\n# Portions Copyright 2010 Isotoma Limited\n\nimport os\n\nfrom twisted.internet import defer, utils, reactor, threads\nfrom twisted.python import log\nfrom buildbot.buildslave import AbstractBuildSlave, AbstractLatentBuildSlave\n\nimport libvirt\n\n\nclass WorkQueue(object):\n \"\"\"\n I am a class that turns parallel access into serial access.\n\n I exist because we want to run libvirt access in threads as we don't\n trust calls not to block, but under load libvirt doesnt seem to like\n this kind of threaded use.\n \"\"\"\n\n def __init__(self):\n self.queue = []\n\n def _process(self):\n log.msg(\"Looking to start a piece of work now...\")\n\n # Is there anything to do?\n if not self.queue:\n log.msg(\"_process called when there is no work\")\n return\n\n # Peek at the top of the stack - get a function to call and\n # a deferred to fire when its all over\n d, next_operation, args, kwargs = self.queue[0]\n\n # Start doing some work - expects a deferred\n try:\n d2 = next_operation(*args, **kwargs)\n except:\n d2 = defer.fail()\n\n # Whenever a piece of work is done, whether it worked or not \n # call this to schedule the next piece of work\n def _work_done(res):\n log.msg(\"Completed a piece of work\")\n self.queue.pop(0)\n if self.queue:\n log.msg(\"Preparing next piece of work\")\n reactor.callLater(0, self._process)\n return res\n d2.addBoth(_work_done)\n\n # When the work is done, trigger d\n d2.chainDeferred(d)\n\n def execute(self, cb, *args, **kwargs):\n kickstart_processing = not self.queue\n d = defer.Deferred()\n self.queue.append((d, cb, args, kwargs))\n if kickstart_processing:\n self._process()\n return d\n\n def executeInThread(self, cb, *args, **kwargs):\n return self.execute(threads.deferToThread, cb, *args, **kwargs)\n\n\n# A module is effectively a singleton class, so this is OK\nqueue = WorkQueue()\n\n\nclass Domain(object):\n\n \"\"\"\n I am a wrapper around a libvirt Domain object\n \"\"\"\n\n def __init__(self, connection, domain):\n self.connection = connection\n 
self.domain = domain\n\n def create(self):\n return queue.executeInThread(self.domain.create)\n\n def shutdown(self):\n return queue.executeInThread(self.domain.shutdown)\n\n def destroy(self):\n return queue.executeInThread(self.domain.destroy)\n\n\nclass Connection(object):\n\n \"\"\"\n I am a wrapper around a libvirt Connection object.\n \"\"\"\n\n def __init__(self, uri):\n self.uri = uri\n self.connection = libvirt.open(uri)\n\n def lookupByName(self, name):\n \"\"\" I lookup an existing prefined domain \"\"\"\n d = queue.executeInThread(self.connection.lookupByName, name)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n def create(self, xml):\n \"\"\" I take libvirt XML and start a new VM \"\"\"\n d = queue.executeInThread(self.connection.createXML, xml, 0)\n def _(res):\n return Domain(self, res)\n d.addCallback(_)\n return d\n\n\nclass LibVirtSlave(AbstractLatentBuildSlave):\n\n def __init__(self, name, password, connection, hd_image, base_image = None, xml=None, max_builds=None, notify_on_missing=[],\n missing_timeout=60*20, build_wait_timeout=60*10, properties={}, locks=None):\n AbstractLatentBuildSlave.__init__(self, name, password, max_builds, notify_on_missing,\n missing_timeout, build_wait_timeout, properties, locks)\n self.name = name\n self.connection = connection\n self.image = hd_image\n self.base_image = base_image\n self.xml = xml\n\n self.insubstantiate_after_build = True\n self.cheap_copy = True\n self.graceful_shutdown = False\n\n self.domain = None\n\n def _prepare_base_image(self):\n \"\"\"\n I am a private method for creating (possibly cheap) copies of a\n base_image for start_instance to boot.\n \"\"\"\n if not self.base_image:\n return defer.succeed(True)\n\n if self.cheap_copy:\n clone_cmd = \"qemu-img\"\n clone_args = \"create -b %(base)s -f qcow2 %(image)s\"\n else:\n clone_cmd = \"cp\"\n clone_args = \"%(base)s %(image)s\"\n\n clone_args = clone_args % {\n \"base\": self.base_image,\n \"image\": self.image,\n }\n\n log.msg(\"Cloning base image: %s %s'\" % (clone_cmd, clone_args))\n\n def _log_result(res):\n log.msg(\"Cloning exit code was: %d\" % res)\n return res\n\n d = utils.getProcessValue(clone_cmd, clone_args.split())\n d.addBoth(_log_result)\n return d\n\n def start_instance(self, build):\n \"\"\"\n I start a new instance of a VM.\n\n If a base_image is specified, I will make a clone of that otherwise i will\n use image directly.\n\n If i'm not given libvirt domain definition XML, I will look for my name\n in the list of defined virtual machines and start that.\n \"\"\"\n if self.domain is not None:\n raise ValueError('domain active')\n\n d = self._prepare_base_image()\n\n def _start(res):\n if self.xml:\n return self.connection.create(self.xml)\n d = self.connection.lookupByName(self.name)\n def _really_start(res):\n return res.create()\n d.addCallback(_really_start)\n return d\n d.addCallback(_start)\n\n def _started(res):\n self.domain = res\n return True\n d.addCallback(_started)\n\n def _start_failed(failure):\n log.msg(\"Cannot start a VM (%s), failing gracefully and triggering a new build check\" % self.name)\n log.err(failure)\n self.domain = None\n return False\n d.addErrback(_start_failed)\n\n return d\n\n def stop_instance(self, fast=False):\n \"\"\"\n I attempt to stop a running VM.\n I make sure any connection to the slave is removed.\n If the VM was using a cloned image, I remove the clone\n When everything is tidied up, I ask that bbot looks for work to do\n \"\"\"\n log.msg(\"Attempting to stop '%s'\" % 
self.name)\n if self.domain is None:\n log.msg(\"I don't think that domain is evening running, aborting\")\n return defer.succeed(None)\n\n domain = self.domain\n self.domain = None\n\n if self.graceful_shutdown and not fast:\n log.msg(\"Graceful shutdown chosen for %s\" % self.name)\n d = domain.shutdown()\n else:\n d = domain.destroy()\n\n def _disconnect(res):\n log.msg(\"VM destroyed (%s): Forcing its connection closed.\" % self.name)\n return AbstractBuildSlave.disconnect(self)\n d.addCallback(_disconnect)\n\n def _disconnected(res):\n log.msg(\"We forced disconnection (%s), cleaning up and triggering new build\" % self.name)\n if self.base_image:\n os.remove(self.image)\n self.botmaster.maybeStartBuildsForSlave(self.name)\n return res\n d.addBoth(_disconnected)\n\n return d\n\n def buildFinished(self, *args, **kwargs):\n \"\"\"\n I insubstantiate a slave after it has done a build, if that is\n desired behaviour.\n \"\"\"\n AbstractLatentBuildSlave.buildFinished(self, *args, **kwargs)\n if self.insubstantiate_after_build:\n log.msg(\"Got buildFinished notification - attempting to insubstantiate\")\n self.insubstantiate()\n\n\n", "path": "master/buildbot/libvirtbuildslave.py"}]} | 3,231 | 149 |
gh_patches_debug_40075 | rasdani/github-patches | git_diff | edgedb__edgedb-5864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change how globals are passed in GraphQL
Currently global variables are passed as their own special `globals` field. This is a non-standard field and it gets in the way of existing frameworks and libraries.
An alternative to that would be to use the `variables` field that's part of the GraphQL standard and add `__globals__` to that.
</issue>
<code>
[start of edb/testbase/http.py]
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2019-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 from __future__ import annotations
21 from typing import *
22
23 import json
24 import urllib.parse
25 import urllib.request
26
27 import edgedb
28
29 from edb.errors import base as base_errors
30
31 from edb.common import assert_data_shape
32
33 from . import server
34
35
36 bag = assert_data_shape.bag
37
38
39 class BaseHttpExtensionTest(server.QueryTestCase):
40 EXTENSION_SETUP: List[str] = []
41
42 @classmethod
43 def get_extension_name(cls):
44 raise NotImplementedError
45
46 @classmethod
47 def get_extension_path(cls):
48 return cls.get_extension_name()
49
50 @classmethod
51 def get_api_prefix(cls):
52 extpath = cls.get_extension_path()
53 dbname = cls.get_database_name()
54 return f'/db/{dbname}/{extpath}'
55
56 @classmethod
57 def get_setup_script(cls):
58 script = super().get_setup_script()
59
60 extname = cls.get_extension_name()
61 script += f'\nCREATE EXTENSION pgcrypto;\n'
62 script += f'\nCREATE EXTENSION {extname};\n'
63 script += "\n".join(cls.EXTENSION_SETUP)
64 return script
65
66 @classmethod
67 def tearDownClass(cls):
68 extname = cls.get_extension_name()
69 cls.loop.run_until_complete(
70 cls.con.execute(f'DROP EXTENSION {extname};')
71 )
72 super().tearDownClass()
73
74
75 class ExtAuthTestCase(BaseHttpExtensionTest):
76
77 @classmethod
78 def get_extension_name(cls):
79 return 'auth'
80
81 @classmethod
82 def get_extension_path(cls):
83 return 'ext/auth'
84
85
86 class EdgeQLTestCase(BaseHttpExtensionTest):
87
88 @classmethod
89 def get_extension_name(cls):
90 return 'edgeql_http'
91
92 @classmethod
93 def get_extension_path(cls):
94 return 'edgeql'
95
96 def edgeql_query(
97 self, query, *, use_http_post=True, variables=None, globals=None):
98 req_data = {
99 'query': query
100 }
101
102 if use_http_post:
103 if variables is not None:
104 req_data['variables'] = variables
105 if globals is not None:
106 req_data['globals'] = globals
107 req = urllib.request.Request(self.http_addr, method='POST')
108 req.add_header('Content-Type', 'application/json')
109 response = urllib.request.urlopen(
110 req, json.dumps(req_data).encode(), context=self.tls_context
111 )
112 resp_data = json.loads(response.read())
113 else:
114 if variables is not None:
115 req_data['variables'] = json.dumps(variables)
116 if globals is not None:
117 req_data['globals'] = json.dumps(globals)
118 response = urllib.request.urlopen(
119 f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
120 context=self.tls_context,
121 )
122 resp_data = json.loads(response.read())
123
124 if 'data' in resp_data:
125 return resp_data['data']
126
127 err = resp_data['error']
128
129 ex_msg = err['message'].strip()
130 ex_code = err['code']
131
132 raise edgedb.EdgeDBError._from_code(ex_code, ex_msg)
133
134 def assert_edgeql_query_result(self, query, result, *,
135 msg=None, sort=None,
136 use_http_post=True,
137 variables=None,
138 globals=None):
139 res = self.edgeql_query(
140 query,
141 use_http_post=use_http_post,
142 variables=variables,
143 globals=globals)
144
145 if sort is not None:
146 # GQL will always have a single object returned. The data is
147 # in the top-level fields, so that's what needs to be sorted.
148 for r in res.values():
149 assert_data_shape.sort_results(r, sort)
150
151 assert_data_shape.assert_data_shape(
152 res, result, self.fail, message=msg)
153 return res
154
155
156 class GraphQLTestCase(BaseHttpExtensionTest):
157
158 @classmethod
159 def get_extension_name(cls):
160 return 'graphql'
161
162 def graphql_query(self, query, *, operation_name=None,
163 use_http_post=True,
164 variables=None,
165 globals=None):
166 req_data = {
167 'query': query
168 }
169
170 if operation_name is not None:
171 req_data['operationName'] = operation_name
172
173 if use_http_post:
174 if variables is not None:
175 req_data['variables'] = variables
176 if globals is not None:
177 req_data['globals'] = globals
178 req = urllib.request.Request(self.http_addr, method='POST')
179 req.add_header('Content-Type', 'application/json')
180 response = urllib.request.urlopen(
181 req, json.dumps(req_data).encode(), context=self.tls_context
182 )
183 resp_data = json.loads(response.read())
184 else:
185 if variables is not None:
186 req_data['variables'] = json.dumps(variables)
187 if globals is not None:
188 req_data['globals'] = json.dumps(globals)
189 response = urllib.request.urlopen(
190 f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
191 context=self.tls_context,
192 )
193 resp_data = json.loads(response.read())
194
195 if 'data' in resp_data:
196 return resp_data['data']
197
198 err = resp_data['errors'][0]
199
200 typename, msg = err['message'].split(':', 1)
201 msg = msg.strip()
202
203 try:
204 ex_type = getattr(edgedb, typename)
205 except AttributeError:
206 raise AssertionError(
207 f'server returned an invalid exception typename: {typename!r}'
208 f'\n Message: {msg}')
209
210 ex = ex_type(msg)
211
212 if 'locations' in err:
213 # XXX Fix this when LSP "location" objects are implemented
214 ex._attrs[base_errors.FIELD_LINE_START] = str(
215 err['locations'][0]['line']).encode()
216 ex._attrs[base_errors.FIELD_COLUMN_START] = str(
217 err['locations'][0]['column']).encode()
218
219 raise ex
220
221 def assert_graphql_query_result(self, query, result, *,
222 msg=None, sort=None,
223 operation_name=None,
224 use_http_post=True,
225 variables=None,
226 globals=None):
227 res = self.graphql_query(
228 query,
229 operation_name=operation_name,
230 use_http_post=use_http_post,
231 variables=variables,
232 globals=globals)
233
234 if sort is not None:
235 # GQL will always have a single object returned. The data is
236 # in the top-level fields, so that's what needs to be sorted.
237 for r in res.values():
238 assert_data_shape.sort_results(r, sort)
239
240 assert_data_shape.assert_data_shape(
241 res, result, self.fail, message=msg)
242 return res
243
[end of edb/testbase/http.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/edb/testbase/http.py b/edb/testbase/http.py
--- a/edb/testbase/http.py
+++ b/edb/testbase/http.py
@@ -162,7 +162,8 @@
def graphql_query(self, query, *, operation_name=None,
use_http_post=True,
variables=None,
- globals=None):
+ globals=None,
+ deprecated_globals=None):
req_data = {
'query': query
}
@@ -174,7 +175,13 @@
if variables is not None:
req_data['variables'] = variables
if globals is not None:
- req_data['globals'] = globals
+ if variables is None:
+ req_data['variables'] = dict()
+ req_data['variables']['__globals__'] = globals
+ # Support testing the old way of sending globals.
+ if deprecated_globals is not None:
+ req_data['globals'] = deprecated_globals
+
req = urllib.request.Request(self.http_addr, method='POST')
req.add_header('Content-Type', 'application/json')
response = urllib.request.urlopen(
@@ -182,10 +189,15 @@
)
resp_data = json.loads(response.read())
else:
+ if globals is not None:
+ if variables is None:
+ variables = dict()
+ variables['__globals__'] = globals
+ # Support testing the old way of sending globals.
+ if deprecated_globals is not None:
+ req_data['globals'] = json.dumps(deprecated_globals)
if variables is not None:
req_data['variables'] = json.dumps(variables)
- if globals is not None:
- req_data['globals'] = json.dumps(globals)
response = urllib.request.urlopen(
f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',
context=self.tls_context,
@@ -223,13 +235,15 @@
operation_name=None,
use_http_post=True,
variables=None,
- globals=None):
+ globals=None,
+ deprecated_globals=None):
res = self.graphql_query(
query,
operation_name=operation_name,
use_http_post=use_http_post,
variables=variables,
- globals=globals)
+ globals=globals,
+ deprecated_globals=deprecated_globals)
if sort is not None:
# GQL will always have a single object returned. The data is
| {"golden_diff": "diff --git a/edb/testbase/http.py b/edb/testbase/http.py\n--- a/edb/testbase/http.py\n+++ b/edb/testbase/http.py\n@@ -162,7 +162,8 @@\n def graphql_query(self, query, *, operation_name=None,\n use_http_post=True,\n variables=None,\n- globals=None):\n+ globals=None,\n+ deprecated_globals=None):\n req_data = {\n 'query': query\n }\n@@ -174,7 +175,13 @@\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n- req_data['globals'] = globals\n+ if variables is None:\n+ req_data['variables'] = dict()\n+ req_data['variables']['__globals__'] = globals\n+ # Support testing the old way of sending globals.\n+ if deprecated_globals is not None:\n+ req_data['globals'] = deprecated_globals\n+\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n@@ -182,10 +189,15 @@\n )\n resp_data = json.loads(response.read())\n else:\n+ if globals is not None:\n+ if variables is None:\n+ variables = dict()\n+ variables['__globals__'] = globals\n+ # Support testing the old way of sending globals.\n+ if deprecated_globals is not None:\n+ req_data['globals'] = json.dumps(deprecated_globals)\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n- if globals is not None:\n- req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n@@ -223,13 +235,15 @@\n operation_name=None,\n use_http_post=True,\n variables=None,\n- globals=None):\n+ globals=None,\n+ deprecated_globals=None):\n res = self.graphql_query(\n query,\n operation_name=operation_name,\n use_http_post=use_http_post,\n variables=variables,\n- globals=globals)\n+ globals=globals,\n+ deprecated_globals=deprecated_globals)\n \n if sort is not None:\n # GQL will always have a single object returned. The data is\n", "issue": "Change how globals are passed in GraphQL\nCurrently global variables are passed as their own special `globals` field. This is a non-standard field and it gets in the way of existing frameworks and libraries.\r\n\r\nAn alternative to that would be to use the `variables` field that's part of the GraphQL standard and add `__globals__` to that.\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2019-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\nfrom typing import *\n\nimport json\nimport urllib.parse\nimport urllib.request\n\nimport edgedb\n\nfrom edb.errors import base as base_errors\n\nfrom edb.common import assert_data_shape\n\nfrom . 
import server\n\n\nbag = assert_data_shape.bag\n\n\nclass BaseHttpExtensionTest(server.QueryTestCase):\n EXTENSION_SETUP: List[str] = []\n\n @classmethod\n def get_extension_name(cls):\n raise NotImplementedError\n\n @classmethod\n def get_extension_path(cls):\n return cls.get_extension_name()\n\n @classmethod\n def get_api_prefix(cls):\n extpath = cls.get_extension_path()\n dbname = cls.get_database_name()\n return f'/db/{dbname}/{extpath}'\n\n @classmethod\n def get_setup_script(cls):\n script = super().get_setup_script()\n\n extname = cls.get_extension_name()\n script += f'\\nCREATE EXTENSION pgcrypto;\\n'\n script += f'\\nCREATE EXTENSION {extname};\\n'\n script += \"\\n\".join(cls.EXTENSION_SETUP)\n return script\n\n @classmethod\n def tearDownClass(cls):\n extname = cls.get_extension_name()\n cls.loop.run_until_complete(\n cls.con.execute(f'DROP EXTENSION {extname};')\n )\n super().tearDownClass()\n\n\nclass ExtAuthTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'auth'\n\n @classmethod\n def get_extension_path(cls):\n return 'ext/auth'\n\n\nclass EdgeQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'edgeql_http'\n\n @classmethod\n def get_extension_path(cls):\n return 'edgeql'\n\n def edgeql_query(\n self, query, *, use_http_post=True, variables=None, globals=None):\n req_data = {\n 'query': query\n }\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n req_data['globals'] = globals\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n if globals is not None:\n req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['error']\n\n ex_msg = err['message'].strip()\n ex_code = err['code']\n\n raise edgedb.EdgeDBError._from_code(ex_code, ex_msg)\n\n def assert_edgeql_query_result(self, query, result, *,\n msg=None, sort=None,\n use_http_post=True,\n variables=None,\n globals=None):\n res = self.edgeql_query(\n query,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals)\n\n if sort is not None:\n # GQL will always have a single object returned. 
The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n\n\nclass GraphQLTestCase(BaseHttpExtensionTest):\n\n @classmethod\n def get_extension_name(cls):\n return 'graphql'\n\n def graphql_query(self, query, *, operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None):\n req_data = {\n 'query': query\n }\n\n if operation_name is not None:\n req_data['operationName'] = operation_name\n\n if use_http_post:\n if variables is not None:\n req_data['variables'] = variables\n if globals is not None:\n req_data['globals'] = globals\n req = urllib.request.Request(self.http_addr, method='POST')\n req.add_header('Content-Type', 'application/json')\n response = urllib.request.urlopen(\n req, json.dumps(req_data).encode(), context=self.tls_context\n )\n resp_data = json.loads(response.read())\n else:\n if variables is not None:\n req_data['variables'] = json.dumps(variables)\n if globals is not None:\n req_data['globals'] = json.dumps(globals)\n response = urllib.request.urlopen(\n f'{self.http_addr}/?{urllib.parse.urlencode(req_data)}',\n context=self.tls_context,\n )\n resp_data = json.loads(response.read())\n\n if 'data' in resp_data:\n return resp_data['data']\n\n err = resp_data['errors'][0]\n\n typename, msg = err['message'].split(':', 1)\n msg = msg.strip()\n\n try:\n ex_type = getattr(edgedb, typename)\n except AttributeError:\n raise AssertionError(\n f'server returned an invalid exception typename: {typename!r}'\n f'\\n Message: {msg}')\n\n ex = ex_type(msg)\n\n if 'locations' in err:\n # XXX Fix this when LSP \"location\" objects are implemented\n ex._attrs[base_errors.FIELD_LINE_START] = str(\n err['locations'][0]['line']).encode()\n ex._attrs[base_errors.FIELD_COLUMN_START] = str(\n err['locations'][0]['column']).encode()\n\n raise ex\n\n def assert_graphql_query_result(self, query, result, *,\n msg=None, sort=None,\n operation_name=None,\n use_http_post=True,\n variables=None,\n globals=None):\n res = self.graphql_query(\n query,\n operation_name=operation_name,\n use_http_post=use_http_post,\n variables=variables,\n globals=globals)\n\n if sort is not None:\n # GQL will always have a single object returned. The data is\n # in the top-level fields, so that's what needs to be sorted.\n for r in res.values():\n assert_data_shape.sort_results(r, sort)\n\n assert_data_shape.assert_data_shape(\n res, result, self.fail, message=msg)\n return res\n", "path": "edb/testbase/http.py"}]} | 2,813 | 538 |
gh_patches_debug_10160 | rasdani/github-patches | git_diff | comic__grand-challenge.org-581 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKEditor Image upload makes the GUI undismissable
The browser keeps the changed fields state after the image is uploaded, and for some reason we're unable to dismiss the gui.
</issue>
<code>
[start of app/config/urls.py]
1 from django.conf import settings
2 from django.conf.urls import include
3 from django.contrib import admin
4 from django.template.response import TemplateResponse
5 from django.urls import re_path, path
6 from django.views.generic import TemplateView, RedirectView
7
8 from grandchallenge.core.views import comicmain
9 from grandchallenge.pages.views import FaviconView
10
11 admin.autodiscover()
12
13
14 def handler500(request):
15 context = {"request": request}
16 template_name = "500.html"
17 return TemplateResponse(request, template_name, context, status=500)
18
19
20 urlpatterns = [
21 path("", comicmain, name="home"),
22 path(
23 "robots.txt/",
24 TemplateView.as_view(
25 template_name="robots.txt", content_type="text/plain"
26 ),
27 ),
28 # Favicons
29 path(
30 "favicon.ico/",
31 FaviconView.as_view(rel="shortcut icon"),
32 name="favicon",
33 ),
34 path(
35 "apple-touch-icon.png/",
36 FaviconView.as_view(rel="apple-touch-icon"),
37 name="apple-touch-icon",
38 ),
39 path(
40 "apple-touch-icon-precomposed.png/",
41 FaviconView.as_view(rel="apple-touch-icon-precomposed"),
42 name="apple-touch-icon-precomposed",
43 ),
44 path(
45 "apple-touch-icon-<int:size>x<int>.png/",
46 FaviconView.as_view(rel="apple-touch-icon"),
47 name="apple-touch-icon-sized",
48 ),
49 path(
50 "apple-touch-icon-<int:size>x<int>-precomposed.png/",
51 FaviconView.as_view(rel="apple-touch-icon-precomposed"),
52 name="apple-touch-icon-precomposed-sized",
53 ),
54 path(settings.ADMIN_URL, admin.site.urls),
55 path(
56 "site/<slug:challenge_short_name>/",
57 include("grandchallenge.core.urls"),
58 name="site",
59 ),
60 path(
61 "stats/",
62 include("grandchallenge.statistics.urls", namespace="statistics"),
63 ),
64 # Do not change the api namespace without updating the view names in
65 # all of the serializers
66 path("api/", include("grandchallenge.api.urls", namespace="api")),
67 # Used for logging in and managing grandchallenge.profiles. This is done on
68 # the framework level because it is too hard to get this all under each
69 # project
70 path("accounts/", include("grandchallenge.profiles.urls")),
71 path("socialauth/", include("social_django.urls", namespace="social")),
72 path(
73 "challenges/",
74 include("grandchallenge.challenges.urls", namespace="challenges"),
75 ),
76 re_path(
77 r"^(?i)all_challenges/$",
78 RedirectView.as_view(pattern_name="challenges:list", permanent=False),
79 ),
80 path("cases/", include("grandchallenge.cases.urls", namespace="cases")),
81 path(
82 "algorithms/",
83 include("grandchallenge.algorithms.urls", namespace="algorithms"),
84 ),
85 # ========== catch all ====================
86 # when all other urls have been checked, try to load page from main project
87 # keep this url at the bottom of this list, because urls are checked in
88 # order
89 path("<slug:page_title>/", comicmain, name="mainproject-home"),
90 path(
91 "media/", include("grandchallenge.serving.urls", namespace="serving")
92 ),
93 ]
94 if settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:
95 import debug_toolbar
96
97 urlpatterns = [
98 path("__debug__/", include(debug_toolbar.urls))
99 ] + urlpatterns
100
[end of app/config/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/config/urls.py b/app/config/urls.py
--- a/app/config/urls.py
+++ b/app/config/urls.py
@@ -73,8 +73,12 @@
"challenges/",
include("grandchallenge.challenges.urls", namespace="challenges"),
),
- re_path(
- r"^(?i)all_challenges/$",
+ path(
+ "all_challenges/",
+ RedirectView.as_view(pattern_name="challenges:list", permanent=False),
+ ),
+ path(
+ "All_Challenges/",
RedirectView.as_view(pattern_name="challenges:list", permanent=False),
),
path("cases/", include("grandchallenge.cases.urls", namespace="cases")),
| {"golden_diff": "diff --git a/app/config/urls.py b/app/config/urls.py\n--- a/app/config/urls.py\n+++ b/app/config/urls.py\n@@ -73,8 +73,12 @@\n \"challenges/\",\n include(\"grandchallenge.challenges.urls\", namespace=\"challenges\"),\n ),\n- re_path(\n- r\"^(?i)all_challenges/$\",\n+ path(\n+ \"all_challenges/\",\n+ RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n+ ),\n+ path(\n+ \"All_Challenges/\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\"cases/\", include(\"grandchallenge.cases.urls\", namespace=\"cases\")),\n", "issue": "CKEditor Image upload makes the GUI undismissable\nThe browser keeps the changed fields state after the image is uploaded, and for some reason we're unable to dismiss the gui.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import include\nfrom django.contrib import admin\nfrom django.template.response import TemplateResponse\nfrom django.urls import re_path, path\nfrom django.views.generic import TemplateView, RedirectView\n\nfrom grandchallenge.core.views import comicmain\nfrom grandchallenge.pages.views import FaviconView\n\nadmin.autodiscover()\n\n\ndef handler500(request):\n context = {\"request\": request}\n template_name = \"500.html\"\n return TemplateResponse(request, template_name, context, status=500)\n\n\nurlpatterns = [\n path(\"\", comicmain, name=\"home\"),\n path(\n \"robots.txt/\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n ),\n # Favicons\n path(\n \"favicon.ico/\",\n FaviconView.as_view(rel=\"shortcut icon\"),\n name=\"favicon\",\n ),\n path(\n \"apple-touch-icon.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon\",\n ),\n path(\n \"apple-touch-icon-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon\"),\n name=\"apple-touch-icon-sized\",\n ),\n path(\n \"apple-touch-icon-<int:size>x<int>-precomposed.png/\",\n FaviconView.as_view(rel=\"apple-touch-icon-precomposed\"),\n name=\"apple-touch-icon-precomposed-sized\",\n ),\n path(settings.ADMIN_URL, admin.site.urls),\n path(\n \"site/<slug:challenge_short_name>/\",\n include(\"grandchallenge.core.urls\"),\n name=\"site\",\n ),\n path(\n \"stats/\",\n include(\"grandchallenge.statistics.urls\", namespace=\"statistics\"),\n ),\n # Do not change the api namespace without updating the view names in\n # all of the serializers\n path(\"api/\", include(\"grandchallenge.api.urls\", namespace=\"api\")),\n # Used for logging in and managing grandchallenge.profiles. 
This is done on\n # the framework level because it is too hard to get this all under each\n # project\n path(\"accounts/\", include(\"grandchallenge.profiles.urls\")),\n path(\"socialauth/\", include(\"social_django.urls\", namespace=\"social\")),\n path(\n \"challenges/\",\n include(\"grandchallenge.challenges.urls\", namespace=\"challenges\"),\n ),\n re_path(\n r\"^(?i)all_challenges/$\",\n RedirectView.as_view(pattern_name=\"challenges:list\", permanent=False),\n ),\n path(\"cases/\", include(\"grandchallenge.cases.urls\", namespace=\"cases\")),\n path(\n \"algorithms/\",\n include(\"grandchallenge.algorithms.urls\", namespace=\"algorithms\"),\n ),\n # ========== catch all ====================\n # when all other urls have been checked, try to load page from main project\n # keep this url at the bottom of this list, because urls are checked in\n # order\n path(\"<slug:page_title>/\", comicmain, name=\"mainproject-home\"),\n path(\n \"media/\", include(\"grandchallenge.serving.urls\", namespace=\"serving\")\n ),\n]\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls.py"}]} | 1,500 | 159 |
gh_patches_debug_32710 | rasdani/github-patches | git_diff | rotki__rotki-1599 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
If local premium DB is larger and more recent than remote don't ask user for replacement
## Problem Definition
If the local premium DB is larger and more recent than the remote one, don't ask the user for replacement and instead replace the remote DB.

## Task
Do it lol
</issue>
<code>
[start of rotkehlchen/premium/sync.py]
1 import base64
2 import logging
3 import shutil
4 from enum import Enum
5 from typing import Any, Dict, NamedTuple, Optional
6
7 from typing_extensions import Literal
8
9 from rotkehlchen.data_handler import DataHandler
10 from rotkehlchen.errors import (
11 PremiumAuthenticationError,
12 RemoteError,
13 RotkehlchenPermissionError,
14 UnableToDecryptRemoteData,
15 )
16 from rotkehlchen.logging import RotkehlchenLogsAdapter
17 from rotkehlchen.premium.premium import Premium, PremiumCredentials, premium_create_and_verify
18 from rotkehlchen.utils.misc import timestamp_to_date, ts_now
19
20 logger = logging.getLogger(__name__)
21 log = RotkehlchenLogsAdapter(logger)
22
23
24 class CanSync(Enum):
25 YES = 0
26 NO = 1
27 ASK_USER = 2
28
29
30 class SyncCheckResult(NamedTuple):
31 # The result of the sync check
32 can_sync: CanSync
33 # If result is ASK_USER, what should the message be?
34 message: str
35 payload: Optional[Dict[str, Any]]
36
37
38 class PremiumSyncManager():
39
40 def __init__(self, data: DataHandler, password: str) -> None:
41 self.last_data_upload_ts = 0
42 self.data = data
43 self.password = password
44 self.premium: Optional[Premium] = None
45
46 def _can_sync_data_from_server(self, new_account: bool) -> SyncCheckResult:
47 """
48 Checks if the remote data can be pulled from the server.
49
50 Returns a SyncCheckResult denoting whether we can pull for sure,
51 whether we can't pull or whether the user should be asked. If the user
52 should be asked a message is also returned
53 """
54 log.debug('can sync data from server -- start')
55 if self.premium is None:
56 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
57
58 b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)
59
60 try:
61 metadata = self.premium.query_last_data_metadata()
62 except RemoteError as e:
63 log.debug('can sync data from server failed', error=str(e))
64 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
65
66 if new_account:
67 return SyncCheckResult(can_sync=CanSync.YES, message='', payload=None)
68
69 if not self.data.db.get_premium_sync():
70 # If it's not a new account and the db setting for premium syncing is off stop
71 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
72
73 log.debug(
74 'CAN_PULL',
75 ours=our_hash,
76 theirs=metadata.data_hash,
77 )
78 if our_hash == metadata.data_hash:
79 log.debug('sync from server stopped -- same hash')
80 # same hash -- no need to get anything
81 return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
82
83 our_last_write_ts = self.data.db.get_last_write_ts()
84 data_bytes_size = len(base64.b64decode(b64_encoded_data))
85 if our_last_write_ts >= metadata.last_modify_ts:
86 message = (
87 'Detected remote database BUT with older last modification timestamp '
88 'than the local one. '
89 )
90 else:
91 if data_bytes_size > metadata.data_size:
92 message = (
93 'Detected newer remote database BUT with smaller size than the local one. '
94 )
95 else:
96 message = 'Detected newer remote database. '
97
98 return SyncCheckResult(
99 can_sync=CanSync.ASK_USER,
100 message=message,
101 payload={
102 'local_size': data_bytes_size,
103 'remote_size': metadata.data_size,
104 'local_last_modified': timestamp_to_date(our_last_write_ts),
105 'remote_last_modified': timestamp_to_date(metadata.last_modify_ts),
106 },
107 )
108
109 def _sync_data_from_server_and_replace_local(self) -> bool:
110 """
111 Performs syncing of data from server and replaces local db
112
113 Returns true for success and False for error/failure
114
115 May raise:
116 - PremiumAuthenticationError due to an UnableToDecryptRemoteData
117 coming from decompress_and_decrypt_db. This happens when the given password
118 does not match the one on the saved DB.
119 """
120 assert self.premium, 'This function has to be called with a not None premium'
121 try:
122 result = self.premium.pull_data()
123 except RemoteError as e:
124 log.debug('sync from server -- pulling failed.', error=str(e))
125 return False
126
127 if result['data'] is None:
128 log.debug('sync from server -- no data found.')
129 return False
130
131 try:
132 self.data.decompress_and_decrypt_db(self.password, result['data'])
133 except UnableToDecryptRemoteData:
134 raise PremiumAuthenticationError(
135 'The given password can not unlock the database that was retrieved from '
136 'the server. Make sure to use the same password as when the account was created.',
137 )
138
139 return True
140
141 def maybe_upload_data_to_server(self) -> None:
142 # if user has no premium do nothing
143 if self.premium is None:
144 return
145
146 # upload only once per hour
147 diff = ts_now() - self.last_data_upload_ts
148 if diff < 3600:
149 return
150
151 b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)
152 try:
153 metadata = self.premium.query_last_data_metadata()
154 except RemoteError as e:
155 log.debug(
156 'upload to server stopped -- query last metadata failed',
157 error=str(e),
158 )
159 return
160
161 log.debug(
162 'CAN_PUSH',
163 ours=our_hash,
164 theirs=metadata.data_hash,
165 )
166 if our_hash == metadata.data_hash:
167 log.debug('upload to server stopped -- same hash')
168 # same hash -- no need to upload anything
169 return
170
171 our_last_write_ts = self.data.db.get_last_write_ts()
172 if our_last_write_ts <= metadata.last_modify_ts:
173 # Server's DB was modified after our local DB
174 log.debug('upload to server stopped -- remote db more recent than local')
175 return
176
177 data_bytes_size = len(base64.b64decode(b64_encoded_data))
178 if data_bytes_size < metadata.data_size:
179 # Let's be conservative.
180 # TODO: Here perhaps prompt user in the future
181 log.debug('upload to server stopped -- remote db bigger than local')
182 return
183
184 try:
185 self.premium.upload_data(
186 data_blob=b64_encoded_data,
187 our_hash=our_hash,
188 last_modify_ts=our_last_write_ts,
189 compression_type='zlib',
190 )
191 except RemoteError as e:
192 log.debug('upload to server -- upload error', error=str(e))
193 return
194
195 # update the last data upload value
196 self.last_data_upload_ts = ts_now()
197 self.data.db.update_last_data_upload_ts(self.last_data_upload_ts)
198 log.debug('upload to server -- success')
199
200 def try_premium_at_start(
201 self,
202 given_premium_credentials: Optional[PremiumCredentials],
203 username: str,
204 create_new: bool,
205 sync_approval: Literal['yes', 'no', 'unknown'],
206 ) -> Optional[Premium]:
207 """
208 Check if new user provided api pair or we already got one in the DB
209
210 Returns the created premium if user's premium credentials were fine.
211
212 If not it will raise PremiumAuthenticationError.
213
214 If no credentials were given it returns None
215 """
216
217 if given_premium_credentials is not None:
218 assert create_new, 'We should never get here for an already existing account'
219
220 try:
221 self.premium = premium_create_and_verify(given_premium_credentials)
222 except PremiumAuthenticationError as e:
223 log.error('Given API key is invalid')
224 # At this point we are at a new user trying to create an account with
225 # premium API keys and we failed. But a directory was created. Remove it.
226 # But create a backup of it in case something went really wrong
227 # and the directory contained data we did not want to lose
228 shutil.move(
229 self.data.user_data_dir, # type: ignore
230 self.data.data_directory / f'auto_backup_{username}_{ts_now()}',
231 )
232 raise PremiumAuthenticationError(
233 'Could not verify keys for the new account. '
234 '{}'.format(str(e)),
235 )
236
237 # else, if we got premium data in the DB initialize it and try to sync with the server
238 db_credentials = self.data.db.get_rotkehlchen_premium()
239 if db_credentials:
240 assert not create_new, 'We should never get here for a new account'
241 try:
242 self.premium = premium_create_and_verify(db_credentials)
243 except PremiumAuthenticationError as e:
244 message = (
245 f'Could not authenticate with the rotkehlchen server with '
246 f'the API keys found in the Database. Error: {str(e)}'
247 )
248 log.error(message)
249 raise PremiumAuthenticationError(message)
250
251 if self.premium is None:
252 return None
253
254 result = self._can_sync_data_from_server(new_account=create_new)
255 if result.can_sync == CanSync.ASK_USER:
256 if sync_approval == 'unknown':
257 log.info('DB data at server newer than local')
258 raise RotkehlchenPermissionError(result.message, result.payload)
259 elif sync_approval == 'yes':
260 log.info('User approved data sync from server')
261 if self._sync_data_from_server_and_replace_local():
262 if create_new:
263 # if we successfully synced data from the server and this is
264 # a new account, make sure the api keys are properly stored
265 # in the DB
266 self.data.db.set_rotkehlchen_premium(self.premium.credentials)
267 else:
268 log.debug('Could sync data from server but user refused')
269 elif result.can_sync == CanSync.YES:
270 log.info('User approved data sync from server')
271 if self._sync_data_from_server_and_replace_local():
272 if create_new:
273 # if we successfully synced data from the server and this is
274 # a new account, make sure the api keys are properly stored
275 # in the DB
276 self.data.db.set_rotkehlchen_premium(self.premium.credentials)
277
278 # else result.can_sync was no, so we do nothing
279
280 # Success, return premium
281 return self.premium
282
[end of rotkehlchen/premium/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rotkehlchen/premium/sync.py b/rotkehlchen/premium/sync.py
--- a/rotkehlchen/premium/sync.py
+++ b/rotkehlchen/premium/sync.py
@@ -82,18 +82,21 @@
our_last_write_ts = self.data.db.get_last_write_ts()
data_bytes_size = len(base64.b64decode(b64_encoded_data))
- if our_last_write_ts >= metadata.last_modify_ts:
+
+ local_more_recent = our_last_write_ts >= metadata.last_modify_ts
+ local_bigger = data_bytes_size >= metadata.data_size
+
+ if local_more_recent and local_bigger:
+ log.debug('sync from server stopped -- local is both newer and bigger')
+ return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)
+
+ if local_more_recent is False: # remote is more recent
message = (
- 'Detected remote database BUT with older last modification timestamp '
+ 'Detected remote database with more recent modification timestamp '
'than the local one. '
)
- else:
- if data_bytes_size > metadata.data_size:
- message = (
- 'Detected newer remote database BUT with smaller size than the local one. '
- )
- else:
- message = 'Detected newer remote database. '
+ else: # remote is bigger
+ message = 'Detected remote database with bigger size than the local one. '
return SyncCheckResult(
can_sync=CanSync.ASK_USER,
@@ -254,7 +257,7 @@
result = self._can_sync_data_from_server(new_account=create_new)
if result.can_sync == CanSync.ASK_USER:
if sync_approval == 'unknown':
- log.info('DB data at server newer than local')
+ log.info('Remote DB is possibly newer. Ask user.')
raise RotkehlchenPermissionError(result.message, result.payload)
elif sync_approval == 'yes':
log.info('User approved data sync from server')
| {"golden_diff": "diff --git a/rotkehlchen/premium/sync.py b/rotkehlchen/premium/sync.py\n--- a/rotkehlchen/premium/sync.py\n+++ b/rotkehlchen/premium/sync.py\n@@ -82,18 +82,21 @@\n \n our_last_write_ts = self.data.db.get_last_write_ts()\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n- if our_last_write_ts >= metadata.last_modify_ts:\n+\n+ local_more_recent = our_last_write_ts >= metadata.last_modify_ts\n+ local_bigger = data_bytes_size >= metadata.data_size\n+\n+ if local_more_recent and local_bigger:\n+ log.debug('sync from server stopped -- local is both newer and bigger')\n+ return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n+\n+ if local_more_recent is False: # remote is more recent\n message = (\n- 'Detected remote database BUT with older last modification timestamp '\n+ 'Detected remote database with more recent modification timestamp '\n 'than the local one. '\n )\n- else:\n- if data_bytes_size > metadata.data_size:\n- message = (\n- 'Detected newer remote database BUT with smaller size than the local one. '\n- )\n- else:\n- message = 'Detected newer remote database. '\n+ else: # remote is bigger\n+ message = 'Detected remote database with bigger size than the local one. '\n \n return SyncCheckResult(\n can_sync=CanSync.ASK_USER,\n@@ -254,7 +257,7 @@\n result = self._can_sync_data_from_server(new_account=create_new)\n if result.can_sync == CanSync.ASK_USER:\n if sync_approval == 'unknown':\n- log.info('DB data at server newer than local')\n+ log.info('Remote DB is possibly newer. Ask user.')\n raise RotkehlchenPermissionError(result.message, result.payload)\n elif sync_approval == 'yes':\n log.info('User approved data sync from server')\n", "issue": "If local premium DB is larger and more recent than remote don't ask user for replacement\n## Problem Definition\r\n\r\nIf local premium DB is larger and more recent than remote don't ask user for replacement and instead replace remote.\r\n\r\n\r\n\r\n## Task\r\n\r\nDo it lol\n", "before_files": [{"content": "import base64\nimport logging\nimport shutil\nfrom enum import Enum\nfrom typing import Any, Dict, NamedTuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom rotkehlchen.data_handler import DataHandler\nfrom rotkehlchen.errors import (\n PremiumAuthenticationError,\n RemoteError,\n RotkehlchenPermissionError,\n UnableToDecryptRemoteData,\n)\nfrom rotkehlchen.logging import RotkehlchenLogsAdapter\nfrom rotkehlchen.premium.premium import Premium, PremiumCredentials, premium_create_and_verify\nfrom rotkehlchen.utils.misc import timestamp_to_date, ts_now\n\nlogger = logging.getLogger(__name__)\nlog = RotkehlchenLogsAdapter(logger)\n\n\nclass CanSync(Enum):\n YES = 0\n NO = 1\n ASK_USER = 2\n\n\nclass SyncCheckResult(NamedTuple):\n # The result of the sync check\n can_sync: CanSync\n # If result is ASK_USER, what should the message be?\n message: str\n payload: Optional[Dict[str, Any]]\n\n\nclass PremiumSyncManager():\n\n def __init__(self, data: DataHandler, password: str) -> None:\n self.last_data_upload_ts = 0\n self.data = data\n self.password = password\n self.premium: Optional[Premium] = None\n\n def _can_sync_data_from_server(self, new_account: bool) -> SyncCheckResult:\n \"\"\"\n Checks if the remote data can be pulled from the server.\n\n Returns a SyncCheckResult denoting whether we can pull for sure,\n whether we can't pull or whether the user should be asked. 
If the user\n should be asked a message is also returned\n \"\"\"\n log.debug('can sync data from server -- start')\n if self.premium is None:\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug('can sync data from server failed', error=str(e))\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n if new_account:\n return SyncCheckResult(can_sync=CanSync.YES, message='', payload=None)\n\n if not self.data.db.get_premium_sync():\n # If it's not a new account and the db setting for premium syncing is off stop\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n log.debug(\n 'CAN_PULL',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('sync from server stopped -- same hash')\n # same hash -- no need to get anything\n return SyncCheckResult(can_sync=CanSync.NO, message='', payload=None)\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n if our_last_write_ts >= metadata.last_modify_ts:\n message = (\n 'Detected remote database BUT with older last modification timestamp '\n 'than the local one. '\n )\n else:\n if data_bytes_size > metadata.data_size:\n message = (\n 'Detected newer remote database BUT with smaller size than the local one. '\n )\n else:\n message = 'Detected newer remote database. '\n\n return SyncCheckResult(\n can_sync=CanSync.ASK_USER,\n message=message,\n payload={\n 'local_size': data_bytes_size,\n 'remote_size': metadata.data_size,\n 'local_last_modified': timestamp_to_date(our_last_write_ts),\n 'remote_last_modified': timestamp_to_date(metadata.last_modify_ts),\n },\n )\n\n def _sync_data_from_server_and_replace_local(self) -> bool:\n \"\"\"\n Performs syncing of data from server and replaces local db\n\n Returns true for success and False for error/failure\n\n May raise:\n - PremiumAuthenticationError due to an UnableToDecryptRemoteData\n coming from decompress_and_decrypt_db. This happens when the given password\n does not match the one on the saved DB.\n \"\"\"\n assert self.premium, 'This function has to be called with a not None premium'\n try:\n result = self.premium.pull_data()\n except RemoteError as e:\n log.debug('sync from server -- pulling failed.', error=str(e))\n return False\n\n if result['data'] is None:\n log.debug('sync from server -- no data found.')\n return False\n\n try:\n self.data.decompress_and_decrypt_db(self.password, result['data'])\n except UnableToDecryptRemoteData:\n raise PremiumAuthenticationError(\n 'The given password can not unlock the database that was retrieved from '\n 'the server. 
Make sure to use the same password as when the account was created.',\n )\n\n return True\n\n def maybe_upload_data_to_server(self) -> None:\n # if user has no premium do nothing\n if self.premium is None:\n return\n\n # upload only once per hour\n diff = ts_now() - self.last_data_upload_ts\n if diff < 3600:\n return\n\n b64_encoded_data, our_hash = self.data.compress_and_encrypt_db(self.password)\n try:\n metadata = self.premium.query_last_data_metadata()\n except RemoteError as e:\n log.debug(\n 'upload to server stopped -- query last metadata failed',\n error=str(e),\n )\n return\n\n log.debug(\n 'CAN_PUSH',\n ours=our_hash,\n theirs=metadata.data_hash,\n )\n if our_hash == metadata.data_hash:\n log.debug('upload to server stopped -- same hash')\n # same hash -- no need to upload anything\n return\n\n our_last_write_ts = self.data.db.get_last_write_ts()\n if our_last_write_ts <= metadata.last_modify_ts:\n # Server's DB was modified after our local DB\n log.debug('upload to server stopped -- remote db more recent than local')\n return\n\n data_bytes_size = len(base64.b64decode(b64_encoded_data))\n if data_bytes_size < metadata.data_size:\n # Let's be conservative.\n # TODO: Here perhaps prompt user in the future\n log.debug('upload to server stopped -- remote db bigger than local')\n return\n\n try:\n self.premium.upload_data(\n data_blob=b64_encoded_data,\n our_hash=our_hash,\n last_modify_ts=our_last_write_ts,\n compression_type='zlib',\n )\n except RemoteError as e:\n log.debug('upload to server -- upload error', error=str(e))\n return\n\n # update the last data upload value\n self.last_data_upload_ts = ts_now()\n self.data.db.update_last_data_upload_ts(self.last_data_upload_ts)\n log.debug('upload to server -- success')\n\n def try_premium_at_start(\n self,\n given_premium_credentials: Optional[PremiumCredentials],\n username: str,\n create_new: bool,\n sync_approval: Literal['yes', 'no', 'unknown'],\n ) -> Optional[Premium]:\n \"\"\"\n Check if new user provided api pair or we already got one in the DB\n\n Returns the created premium if user's premium credentials were fine.\n\n If not it will raise PremiumAuthenticationError.\n\n If no credentials were given it returns None\n \"\"\"\n\n if given_premium_credentials is not None:\n assert create_new, 'We should never get here for an already existing account'\n\n try:\n self.premium = premium_create_and_verify(given_premium_credentials)\n except PremiumAuthenticationError as e:\n log.error('Given API key is invalid')\n # At this point we are at a new user trying to create an account with\n # premium API keys and we failed. But a directory was created. Remove it.\n # But create a backup of it in case something went really wrong\n # and the directory contained data we did not want to lose\n shutil.move(\n self.data.user_data_dir, # type: ignore\n self.data.data_directory / f'auto_backup_{username}_{ts_now()}',\n )\n raise PremiumAuthenticationError(\n 'Could not verify keys for the new account. '\n '{}'.format(str(e)),\n )\n\n # else, if we got premium data in the DB initialize it and try to sync with the server\n db_credentials = self.data.db.get_rotkehlchen_premium()\n if db_credentials:\n assert not create_new, 'We should never get here for a new account'\n try:\n self.premium = premium_create_and_verify(db_credentials)\n except PremiumAuthenticationError as e:\n message = (\n f'Could not authenticate with the rotkehlchen server with '\n f'the API keys found in the Database. 
Error: {str(e)}'\n )\n log.error(message)\n raise PremiumAuthenticationError(message)\n\n if self.premium is None:\n return None\n\n result = self._can_sync_data_from_server(new_account=create_new)\n if result.can_sync == CanSync.ASK_USER:\n if sync_approval == 'unknown':\n log.info('DB data at server newer than local')\n raise RotkehlchenPermissionError(result.message, result.payload)\n elif sync_approval == 'yes':\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n else:\n log.debug('Could sync data from server but user refused')\n elif result.can_sync == CanSync.YES:\n log.info('User approved data sync from server')\n if self._sync_data_from_server_and_replace_local():\n if create_new:\n # if we successfully synced data from the server and this is\n # a new account, make sure the api keys are properly stored\n # in the DB\n self.data.db.set_rotkehlchen_premium(self.premium.credentials)\n\n # else result.can_sync was no, so we do nothing\n\n # Success, return premium\n return self.premium\n", "path": "rotkehlchen/premium/sync.py"}]} | 3,655 | 460 |
gh_patches_debug_11530 | rasdani/github-patches | git_diff | dmlc__dgl-3841 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I can't run dgl/examples/pytorch/han/train_sampling.py with cuda
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. I just ran dgl/examples/pytorch/han/train_sampling.py with CUDA, and it failed with the error shown below
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
<img width="1103" alt="image" src="https://user-images.githubusercontent.com/53086386/156488968-32cde64a-1c14-4de7-93de-211ca4dd306e.png">
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- DGL Version: 0.8.0+cu11.1
- Backend Library & Version: PyTorch0.9.1
- OS: Linux
- How you installed DGL: pip
- Build command you used (if compiling from source):
- Python version: 3.7.4
- CUDA/cuDNN version: cuda11.1
- GPU models and configuration: A100
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of python/dgl/utils/checks.py]
1 """Checking and logging utilities."""
2 # pylint: disable=invalid-name
3 from __future__ import absolute_import, division
4 from collections.abc import Mapping
5
6 from ..base import DGLError
7 from .._ffi.function import _init_api
8 from .. import backend as F
9
10 def prepare_tensor(g, data, name):
11 """Convert the data to ID tensor and check its ID type and context.
12
13 If the data is already in tensor type, raise error if its ID type
14 and context does not match the graph's.
15 Otherwise, convert it to tensor type of the graph's ID type and
16 ctx and return.
17
18 Parameters
19 ----------
20 g : DGLHeteroGraph
21 Graph.
22 data : int, iterable of int, tensor
23 Data.
24 name : str
25 Name of the data.
26
27 Returns
28 -------
29 Tensor
30 Data in tensor object.
31 """
32 if F.is_tensor(data):
33 if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):
34 raise DGLError('Expect argument "{}" to have data type {} and device '
35 'context {}. But got {} and {}.'.format(
36 name, g.idtype, g.device, F.dtype(data), F.context(data)))
37 ret = data
38 else:
39 data = F.tensor(data)
40 if (not (F.ndim(data) > 0 and F.shape(data)[0] == 0) and # empty tensor
41 F.dtype(data) not in (F.int32, F.int64)):
42 raise DGLError('Expect argument "{}" to have data type int32 or int64,'
43 ' but got {}.'.format(name, F.dtype(data)))
44 ret = F.copy_to(F.astype(data, g.idtype), g.device)
45
46 if F.ndim(ret) == 0:
47 ret = F.unsqueeze(ret, 0)
48 if F.ndim(ret) > 1:
49 raise DGLError('Expect a 1-D tensor for argument "{}". But got {}.'.format(
50 name, ret))
51 return ret
52
53 def prepare_tensor_dict(g, data, name):
54 """Convert a dictionary of data to a dictionary of ID tensors.
55
56 Calls ``prepare_tensor`` on each key-value pair.
57
58 Parameters
59 ----------
60 g : DGLHeteroGraph
61 Graph.
62 data : dict[str, (int, iterable of int, tensor)]
63 Data dict.
64 name : str
65 Name of the data.
66
67 Returns
68 -------
69 dict[str, tensor]
70 """
71 return {key : prepare_tensor(g, val, '{}["{}"]'.format(name, key))
72 for key, val in data.items()}
73
74 def prepare_tensor_or_dict(g, data, name):
75 """Convert data to either a tensor or a dictionary depending on input type.
76
77 Parameters
78 ----------
79 g : DGLHeteroGraph
80 Graph.
81 data : dict[str, (int, iterable of int, tensor)]
82 Data dict.
83 name : str
84 Name of the data.
85
86 Returns
87 -------
88 tensor or dict[str, tensor]
89 """
90 return prepare_tensor_dict(g, data, name) if isinstance(data, Mapping) \
91 else prepare_tensor(g, data, name)
92
93 def parse_edges_arg_to_eid(g, edges, etid, argname='edges'):
94 """Parse the :attr:`edges` argument and return an edge ID tensor.
95
96 The resulting edge ID tensor has the same ID type and device of :attr:`g`.
97
98 Parameters
99 ----------
100 g : DGLGraph
101 Graph
102 edges : pair of Tensor, Tensor, iterable[int]
103 Argument for specifying edges.
104 etid : int
105 Edge type ID.
106 argname : str, optional
107 Argument name.
108
109 Returns
110 -------
111 Tensor
112 Edge ID tensor
113 """
114 if isinstance(edges, tuple):
115 u, v = edges
116 u = prepare_tensor(g, u, '{}[0]'.format(argname))
117 v = prepare_tensor(g, v, '{}[1]'.format(argname))
118 eid = g.edge_ids(u, v, etype=g.canonical_etypes[etid])
119 else:
120 eid = prepare_tensor(g, edges, argname)
121 return eid
122
123 def check_all_same_idtype(glist, name):
124 """Check all the graphs have the same idtype."""
125 if len(glist) == 0:
126 return
127 idtype = glist[0].idtype
128 for i, g in enumerate(glist):
129 if g.idtype != idtype:
130 raise DGLError('Expect {}[{}] to have {} type ID, but got {}.'.format(
131 name, i, idtype, g.idtype))
132
133 def check_device(data, device):
134 """Check if data is on the target device.
135
136 Parameters
137 ----------
138 data : Tensor or dict[str, Tensor]
139 device: Backend device.
140
141 Returns
142 -------
143 Bool: True if the data is on the target device.
144 """
145 if isinstance(data, dict):
146 for v in data.values():
147 if v.device != device:
148 return False
149 elif data.device != device:
150 return False
151 return True
152
153 def check_all_same_device(glist, name):
154 """Check all the graphs have the same device."""
155 if len(glist) == 0:
156 return
157 device = glist[0].device
158 for i, g in enumerate(glist):
159 if g.device != device:
160 raise DGLError('Expect {}[{}] to be on device {}, but got {}.'.format(
161 name, i, device, g.device))
162
163 def check_all_same_schema(schemas, name):
164 """Check the list of schemas are the same."""
165 if len(schemas) == 0:
166 return
167
168 for i, schema in enumerate(schemas):
169 if schema != schemas[0]:
170 raise DGLError(
171 'Expect all graphs to have the same schema on {}, '
172 'but graph {} got\n\t{}\nwhich is different from\n\t{}.'.format(
173 name, i, schema, schemas[0]))
174
175 def check_all_same_schema_for_keys(schemas, keys, name):
176 """Check the list of schemas are the same on the given keys."""
177 if len(schemas) == 0:
178 return
179
180 head = None
181 keys = set(keys)
182 for i, schema in enumerate(schemas):
183 if not keys.issubset(schema.keys()):
184 raise DGLError(
185 'Expect all graphs to have keys {} on {}, '
186 'but graph {} got keys {}.'.format(
187 keys, name, i, schema.keys()))
188
189 if head is None:
190 head = {k: schema[k] for k in keys}
191 else:
192 target = {k: schema[k] for k in keys}
193 if target != head:
194 raise DGLError(
195 'Expect all graphs to have the same schema for keys {} on {}, '
196 'but graph {} got \n\t{}\n which is different from\n\t{}.'.format(
197 keys, name, i, target, head))
198
199 def check_valid_idtype(idtype):
200 """Check whether the value of the idtype argument is valid (int32/int64)
201
202 Parameters
203 ----------
204 idtype : data type
205 The framework object of a data type.
206 """
207 if idtype not in [None, F.int32, F.int64]:
208 raise DGLError('Expect idtype to be a framework object of int32/int64, '
209 'got {}'.format(idtype))
210
211 def is_sorted_srcdst(src, dst, num_src=None, num_dst=None):
212 """Checks whether an edge list is in ascending src-major order (e.g., first
213 sorted by ``src`` and then by ``dst``).
214
215 Parameters
216 ----------
217 src : IdArray
218 The tensor of source nodes for each edge.
219 dst : IdArray
220 The tensor of destination nodes for each edge.
221 num_src : int, optional
222 The number of source nodes.
223 num_dst : int, optional
224 The number of destination nodes.
225
226 Returns
227 -------
228 bool, bool
229 Whether ``src`` is in ascending order, and whether ``dst`` is
230 in ascending order with respect to ``src``.
231 """
232 # for some versions of MXNET and TensorFlow, num_src and num_dst get
233 # incorrectly marked as floats, so force them as integers here
234 if num_src is None:
235 num_src = int(F.as_scalar(F.max(src, dim=0)+1))
236 if num_dst is None:
237 num_dst = int(F.as_scalar(F.max(dst, dim=0)+1))
238
239 src = F.zerocopy_to_dgl_ndarray(src)
240 dst = F.zerocopy_to_dgl_ndarray(dst)
241 sorted_status = _CAPI_DGLCOOIsSorted(src, dst, num_src, num_dst)
242
243 row_sorted = sorted_status > 0
244 col_sorted = sorted_status > 1
245
246 return row_sorted, col_sorted
247
248 _init_api("dgl.utils.checks")
249
[end of python/dgl/utils/checks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/dgl/utils/checks.py b/python/dgl/utils/checks.py
--- a/python/dgl/utils/checks.py
+++ b/python/dgl/utils/checks.py
@@ -30,7 +30,7 @@
Data in tensor object.
"""
if F.is_tensor(data):
- if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):
+ if (F.dtype(data) != g.idtype or F.context(data) != g.device) and not g.is_pinned():
raise DGLError('Expect argument "{}" to have data type {} and device '
'context {}. But got {} and {}.'.format(
name, g.idtype, g.device, F.dtype(data), F.context(data)))
| {"golden_diff": "diff --git a/python/dgl/utils/checks.py b/python/dgl/utils/checks.py\n--- a/python/dgl/utils/checks.py\n+++ b/python/dgl/utils/checks.py\n@@ -30,7 +30,7 @@\n Data in tensor object.\n \"\"\"\n if F.is_tensor(data):\n- if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):\n+ if (F.dtype(data) != g.idtype or F.context(data) != g.device) and not g.is_pinned():\n raise DGLError('Expect argument \"{}\" to have data type {} and device '\n 'context {}. But got {} and {}.'.format(\n name, g.idtype, g.device, F.dtype(data), F.context(data)))\n", "issue": "I can't run dgl/examples/pytorch/han/train_sampling.py with cuda\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. I just run dgl/examples/pytorch/han/train_sampling.py with cuda, it has an error\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n<img width=\"1103\" alt=\"image\" src=\"https://user-images.githubusercontent.com/53086386/156488968-32cde64a-1c14-4de7-93de-211ca4dd306e.png\">\r\n\r\n\r\n## Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n - DGL Version: 0.8.0+cu11.1\r\n - Backend Library & Version: PyTorch0.9.1\r\n - OS: Linux\r\n - How you installed DGL: pip\r\n - Build command you used (if compiling from source):\r\n - Python version: 3.7.4\r\n - CUDA/cuDNN version: cuda11.1\r\n - GPU models and configuration: A100\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "\"\"\"Checking and logging utilities.\"\"\"\n# pylint: disable=invalid-name\nfrom __future__ import absolute_import, division\nfrom collections.abc import Mapping\n\nfrom ..base import DGLError\nfrom .._ffi.function import _init_api\nfrom .. import backend as F\n\ndef prepare_tensor(g, data, name):\n \"\"\"Convert the data to ID tensor and check its ID type and context.\n\n If the data is already in tensor type, raise error if its ID type\n and context does not match the graph's.\n Otherwise, convert it to tensor type of the graph's ID type and\n ctx and return.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : int, iterable of int, tensor\n Data.\n name : str\n Name of the data.\n\n Returns\n -------\n Tensor\n Data in tensor object.\n \"\"\"\n if F.is_tensor(data):\n if not g.is_pinned() and (F.dtype(data) != g.idtype or F.context(data) != g.device):\n raise DGLError('Expect argument \"{}\" to have data type {} and device '\n 'context {}. But got {} and {}.'.format(\n name, g.idtype, g.device, F.dtype(data), F.context(data)))\n ret = data\n else:\n data = F.tensor(data)\n if (not (F.ndim(data) > 0 and F.shape(data)[0] == 0) and # empty tensor\n F.dtype(data) not in (F.int32, F.int64)):\n raise DGLError('Expect argument \"{}\" to have data type int32 or int64,'\n ' but got {}.'.format(name, F.dtype(data)))\n ret = F.copy_to(F.astype(data, g.idtype), g.device)\n\n if F.ndim(ret) == 0:\n ret = F.unsqueeze(ret, 0)\n if F.ndim(ret) > 1:\n raise DGLError('Expect a 1-D tensor for argument \"{}\". 
But got {}.'.format(\n name, ret))\n return ret\n\ndef prepare_tensor_dict(g, data, name):\n \"\"\"Convert a dictionary of data to a dictionary of ID tensors.\n\n Calls ``prepare_tensor`` on each key-value pair.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n dict[str, tensor]\n \"\"\"\n return {key : prepare_tensor(g, val, '{}[\"{}\"]'.format(name, key))\n for key, val in data.items()}\n\ndef prepare_tensor_or_dict(g, data, name):\n \"\"\"Convert data to either a tensor or a dictionary depending on input type.\n\n Parameters\n ----------\n g : DGLHeteroGraph\n Graph.\n data : dict[str, (int, iterable of int, tensor)]\n Data dict.\n name : str\n Name of the data.\n\n Returns\n -------\n tensor or dict[str, tensor]\n \"\"\"\n return prepare_tensor_dict(g, data, name) if isinstance(data, Mapping) \\\n else prepare_tensor(g, data, name)\n\ndef parse_edges_arg_to_eid(g, edges, etid, argname='edges'):\n \"\"\"Parse the :attr:`edges` argument and return an edge ID tensor.\n\n The resulting edge ID tensor has the same ID type and device of :attr:`g`.\n\n Parameters\n ----------\n g : DGLGraph\n Graph\n edges : pair of Tensor, Tensor, iterable[int]\n Argument for specifying edges.\n etid : int\n Edge type ID.\n argname : str, optional\n Argument name.\n\n Returns\n -------\n Tensor\n Edge ID tensor\n \"\"\"\n if isinstance(edges, tuple):\n u, v = edges\n u = prepare_tensor(g, u, '{}[0]'.format(argname))\n v = prepare_tensor(g, v, '{}[1]'.format(argname))\n eid = g.edge_ids(u, v, etype=g.canonical_etypes[etid])\n else:\n eid = prepare_tensor(g, edges, argname)\n return eid\n\ndef check_all_same_idtype(glist, name):\n \"\"\"Check all the graphs have the same idtype.\"\"\"\n if len(glist) == 0:\n return\n idtype = glist[0].idtype\n for i, g in enumerate(glist):\n if g.idtype != idtype:\n raise DGLError('Expect {}[{}] to have {} type ID, but got {}.'.format(\n name, i, idtype, g.idtype))\n\ndef check_device(data, device):\n \"\"\"Check if data is on the target device.\n\n Parameters\n ----------\n data : Tensor or dict[str, Tensor]\n device: Backend device.\n\n Returns\n -------\n Bool: True if the data is on the target device.\n \"\"\"\n if isinstance(data, dict):\n for v in data.values():\n if v.device != device:\n return False\n elif data.device != device:\n return False\n return True\n\ndef check_all_same_device(glist, name):\n \"\"\"Check all the graphs have the same device.\"\"\"\n if len(glist) == 0:\n return\n device = glist[0].device\n for i, g in enumerate(glist):\n if g.device != device:\n raise DGLError('Expect {}[{}] to be on device {}, but got {}.'.format(\n name, i, device, g.device))\n\ndef check_all_same_schema(schemas, name):\n \"\"\"Check the list of schemas are the same.\"\"\"\n if len(schemas) == 0:\n return\n\n for i, schema in enumerate(schemas):\n if schema != schemas[0]:\n raise DGLError(\n 'Expect all graphs to have the same schema on {}, '\n 'but graph {} got\\n\\t{}\\nwhich is different from\\n\\t{}.'.format(\n name, i, schema, schemas[0]))\n\ndef check_all_same_schema_for_keys(schemas, keys, name):\n \"\"\"Check the list of schemas are the same on the given keys.\"\"\"\n if len(schemas) == 0:\n return\n\n head = None\n keys = set(keys)\n for i, schema in enumerate(schemas):\n if not keys.issubset(schema.keys()):\n raise DGLError(\n 'Expect all graphs to have keys {} on {}, '\n 'but graph {} got keys {}.'.format(\n keys, name, i, 
schema.keys()))\n\n if head is None:\n head = {k: schema[k] for k in keys}\n else:\n target = {k: schema[k] for k in keys}\n if target != head:\n raise DGLError(\n 'Expect all graphs to have the same schema for keys {} on {}, '\n 'but graph {} got \\n\\t{}\\n which is different from\\n\\t{}.'.format(\n keys, name, i, target, head))\n\ndef check_valid_idtype(idtype):\n \"\"\"Check whether the value of the idtype argument is valid (int32/int64)\n\n Parameters\n ----------\n idtype : data type\n The framework object of a data type.\n \"\"\"\n if idtype not in [None, F.int32, F.int64]:\n raise DGLError('Expect idtype to be a framework object of int32/int64, '\n 'got {}'.format(idtype))\n\ndef is_sorted_srcdst(src, dst, num_src=None, num_dst=None):\n \"\"\"Checks whether an edge list is in ascending src-major order (e.g., first\n sorted by ``src`` and then by ``dst``).\n\n Parameters\n ----------\n src : IdArray\n The tensor of source nodes for each edge.\n dst : IdArray\n The tensor of destination nodes for each edge.\n num_src : int, optional\n The number of source nodes.\n num_dst : int, optional\n The number of destination nodes.\n\n Returns\n -------\n bool, bool\n Whether ``src`` is in ascending order, and whether ``dst`` is\n in ascending order with respect to ``src``.\n \"\"\"\n # for some versions of MXNET and TensorFlow, num_src and num_dst get\n # incorrectly marked as floats, so force them as integers here\n if num_src is None:\n num_src = int(F.as_scalar(F.max(src, dim=0)+1))\n if num_dst is None:\n num_dst = int(F.as_scalar(F.max(dst, dim=0)+1))\n\n src = F.zerocopy_to_dgl_ndarray(src)\n dst = F.zerocopy_to_dgl_ndarray(dst)\n sorted_status = _CAPI_DGLCOOIsSorted(src, dst, num_src, num_dst)\n\n row_sorted = sorted_status > 0\n col_sorted = sorted_status > 1\n\n return row_sorted, col_sorted\n\n_init_api(\"dgl.utils.checks\")\n", "path": "python/dgl/utils/checks.py"}]} | 3,449 | 170 |
gh_patches_debug_3171 | rasdani/github-patches | git_diff | svthalia__concrexit-1743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot enter float number as contribution during benefactor renewal
### Describe the bug
Cannot enter float number as contribution during benefactor renewal
### How to reproduce
Steps to reproduce the behaviour:
1. Go to http://localhost:8000/user/membership/
2. Add a benefactor membership renewal
3. The form will not error when trying to send
### Expected behaviour
Can enter float numbers as contribution. Like 7.69.
### Screenshots
<img width="691" alt="Screenshot 2021-06-20 at 18 48 35" src="https://user-images.githubusercontent.com/1799914/122682192-4edaf880-d1f8-11eb-99e2-26eaf5379ae8.png">
</issue>
<code>
[start of website/registrations/forms.py]
1 """The forms defined by the registrations package."""
2 from django import forms
3 from django.core.exceptions import NON_FIELD_ERRORS, ValidationError
4 from django.forms import TypedChoiceField
5 from django.urls import reverse_lazy
6 from django.utils import timezone
7 from django.utils.safestring import mark_safe
8 from django.utils.text import capfirst
9 from django.utils.translation import gettext_lazy as _
10
11 from members.models import Membership
12 from payments.widgets import SignatureWidget
13 from registrations import services
14 from .models import Registration, Renewal, Reference
15
16
17 class BaseRegistrationForm(forms.ModelForm):
18 """Base form for membership registrations."""
19
20 birthday = forms.DateField(
21 widget=forms.widgets.SelectDateWidget(
22 years=range(timezone.now().year - 50, timezone.now().year - 10)
23 ),
24 label=capfirst(_("birthday")),
25 )
26
27 privacy_policy = forms.BooleanField(required=True,)
28
29 def __init__(self, *args, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.fields["privacy_policy"].label = mark_safe(
32 _('I accept the <a href="{}">privacy policy</a>.').format(
33 reverse_lazy("singlepages:privacy-policy")
34 )
35 )
36
37
38 class RegistrationAdminForm(forms.ModelForm):
39 """Custom admin form for Registration model to add the widget for the signature."""
40
41 class Meta:
42 fields = "__all__"
43 model = Registration
44 widgets = {
45 "signature": SignatureWidget(),
46 }
47
48
49 class MemberRegistrationForm(BaseRegistrationForm):
50 """Form for member registrations."""
51
52 this_year = timezone.now().year
53 years = reversed(
54 [(x, "{} - {}".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]
55 )
56
57 starting_year = TypedChoiceField(
58 choices=years,
59 coerce=int,
60 empty_value=this_year,
61 required=False,
62 help_text=_("What lecture year did you start studying at Radboud University?"),
63 )
64
65 class Meta:
66 model = Registration
67 widgets = {
68 "signature": SignatureWidget(),
69 }
70 fields = (
71 "length",
72 "first_name",
73 "last_name",
74 "birthday",
75 "email",
76 "phone_number",
77 "student_number",
78 "programme",
79 "starting_year",
80 "address_street",
81 "address_street2",
82 "address_postal_code",
83 "address_city",
84 "address_country",
85 "optin_birthday",
86 "optin_mailinglist",
87 "membership_type",
88 "direct_debit",
89 "initials",
90 "iban",
91 "bic",
92 "signature",
93 )
94
95
96 class BenefactorRegistrationForm(BaseRegistrationForm):
97 """Form for benefactor registrations."""
98
99 icis_employee = forms.BooleanField(
100 required=False, label=_("I am an employee of iCIS")
101 )
102
103 class Meta:
104 model = Registration
105 widgets = {
106 "signature": SignatureWidget(),
107 }
108 fields = (
109 "length",
110 "first_name",
111 "last_name",
112 "birthday",
113 "email",
114 "phone_number",
115 "student_number",
116 "address_street",
117 "address_street2",
118 "address_postal_code",
119 "address_city",
120 "address_country",
121 "optin_birthday",
122 "optin_mailinglist",
123 "contribution",
124 "membership_type",
125 "direct_debit",
126 "initials",
127 "iban",
128 "bic",
129 "signature",
130 )
131
132
133 class RenewalForm(forms.ModelForm):
134 """Form for membership renewals."""
135
136 privacy_policy = forms.BooleanField(required=True,)
137
138 icis_employee = forms.BooleanField(
139 required=False, label=_("I am an employee of iCIS")
140 )
141
142 contribution = forms.IntegerField(required=False,)
143
144 def __init__(self, *args, **kwargs):
145 super().__init__(*args, **kwargs)
146 self.fields["privacy_policy"].label = mark_safe(
147 _('I accept the <a href="{}">privacy policy</a>.').format(
148 reverse_lazy("singlepages:privacy-policy")
149 )
150 )
151
152 class Meta:
153 model = Renewal
154 fields = (
155 "member",
156 "length",
157 "contribution",
158 "membership_type",
159 "no_references",
160 "remarks",
161 )
162
163
164 class ReferenceForm(forms.ModelForm):
165 def clean(self):
166 super().clean()
167 membership = self.cleaned_data["member"].current_membership
168 if membership and membership.type == Membership.BENEFACTOR:
169 raise ValidationError(_("Benefactors cannot give references."))
170
171 membership = self.cleaned_data["member"].latest_membership
172 if (
173 membership
174 and membership.until
175 and membership.until < services.calculate_membership_since()
176 ):
177 raise ValidationError(
178 _(
179 "It's not possible to give references for "
180 "memberships that start after your own "
181 "membership's end."
182 )
183 )
184
185 class Meta:
186 model = Reference
187 fields = "__all__"
188 error_messages = {
189 NON_FIELD_ERRORS: {
190 "unique_together": _(
191 "You've already given a reference for this person."
192 ),
193 }
194 }
195
[end of website/registrations/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/registrations/forms.py b/website/registrations/forms.py
--- a/website/registrations/forms.py
+++ b/website/registrations/forms.py
@@ -139,7 +139,7 @@
required=False, label=_("I am an employee of iCIS")
)
- contribution = forms.IntegerField(required=False,)
+ contribution = forms.DecimalField(required=False, max_digits=5, decimal_places=2,)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
| {"golden_diff": "diff --git a/website/registrations/forms.py b/website/registrations/forms.py\n--- a/website/registrations/forms.py\n+++ b/website/registrations/forms.py\n@@ -139,7 +139,7 @@\n required=False, label=_(\"I am an employee of iCIS\")\n )\n \n- contribution = forms.IntegerField(required=False,)\n+ contribution = forms.DecimalField(required=False, max_digits=5, decimal_places=2,)\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n", "issue": "Cannot enter float number as contribution during benefactor renewal\n### Describe the bug\r\nCannot enter float number as contribution during benefactor renewal\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to http://localhost:8000/user/membership/\r\n2. Add a benefactor membership renewal \r\n3. The form will not error when trying to send\r\n\r\n### Expected behaviour\r\nCan enter float numbers as contribution. Like 7.69.\r\n\r\n### Screenshots\r\n<img width=\"691\" alt=\"Screenshot 2021-06-20 at 18 48 35\" src=\"https://user-images.githubusercontent.com/1799914/122682192-4edaf880-d1f8-11eb-99e2-26eaf5379ae8.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"The forms defined by the registrations package.\"\"\"\nfrom django import forms\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.forms import TypedChoiceField\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import capfirst\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Membership\nfrom payments.widgets import SignatureWidget\nfrom registrations import services\nfrom .models import Registration, Renewal, Reference\n\n\nclass BaseRegistrationForm(forms.ModelForm):\n \"\"\"Base form for membership registrations.\"\"\"\n\n birthday = forms.DateField(\n widget=forms.widgets.SelectDateWidget(\n years=range(timezone.now().year - 50, timezone.now().year - 10)\n ),\n label=capfirst(_(\"birthday\")),\n )\n\n privacy_policy = forms.BooleanField(required=True,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n\nclass RegistrationAdminForm(forms.ModelForm):\n \"\"\"Custom admin form for Registration model to add the widget for the signature.\"\"\"\n\n class Meta:\n fields = \"__all__\"\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n\n\nclass MemberRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for member registrations.\"\"\"\n\n this_year = timezone.now().year\n years = reversed(\n [(x, \"{} - {}\".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]\n )\n\n starting_year = TypedChoiceField(\n choices=years,\n coerce=int,\n empty_value=this_year,\n required=False,\n help_text=_(\"What lecture year did you start studying at Radboud University?\"),\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"programme\",\n \"starting_year\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"membership_type\",\n \"direct_debit\",\n 
\"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass BenefactorRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for benefactor registrations.\"\"\"\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass RenewalForm(forms.ModelForm):\n \"\"\"Form for membership renewals.\"\"\"\n\n privacy_policy = forms.BooleanField(required=True,)\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.IntegerField(required=False,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n class Meta:\n model = Renewal\n fields = (\n \"member\",\n \"length\",\n \"contribution\",\n \"membership_type\",\n \"no_references\",\n \"remarks\",\n )\n\n\nclass ReferenceForm(forms.ModelForm):\n def clean(self):\n super().clean()\n membership = self.cleaned_data[\"member\"].current_membership\n if membership and membership.type == Membership.BENEFACTOR:\n raise ValidationError(_(\"Benefactors cannot give references.\"))\n\n membership = self.cleaned_data[\"member\"].latest_membership\n if (\n membership\n and membership.until\n and membership.until < services.calculate_membership_since()\n ):\n raise ValidationError(\n _(\n \"It's not possible to give references for \"\n \"memberships that start after your own \"\n \"membership's end.\"\n )\n )\n\n class Meta:\n model = Reference\n fields = \"__all__\"\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\n \"You've already given a reference for this person.\"\n ),\n }\n }\n", "path": "website/registrations/forms.py"}]} | 2,304 | 124 |
gh_patches_debug_26779 | rasdani/github-patches | git_diff | streamlit__streamlit-5029 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
List supported languages for syntax highlighting with `st.code`
**Link to doc page in question (if any):** https://docs.streamlit.io/library/api-reference/text/st.code
**Name of the Streamlit feature whose docs need improvement:** `st.code`
**What you think the docs should say:** The docs should provide a list of all supported languages for syntax highlighting.
</issue>
<code>
[start of lib/streamlit/elements/markdown.py]
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import cast, Optional, TYPE_CHECKING, Union
16
17 from streamlit import type_util
18 from streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto
19 from .utils import clean_text
20
21 if TYPE_CHECKING:
22 import sympy
23
24 from streamlit.delta_generator import DeltaGenerator
25
26
27 class MarkdownMixin:
28 def markdown(self, body: str, unsafe_allow_html: bool = False) -> "DeltaGenerator":
29 """Display string formatted as Markdown.
30
31 Parameters
32 ----------
33 body : str
34 The string to display as Github-flavored Markdown. Syntax
35 information can be found at: https://github.github.com/gfm.
36
37 This also supports:
38
39 * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.
40 For a list of all supported codes,
41 see https://share.streamlit.io/streamlit/emoji-shortcodes.
42
43 * LaTeX expressions, by wrapping them in "$" or "$$" (the "$$"
44 must be on their own lines). Supported LaTeX functions are listed
45 at https://katex.org/docs/supported.html.
46
47 unsafe_allow_html : bool
48 By default, any HTML tags found in the body will be escaped and
49 therefore treated as pure text. This behavior may be turned off by
50 setting this argument to True.
51
52 That said, we *strongly advise against it*. It is hard to write
53 secure HTML, so by using this argument you may be compromising your
54 users' security. For more information, see:
55
56 https://github.com/streamlit/streamlit/issues/152
57
58 *Also note that `unsafe_allow_html` is a temporary measure and may
59 be removed from Streamlit at any time.*
60
61 If you decide to turn on HTML anyway, we ask you to please tell us
62 your exact use case here:
63
64 https://discuss.streamlit.io/t/96
65
66 This will help us come up with safe APIs that allow you to do what
67 you want.
68
69 Example
70 -------
71 >>> st.markdown('Streamlit is **_really_ cool**.')
72
73 """
74 markdown_proto = MarkdownProto()
75
76 markdown_proto.body = clean_text(body)
77 markdown_proto.allow_html = unsafe_allow_html
78
79 return self.dg._enqueue("markdown", markdown_proto)
80
81 def header(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
82 """Display text in header formatting.
83
84 Parameters
85 ----------
86 body : str
87 The text to display.
88
89 anchor : str
90 The anchor name of the header that can be accessed with #anchor
91 in the URL. If omitted, it generates an anchor using the body.
92
93 Example
94 -------
95 >>> st.header('This is a header')
96
97 """
98 header_proto = MarkdownProto()
99 if anchor is None:
100 header_proto.body = f"## {clean_text(body)}"
101 else:
102 header_proto.body = f'<h2 data-anchor="{anchor}">{clean_text(body)}</h2>'
103 header_proto.allow_html = True
104 return self.dg._enqueue("markdown", header_proto)
105
106 def subheader(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
107 """Display text in subheader formatting.
108
109 Parameters
110 ----------
111 body : str
112 The text to display.
113
114 anchor : str
115 The anchor name of the header that can be accessed with #anchor
116 in the URL. If omitted, it generates an anchor using the body.
117
118 Example
119 -------
120 >>> st.subheader('This is a subheader')
121
122 """
123 subheader_proto = MarkdownProto()
124 if anchor is None:
125 subheader_proto.body = f"### {clean_text(body)}"
126 else:
127 subheader_proto.body = f'<h3 data-anchor="{anchor}">{clean_text(body)}</h3>'
128 subheader_proto.allow_html = True
129
130 return self.dg._enqueue("markdown", subheader_proto)
131
132 def code(self, body: str, language: Optional[str] = "python") -> "DeltaGenerator":
133 """Display a code block with optional syntax highlighting.
134
135 (This is a convenience wrapper around `st.markdown()`)
136
137 Parameters
138 ----------
139 body : str
140 The string to display as code.
141
142 language : str
143 The language that the code is written in, for syntax highlighting.
144 If omitted, the code will be unstyled.
145
146 Example
147 -------
148 >>> code = '''def hello():
149 ... print("Hello, Streamlit!")'''
150 >>> st.code(code, language='python')
151
152 """
153 code_proto = MarkdownProto()
154 markdown = "```%(language)s\n%(body)s\n```" % {
155 "language": language or "",
156 "body": body,
157 }
158 code_proto.body = clean_text(markdown)
159 return self.dg._enqueue("markdown", code_proto)
160
161 def title(self, body: str, anchor: Optional[str] = None) -> "DeltaGenerator":
162 """Display text in title formatting.
163
164 Each document should have a single `st.title()`, although this is not
165 enforced.
166
167 Parameters
168 ----------
169 body : str
170 The text to display.
171
172 anchor : str
173 The anchor name of the header that can be accessed with #anchor
174 in the URL. If omitted, it generates an anchor using the body.
175
176 Example
177 -------
178 >>> st.title('This is a title')
179
180 """
181 title_proto = MarkdownProto()
182 if anchor is None:
183 title_proto.body = f"# {clean_text(body)}"
184 else:
185 title_proto.body = f'<h1 data-anchor="{anchor}">{clean_text(body)}</h1>'
186 title_proto.allow_html = True
187 return self.dg._enqueue("markdown", title_proto)
188
189 def caption(self, body: str, unsafe_allow_html: bool = False) -> "DeltaGenerator":
190 """Display text in small font.
191
192 This should be used for captions, asides, footnotes, sidenotes, and
193 other explanatory text.
194
195 Parameters
196 ----------
197 body : str
198 The text to display.
199
200 unsafe_allow_html : bool
201 By default, any HTML tags found in strings will be escaped and
202 therefore treated as pure text. This behavior may be turned off by
203 setting this argument to True.
204
205 That said, *we strongly advise against it*. It is hard to write secure
206 HTML, so by using this argument you may be compromising your users'
207 security. For more information, see:
208
209 https://github.com/streamlit/streamlit/issues/152
210
211 **Also note that `unsafe_allow_html` is a temporary measure and may be
212 removed from Streamlit at any time.**
213
214 If you decide to turn on HTML anyway, we ask you to please tell us your
215 exact use case here:
216 https://discuss.streamlit.io/t/96 .
217
218 This will help us come up with safe APIs that allow you to do what you
219 want.
220
221 Example
222 -------
223 >>> st.caption('This is a string that explains something above.')
224
225 """
226 caption_proto = MarkdownProto()
227 caption_proto.body = clean_text(body)
228 caption_proto.allow_html = unsafe_allow_html
229 caption_proto.is_caption = True
230 return self.dg._enqueue("markdown", caption_proto)
231
232 def latex(self, body: Union[str, "sympy.Expr"]) -> "DeltaGenerator":
233 # This docstring needs to be "raw" because of the backslashes in the
234 # example below.
235 r"""Display mathematical expressions formatted as LaTeX.
236
237 Supported LaTeX functions are listed at
238 https://katex.org/docs/supported.html.
239
240 Parameters
241 ----------
242 body : str or SymPy expression
243 The string or SymPy expression to display as LaTeX. If str, it's
244 a good idea to use raw Python strings since LaTeX uses backslashes
245 a lot.
246
247
248 Example
249 -------
250 >>> st.latex(r'''
251 ... a + ar + a r^2 + a r^3 + \cdots + a r^{n-1} =
252 ... \sum_{k=0}^{n-1} ar^k =
253 ... a \left(\frac{1-r^{n}}{1-r}\right)
254 ... ''')
255
256 """
257 if type_util.is_sympy_expession(body):
258 import sympy
259
260 body = sympy.latex(body)
261
262 latex_proto = MarkdownProto()
263 latex_proto.body = "$$\n%s\n$$" % clean_text(body)
264 return self.dg._enqueue("markdown", latex_proto)
265
266 @property
267 def dg(self) -> "DeltaGenerator":
268 """Get our DeltaGenerator."""
269 return cast("DeltaGenerator", self)
270
[end of lib/streamlit/elements/markdown.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/streamlit/elements/markdown.py b/lib/streamlit/elements/markdown.py
--- a/lib/streamlit/elements/markdown.py
+++ b/lib/streamlit/elements/markdown.py
@@ -36,7 +36,7 @@
This also supports:
- * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.
+ * Emoji shortcodes, such as ``:+1:`` and ``:sunglasses:``.
For a list of all supported codes,
see https://share.streamlit.io/streamlit/emoji-shortcodes.
@@ -55,7 +55,7 @@
https://github.com/streamlit/streamlit/issues/152
- *Also note that `unsafe_allow_html` is a temporary measure and may
+ *Also note that ``unsafe_allow_html`` is a temporary measure and may
be removed from Streamlit at any time.*
If you decide to turn on HTML anyway, we ask you to please tell us
@@ -143,6 +143,10 @@
The language that the code is written in, for syntax highlighting.
If omitted, the code will be unstyled.
+ For a list of available ``language`` imports, see:
+
+ https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/AVAILABLE_LANGUAGES_PRISM.MD
+
Example
-------
>>> code = '''def hello():
| {"golden_diff": "diff --git a/lib/streamlit/elements/markdown.py b/lib/streamlit/elements/markdown.py\n--- a/lib/streamlit/elements/markdown.py\n+++ b/lib/streamlit/elements/markdown.py\n@@ -36,7 +36,7 @@\n \n This also supports:\n \n- * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.\n+ * Emoji shortcodes, such as ``:+1:`` and ``:sunglasses:``.\n For a list of all supported codes,\n see https://share.streamlit.io/streamlit/emoji-shortcodes.\n \n@@ -55,7 +55,7 @@\n \n https://github.com/streamlit/streamlit/issues/152\n \n- *Also note that `unsafe_allow_html` is a temporary measure and may\n+ *Also note that ``unsafe_allow_html`` is a temporary measure and may\n be removed from Streamlit at any time.*\n \n If you decide to turn on HTML anyway, we ask you to please tell us\n@@ -143,6 +143,10 @@\n The language that the code is written in, for syntax highlighting.\n If omitted, the code will be unstyled.\n \n+ For a list of available ``language`` imports, see:\n+\n+ https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/AVAILABLE_LANGUAGES_PRISM.MD\n+\n Example\n -------\n >>> code = '''def hello():\n", "issue": "List supported languages for syntax highlighting with `st.code`\n**Link to doc page in question (if any):** https://docs.streamlit.io/library/api-reference/text/st.code\r\n\r\n**Name of the Streamlit feature whose docs need improvement:** `st.code`\r\n\r\n**What you think the docs should say:** The docs should provide a list of all supported languages for syntax highlighting.\r\n\n", "before_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, Optional, TYPE_CHECKING, Union\n\nfrom streamlit import type_util\nfrom streamlit.proto.Markdown_pb2 import Markdown as MarkdownProto\nfrom .utils import clean_text\n\nif TYPE_CHECKING:\n import sympy\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nclass MarkdownMixin:\n def markdown(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display string formatted as Markdown.\n\n Parameters\n ----------\n body : str\n The string to display as Github-flavored Markdown. Syntax\n information can be found at: https://github.github.com/gfm.\n\n This also supports:\n\n * Emoji shortcodes, such as `:+1:` and `:sunglasses:`.\n For a list of all supported codes,\n see https://share.streamlit.io/streamlit/emoji-shortcodes.\n\n * LaTeX expressions, by wrapping them in \"$\" or \"$$\" (the \"$$\"\n must be on their own lines). Supported LaTeX functions are listed\n at https://katex.org/docs/supported.html.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in the body will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, we *strongly advise against it*. It is hard to write\n secure HTML, so by using this argument you may be compromising your\n users' security. 
For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n *Also note that `unsafe_allow_html` is a temporary measure and may\n be removed from Streamlit at any time.*\n\n If you decide to turn on HTML anyway, we ask you to please tell us\n your exact use case here:\n\n https://discuss.streamlit.io/t/96\n\n This will help us come up with safe APIs that allow you to do what\n you want.\n\n Example\n -------\n >>> st.markdown('Streamlit is **_really_ cool**.')\n\n \"\"\"\n markdown_proto = MarkdownProto()\n\n markdown_proto.body = clean_text(body)\n markdown_proto.allow_html = unsafe_allow_html\n\n return self.dg._enqueue(\"markdown\", markdown_proto)\n\n def header(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in header formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.header('This is a header')\n\n \"\"\"\n header_proto = MarkdownProto()\n if anchor is None:\n header_proto.body = f\"## {clean_text(body)}\"\n else:\n header_proto.body = f'<h2 data-anchor=\"{anchor}\">{clean_text(body)}</h2>'\n header_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", header_proto)\n\n def subheader(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in subheader formatting.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.subheader('This is a subheader')\n\n \"\"\"\n subheader_proto = MarkdownProto()\n if anchor is None:\n subheader_proto.body = f\"### {clean_text(body)}\"\n else:\n subheader_proto.body = f'<h3 data-anchor=\"{anchor}\">{clean_text(body)}</h3>'\n subheader_proto.allow_html = True\n\n return self.dg._enqueue(\"markdown\", subheader_proto)\n\n def code(self, body: str, language: Optional[str] = \"python\") -> \"DeltaGenerator\":\n \"\"\"Display a code block with optional syntax highlighting.\n\n (This is a convenience wrapper around `st.markdown()`)\n\n Parameters\n ----------\n body : str\n The string to display as code.\n\n language : str\n The language that the code is written in, for syntax highlighting.\n If omitted, the code will be unstyled.\n\n Example\n -------\n >>> code = '''def hello():\n ... print(\"Hello, Streamlit!\")'''\n >>> st.code(code, language='python')\n\n \"\"\"\n code_proto = MarkdownProto()\n markdown = \"```%(language)s\\n%(body)s\\n```\" % {\n \"language\": language or \"\",\n \"body\": body,\n }\n code_proto.body = clean_text(markdown)\n return self.dg._enqueue(\"markdown\", code_proto)\n\n def title(self, body: str, anchor: Optional[str] = None) -> \"DeltaGenerator\":\n \"\"\"Display text in title formatting.\n\n Each document should have a single `st.title()`, although this is not\n enforced.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n anchor : str\n The anchor name of the header that can be accessed with #anchor\n in the URL. 
If omitted, it generates an anchor using the body.\n\n Example\n -------\n >>> st.title('This is a title')\n\n \"\"\"\n title_proto = MarkdownProto()\n if anchor is None:\n title_proto.body = f\"# {clean_text(body)}\"\n else:\n title_proto.body = f'<h1 data-anchor=\"{anchor}\">{clean_text(body)}</h1>'\n title_proto.allow_html = True\n return self.dg._enqueue(\"markdown\", title_proto)\n\n def caption(self, body: str, unsafe_allow_html: bool = False) -> \"DeltaGenerator\":\n \"\"\"Display text in small font.\n\n This should be used for captions, asides, footnotes, sidenotes, and\n other explanatory text.\n\n Parameters\n ----------\n body : str\n The text to display.\n\n unsafe_allow_html : bool\n By default, any HTML tags found in strings will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, *we strongly advise against it*. It is hard to write secure\n HTML, so by using this argument you may be compromising your users'\n security. For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n **Also note that `unsafe_allow_html` is a temporary measure and may be\n removed from Streamlit at any time.**\n\n If you decide to turn on HTML anyway, we ask you to please tell us your\n exact use case here:\n https://discuss.streamlit.io/t/96 .\n\n This will help us come up with safe APIs that allow you to do what you\n want.\n\n Example\n -------\n >>> st.caption('This is a string that explains something above.')\n\n \"\"\"\n caption_proto = MarkdownProto()\n caption_proto.body = clean_text(body)\n caption_proto.allow_html = unsafe_allow_html\n caption_proto.is_caption = True\n return self.dg._enqueue(\"markdown\", caption_proto)\n\n def latex(self, body: Union[str, \"sympy.Expr\"]) -> \"DeltaGenerator\":\n # This docstring needs to be \"raw\" because of the backslashes in the\n # example below.\n r\"\"\"Display mathematical expressions formatted as LaTeX.\n\n Supported LaTeX functions are listed at\n https://katex.org/docs/supported.html.\n\n Parameters\n ----------\n body : str or SymPy expression\n The string or SymPy expression to display as LaTeX. If str, it's\n a good idea to use raw Python strings since LaTeX uses backslashes\n a lot.\n\n\n Example\n -------\n >>> st.latex(r'''\n ... a + ar + a r^2 + a r^3 + \\cdots + a r^{n-1} =\n ... \\sum_{k=0}^{n-1} ar^k =\n ... a \\left(\\frac{1-r^{n}}{1-r}\\right)\n ... ''')\n\n \"\"\"\n if type_util.is_sympy_expession(body):\n import sympy\n\n body = sympy.latex(body)\n\n latex_proto = MarkdownProto()\n latex_proto.body = \"$$\\n%s\\n$$\" % clean_text(body)\n return self.dg._enqueue(\"markdown\", latex_proto)\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/markdown.py"}]} | 3,367 | 323 |
gh_patches_debug_22906 | rasdani/github-patches | git_diff | xonsh__xonsh-2016 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Troublesome command entering python mode
I'm trying to run this command which I used to use often in zsh:
```
svn diff ./test.py | vim -R -
```
but it keeps trying to evaluate in Python mode.
I can usually wrestle these types of troublesome commands into submission with `$()` and `@()` but I can't figure this one out.
</issue>
<code>
[start of xonsh/commands_cache.py]
1 # -*- coding: utf-8 -*-
2 """Module for caching command & alias names as well as for predicting whether
3 a command will be able to be run in the background.
4
5 A background predictor is a function that accepect a single argument list
6 and returns whethere or not the process can be run in the background (returns
7 True) or must be run the foreground (returns False).
8 """
9 import os
10 import builtins
11 import argparse
12 import collections
13 import collections.abc as cabc
14
15 from xonsh.platform import ON_WINDOWS, pathbasename
16 from xonsh.tools import executables_in
17 from xonsh.lazyasd import lazyobject
18
19
20 class CommandsCache(cabc.Mapping):
21 """A lazy cache representing the commands available on the file system.
22 The keys are the command names and the values a tuple of (loc, has_alias)
23 where loc is either a str pointing to the executable on the file system or
24 None (if no executable exists) and has_alias is a boolean flag for whether
25 the command has an alias.
26 """
27
28 def __init__(self):
29 self._cmds_cache = {}
30 self._path_checksum = None
31 self._alias_checksum = None
32 self._path_mtime = -1
33 self.threadable_predictors = default_threadable_predictors()
34
35 def __contains__(self, key):
36 _ = self.all_commands
37 return self.lazyin(key)
38
39 def __iter__(self):
40 for cmd, (path, is_alias) in self.all_commands.items():
41 if ON_WINDOWS and path is not None:
42 # All comand keys are stored in uppercase on Windows.
43 # This ensures the original command name is returned.
44 cmd = pathbasename(path)
45 yield cmd
46
47 def __len__(self):
48 return len(self.all_commands)
49
50 def __getitem__(self, key):
51 _ = self.all_commands
52 return self.lazyget(key)
53
54 def is_empty(self):
55 """Returns whether the cache is populated or not."""
56 return len(self._cmds_cache) == 0
57
58 @staticmethod
59 def get_possible_names(name):
60 """Generates the possible `PATHEXT` extension variants of a given executable
61 name on Windows as a list, conserving the ordering in `PATHEXT`.
62 Returns a list as `name` being the only item in it on other platforms."""
63 if ON_WINDOWS:
64 pathext = builtins.__xonsh_env__.get('PATHEXT')
65 name = name.upper()
66 return [
67 name + ext
68 for ext in ([''] + pathext)
69 ]
70 else:
71 return [name]
72
73 @property
74 def all_commands(self):
75 paths = builtins.__xonsh_env__.get('PATH', [])
76 pathset = frozenset(x for x in paths if os.path.isdir(x))
77 # did PATH change?
78 path_hash = hash(pathset)
79 cache_valid = path_hash == self._path_checksum
80 self._path_checksum = path_hash
81 # did aliases change?
82 alss = getattr(builtins, 'aliases', set())
83 al_hash = hash(frozenset(alss))
84 cache_valid = cache_valid and al_hash == self._alias_checksum
85 self._alias_checksum = al_hash
86 # did the contents of any directory in PATH change?
87 max_mtime = 0
88 for path in pathset:
89 mtime = os.stat(path).st_mtime
90 if mtime > max_mtime:
91 max_mtime = mtime
92 cache_valid = cache_valid and (max_mtime <= self._path_mtime)
93 self._path_mtime = max_mtime
94 if cache_valid:
95 return self._cmds_cache
96 allcmds = {}
97 for path in reversed(paths):
98 # iterate backwards so that entries at the front of PATH overwrite
99 # entries at the back.
100 for cmd in executables_in(path):
101 key = cmd.upper() if ON_WINDOWS else cmd
102 allcmds[key] = (os.path.join(path, cmd), cmd in alss)
103 for cmd in alss:
104 if cmd not in allcmds:
105 key = cmd.upper() if ON_WINDOWS else cmd
106 allcmds[key] = (cmd, True)
107 self._cmds_cache = allcmds
108 return allcmds
109
110 def cached_name(self, name):
111 """Returns the name that would appear in the cache, if it exists."""
112 if name is None:
113 return None
114 cached = pathbasename(name)
115 if ON_WINDOWS:
116 keys = self.get_possible_names(cached)
117 cached = next((k for k in keys if k in self._cmds_cache), None)
118 return cached
119
120 def lazyin(self, key):
121 """Checks if the value is in the current cache without the potential to
122 update the cache. It just says whether the value is known *now*. This
123 may not reflect precisely what is on the $PATH.
124 """
125 return self.cached_name(key) in self._cmds_cache
126
127 def lazyiter(self):
128 """Returns an iterator over the current cache contents without the
129 potential to update the cache. This may not reflect what is on the
130 $PATH.
131 """
132 return iter(self._cmds_cache)
133
134 def lazylen(self):
135 """Returns the length of the current cache contents without the
136 potential to update the cache. This may not reflect precisely
137 what is on the $PATH.
138 """
139 return len(self._cmds_cache)
140
141 def lazyget(self, key, default=None):
142 """A lazy value getter."""
143 return self._cmds_cache.get(self.cached_name(key), default)
144
145 def locate_binary(self, name):
146 """Locates an executable on the file system using the cache."""
147 # make sure the cache is up to date by accessing the property
148 _ = self.all_commands
149 return self.lazy_locate_binary(name)
150
151 def lazy_locate_binary(self, name):
152 """Locates an executable in the cache, without checking its validity."""
153 possibilities = self.get_possible_names(name)
154 if ON_WINDOWS:
155 # Windows users expect to be able to execute files in the same
156 # directory without `./`
157 local_bin = next((fn for fn in possibilities if os.path.isfile(fn)),
158 None)
159 if local_bin:
160 return os.path.abspath(local_bin)
161 cached = next((cmd for cmd in possibilities if cmd in self._cmds_cache),
162 None)
163 if cached:
164 (path, is_alias) = self._cmds_cache[cached]
165 return path if not is_alias else None
166 elif os.path.isfile(name) and name != pathbasename(name):
167 return name
168
169 def predict_threadable(self, cmd):
170 """Predicts whether a command list is able to be run on a background
171 thread, rather than the main thread.
172 """
173 name = self.cached_name(cmd[0])
174 predictors = self.threadable_predictors
175 if ON_WINDOWS:
176 # On all names (keys) are stored in upper case so instead
177 # we get the original cmd or alias name
178 path, _ = self.lazyget(name, (None, None))
179 if path is None:
180 return True
181 else:
182 name = pathbasename(path)
183 if name not in predictors:
184 pre, ext = os.path.splitext(name)
185 if pre in predictors:
186 predictors[name] = predictors[pre]
187 predictor = predictors[name]
188 return predictor(cmd[1:])
189
190 #
191 # Background Predictors
192 #
193
194
195 def predict_true(args):
196 """Always say the process is threadable."""
197 return True
198
199
200 def predict_false(args):
201 """Never say the process is threadable."""
202 return False
203
204
205 @lazyobject
206 def SHELL_PREDICTOR_PARSER():
207 p = argparse.ArgumentParser('shell', add_help=False)
208 p.add_argument('-c', nargs='?', default=None)
209 p.add_argument('filename', nargs='?', default=None)
210 return p
211
212
213 def predict_shell(args):
214 """Precict the backgroundability of the normal shell interface, which
215 comes down to whether it is being run in subproc mode.
216 """
217 ns, _ = SHELL_PREDICTOR_PARSER.parse_known_args(args)
218 if ns.c is None and ns.filename is None:
219 pred = False
220 else:
221 pred = True
222 return pred
223
224
225 @lazyobject
226 def HELP_VER_PREDICTOR_PARSER():
227 p = argparse.ArgumentParser('cmd', add_help=False)
228 p.add_argument('-h', '--help', dest='help',
229 action='store_true', default=None)
230 p.add_argument('-v', '-V', '--version', dest='version',
231 action='store_true', default=None)
232 return p
233
234
235 def predict_help_ver(args):
236 """Precict the backgroundability of commands that have help & version
237 switches: -h, --help, -v, -V, --version. If either of these options is
238 present, the command is assumed to print to stdout normally and is therefore
239 threadable. Otherwise, the command is assumed to not be threadable.
240 This is useful for commands, like top, that normally enter alternate mode
241 but may not in certain circumstances.
242 """
243 ns, _ = HELP_VER_PREDICTOR_PARSER.parse_known_args(args)
244 pred = ns.help is not None or ns.version is not None
245 return pred
246
247
248 def default_threadable_predictors():
249 """Generates a new defaultdict for known threadable predictors.
250 The default is to predict true.
251 """
252 # alphabetical, for what it is worth.
253 predictors = {
254 'bash': predict_shell,
255 'csh': predict_shell,
256 'clear': predict_false,
257 'cls': predict_false,
258 'cmd': predict_shell,
259 'fish': predict_shell,
260 'htop': predict_help_ver,
261 'ksh': predict_shell,
262 'less': predict_help_ver,
263 'man': predict_help_ver,
264 'more': predict_help_ver,
265 'mvim': predict_help_ver,
266 'mutt': predict_help_ver,
267 'nano': predict_help_ver,
268 'psql': predict_false,
269 'ranger': predict_help_ver,
270 'scp': predict_false,
271 'sh': predict_shell,
272 'ssh': predict_false,
273 'startx': predict_false,
274 'sudo': predict_help_ver,
275 'tcsh': predict_shell,
276 'top': predict_help_ver,
277 'vi': predict_false,
278 'vim': predict_false,
279 'vimpager': predict_help_ver,
280 'xo': predict_help_ver,
281 'xonsh': predict_shell,
282 'zsh': predict_shell,
283 }
284 return collections.defaultdict(lambda: predict_true, predictors)
285
[end of xonsh/commands_cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/commands_cache.py b/xonsh/commands_cache.py
--- a/xonsh/commands_cache.py
+++ b/xonsh/commands_cache.py
@@ -256,6 +256,7 @@
'clear': predict_false,
'cls': predict_false,
'cmd': predict_shell,
+ 'ex': predict_false,
'fish': predict_shell,
'htop': predict_help_ver,
'ksh': predict_shell,
@@ -267,6 +268,8 @@
'nano': predict_help_ver,
'psql': predict_false,
'ranger': predict_help_ver,
+ 'rview': predict_false,
+ 'rvim': predict_false,
'scp': predict_false,
'sh': predict_shell,
'ssh': predict_false,
@@ -275,6 +278,7 @@
'tcsh': predict_shell,
'top': predict_help_ver,
'vi': predict_false,
+ 'view': predict_false,
'vim': predict_false,
'vimpager': predict_help_ver,
'xo': predict_help_ver,
| {"golden_diff": "diff --git a/xonsh/commands_cache.py b/xonsh/commands_cache.py\n--- a/xonsh/commands_cache.py\n+++ b/xonsh/commands_cache.py\n@@ -256,6 +256,7 @@\n 'clear': predict_false,\n 'cls': predict_false,\n 'cmd': predict_shell,\n+ 'ex': predict_false,\n 'fish': predict_shell,\n 'htop': predict_help_ver,\n 'ksh': predict_shell,\n@@ -267,6 +268,8 @@\n 'nano': predict_help_ver,\n 'psql': predict_false,\n 'ranger': predict_help_ver,\n+ 'rview': predict_false,\n+ 'rvim': predict_false,\n 'scp': predict_false,\n 'sh': predict_shell,\n 'ssh': predict_false,\n@@ -275,6 +278,7 @@\n 'tcsh': predict_shell,\n 'top': predict_help_ver,\n 'vi': predict_false,\n+ 'view': predict_false,\n 'vim': predict_false,\n 'vimpager': predict_help_ver,\n 'xo': predict_help_ver,\n", "issue": "Troublesome command entering python mode\nI'm trying to run this command which I used to use often in zsh:\r\n```\r\nsvn diff ./test.py | vim -R -\r\n```\r\nbut it keeps trying to evaluate in Python mode. \r\n\r\nI can usually wrestle these types of troublesome commands into submission with `$()` and `@()` but I can't figure this one out.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Module for caching command & alias names as well as for predicting whether\na command will be able to be run in the background.\n\nA background predictor is a function that accepect a single argument list\nand returns whethere or not the process can be run in the background (returns\nTrue) or must be run the foreground (returns False).\n\"\"\"\nimport os\nimport builtins\nimport argparse\nimport collections\nimport collections.abc as cabc\n\nfrom xonsh.platform import ON_WINDOWS, pathbasename\nfrom xonsh.tools import executables_in\nfrom xonsh.lazyasd import lazyobject\n\n\nclass CommandsCache(cabc.Mapping):\n \"\"\"A lazy cache representing the commands available on the file system.\n The keys are the command names and the values a tuple of (loc, has_alias)\n where loc is either a str pointing to the executable on the file system or\n None (if no executable exists) and has_alias is a boolean flag for whether\n the command has an alias.\n \"\"\"\n\n def __init__(self):\n self._cmds_cache = {}\n self._path_checksum = None\n self._alias_checksum = None\n self._path_mtime = -1\n self.threadable_predictors = default_threadable_predictors()\n\n def __contains__(self, key):\n _ = self.all_commands\n return self.lazyin(key)\n\n def __iter__(self):\n for cmd, (path, is_alias) in self.all_commands.items():\n if ON_WINDOWS and path is not None:\n # All comand keys are stored in uppercase on Windows.\n # This ensures the original command name is returned.\n cmd = pathbasename(path)\n yield cmd\n\n def __len__(self):\n return len(self.all_commands)\n\n def __getitem__(self, key):\n _ = self.all_commands\n return self.lazyget(key)\n\n def is_empty(self):\n \"\"\"Returns whether the cache is populated or not.\"\"\"\n return len(self._cmds_cache) == 0\n\n @staticmethod\n def get_possible_names(name):\n \"\"\"Generates the possible `PATHEXT` extension variants of a given executable\n name on Windows as a list, conserving the ordering in `PATHEXT`.\n Returns a list as `name` being the only item in it on other platforms.\"\"\"\n if ON_WINDOWS:\n pathext = builtins.__xonsh_env__.get('PATHEXT')\n name = name.upper()\n return [\n name + ext\n for ext in ([''] + pathext)\n ]\n else:\n return [name]\n\n @property\n def all_commands(self):\n paths = builtins.__xonsh_env__.get('PATH', [])\n pathset = frozenset(x for x in paths if 
os.path.isdir(x))\n # did PATH change?\n path_hash = hash(pathset)\n cache_valid = path_hash == self._path_checksum\n self._path_checksum = path_hash\n # did aliases change?\n alss = getattr(builtins, 'aliases', set())\n al_hash = hash(frozenset(alss))\n cache_valid = cache_valid and al_hash == self._alias_checksum\n self._alias_checksum = al_hash\n # did the contents of any directory in PATH change?\n max_mtime = 0\n for path in pathset:\n mtime = os.stat(path).st_mtime\n if mtime > max_mtime:\n max_mtime = mtime\n cache_valid = cache_valid and (max_mtime <= self._path_mtime)\n self._path_mtime = max_mtime\n if cache_valid:\n return self._cmds_cache\n allcmds = {}\n for path in reversed(paths):\n # iterate backwards so that entries at the front of PATH overwrite\n # entries at the back.\n for cmd in executables_in(path):\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (os.path.join(path, cmd), cmd in alss)\n for cmd in alss:\n if cmd not in allcmds:\n key = cmd.upper() if ON_WINDOWS else cmd\n allcmds[key] = (cmd, True)\n self._cmds_cache = allcmds\n return allcmds\n\n def cached_name(self, name):\n \"\"\"Returns the name that would appear in the cache, if it exists.\"\"\"\n if name is None:\n return None\n cached = pathbasename(name)\n if ON_WINDOWS:\n keys = self.get_possible_names(cached)\n cached = next((k for k in keys if k in self._cmds_cache), None)\n return cached\n\n def lazyin(self, key):\n \"\"\"Checks if the value is in the current cache without the potential to\n update the cache. It just says whether the value is known *now*. This\n may not reflect precisely what is on the $PATH.\n \"\"\"\n return self.cached_name(key) in self._cmds_cache\n\n def lazyiter(self):\n \"\"\"Returns an iterator over the current cache contents without the\n potential to update the cache. This may not reflect what is on the\n $PATH.\n \"\"\"\n return iter(self._cmds_cache)\n\n def lazylen(self):\n \"\"\"Returns the length of the current cache contents without the\n potential to update the cache. 
This may not reflect precisely\n what is on the $PATH.\n \"\"\"\n return len(self._cmds_cache)\n\n def lazyget(self, key, default=None):\n \"\"\"A lazy value getter.\"\"\"\n return self._cmds_cache.get(self.cached_name(key), default)\n\n def locate_binary(self, name):\n \"\"\"Locates an executable on the file system using the cache.\"\"\"\n # make sure the cache is up to date by accessing the property\n _ = self.all_commands\n return self.lazy_locate_binary(name)\n\n def lazy_locate_binary(self, name):\n \"\"\"Locates an executable in the cache, without checking its validity.\"\"\"\n possibilities = self.get_possible_names(name)\n if ON_WINDOWS:\n # Windows users expect to be able to execute files in the same\n # directory without `./`\n local_bin = next((fn for fn in possibilities if os.path.isfile(fn)),\n None)\n if local_bin:\n return os.path.abspath(local_bin)\n cached = next((cmd for cmd in possibilities if cmd in self._cmds_cache),\n None)\n if cached:\n (path, is_alias) = self._cmds_cache[cached]\n return path if not is_alias else None\n elif os.path.isfile(name) and name != pathbasename(name):\n return name\n\n def predict_threadable(self, cmd):\n \"\"\"Predicts whether a command list is able to be run on a background\n thread, rather than the main thread.\n \"\"\"\n name = self.cached_name(cmd[0])\n predictors = self.threadable_predictors\n if ON_WINDOWS:\n # On all names (keys) are stored in upper case so instead\n # we get the original cmd or alias name\n path, _ = self.lazyget(name, (None, None))\n if path is None:\n return True\n else:\n name = pathbasename(path)\n if name not in predictors:\n pre, ext = os.path.splitext(name)\n if pre in predictors:\n predictors[name] = predictors[pre]\n predictor = predictors[name]\n return predictor(cmd[1:])\n\n#\n# Background Predictors\n#\n\n\ndef predict_true(args):\n \"\"\"Always say the process is threadable.\"\"\"\n return True\n\n\ndef predict_false(args):\n \"\"\"Never say the process is threadable.\"\"\"\n return False\n\n\n@lazyobject\ndef SHELL_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('shell', add_help=False)\n p.add_argument('-c', nargs='?', default=None)\n p.add_argument('filename', nargs='?', default=None)\n return p\n\n\ndef predict_shell(args):\n \"\"\"Precict the backgroundability of the normal shell interface, which\n comes down to whether it is being run in subproc mode.\n \"\"\"\n ns, _ = SHELL_PREDICTOR_PARSER.parse_known_args(args)\n if ns.c is None and ns.filename is None:\n pred = False\n else:\n pred = True\n return pred\n\n\n@lazyobject\ndef HELP_VER_PREDICTOR_PARSER():\n p = argparse.ArgumentParser('cmd', add_help=False)\n p.add_argument('-h', '--help', dest='help',\n action='store_true', default=None)\n p.add_argument('-v', '-V', '--version', dest='version',\n action='store_true', default=None)\n return p\n\n\ndef predict_help_ver(args):\n \"\"\"Precict the backgroundability of commands that have help & version\n switches: -h, --help, -v, -V, --version. If either of these options is\n present, the command is assumed to print to stdout normally and is therefore\n threadable. 
Otherwise, the command is assumed to not be threadable.\n This is useful for commands, like top, that normally enter alternate mode\n but may not in certain circumstances.\n \"\"\"\n ns, _ = HELP_VER_PREDICTOR_PARSER.parse_known_args(args)\n pred = ns.help is not None or ns.version is not None\n return pred\n\n\ndef default_threadable_predictors():\n \"\"\"Generates a new defaultdict for known threadable predictors.\n The default is to predict true.\n \"\"\"\n # alphabetical, for what it is worth.\n predictors = {\n 'bash': predict_shell,\n 'csh': predict_shell,\n 'clear': predict_false,\n 'cls': predict_false,\n 'cmd': predict_shell,\n 'fish': predict_shell,\n 'htop': predict_help_ver,\n 'ksh': predict_shell,\n 'less': predict_help_ver,\n 'man': predict_help_ver,\n 'more': predict_help_ver,\n 'mvim': predict_help_ver,\n 'mutt': predict_help_ver,\n 'nano': predict_help_ver,\n 'psql': predict_false,\n 'ranger': predict_help_ver,\n 'scp': predict_false,\n 'sh': predict_shell,\n 'ssh': predict_false,\n 'startx': predict_false,\n 'sudo': predict_help_ver,\n 'tcsh': predict_shell,\n 'top': predict_help_ver,\n 'vi': predict_false,\n 'vim': predict_false,\n 'vimpager': predict_help_ver,\n 'xo': predict_help_ver,\n 'xonsh': predict_shell,\n 'zsh': predict_shell,\n }\n return collections.defaultdict(lambda: predict_true, predictors)\n", "path": "xonsh/commands_cache.py"}]} | 3,645 | 255 |
gh_patches_debug_28131 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-530 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for field deprecation
I was looking in the code and I couldn't find a way to mark field as deprecated. I also didn't find any issues regarding this.
I know that it is handled by `deprecation_reason` in graphql's `GraphQLField` but how I mark a field as deprecated in strawberry-graphql?
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/375">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of strawberry/field.py]
1 import dataclasses
2 import inspect
3 from typing import Callable, List, Optional, Type
4
5 from .arguments import get_arguments_from_resolver
6 from .permission import BasePermission
7 from .types.types import FederationFieldParams, FieldDefinition
8 from .utils.str_converters import to_camel_case
9
10
11 class StrawberryField(dataclasses.Field):
12 _field_definition: FieldDefinition
13
14 def __init__(self, field_definition: FieldDefinition):
15 self._field_definition = field_definition
16
17 super().__init__( # type: ignore
18 default=dataclasses.MISSING,
19 default_factory=dataclasses.MISSING,
20 init=field_definition.base_resolver is None,
21 repr=True,
22 hash=None,
23 compare=True,
24 metadata=None,
25 )
26
27 def __call__(self, resolver: Callable) -> Callable:
28 """Migrate the field definition to the resolver"""
29
30 field_definition = self._field_definition
31 # note that field_definition.name is finalized in type_resolver._get_fields
32
33 field_definition.origin_name = resolver.__name__
34 field_definition.origin = resolver
35 field_definition.base_resolver = resolver
36 field_definition.arguments = get_arguments_from_resolver(resolver)
37 field_definition.type = resolver.__annotations__.get("return", None)
38
39 if not inspect.ismethod(resolver):
40 # resolver is a normal function
41 resolver._field_definition = field_definition # type: ignore
42 else:
43 # resolver is a bound method and immutable (most likely a
44 # classmethod or an instance method). We need to monkeypatch its
45 # underlying .__func__ function
46 # https://stackoverflow.com/a/7891681/8134178
47 resolver.__func__._field_definition = field_definition # type:ignore
48
49 return resolver
50
51 def __setattr__(self, name, value):
52 if name == "type":
53 self._field_definition.type = value
54
55 if value and name == "name":
56 if not self._field_definition.origin_name:
57 self._field_definition.origin_name = value
58
59 if not self._field_definition.name:
60 self._field_definition.name = to_camel_case(value)
61
62 return super().__setattr__(name, value)
63
64
65 def field(
66 resolver: Optional[Callable] = None,
67 *,
68 name: Optional[str] = None,
69 is_subscription: bool = False,
70 description: Optional[str] = None,
71 permission_classes: Optional[List[Type[BasePermission]]] = None,
72 federation: Optional[FederationFieldParams] = None
73 ):
74 """Annotates a method or property as a GraphQL field.
75
76 This is normally used inside a type declaration:
77
78 >>> @strawberry.type:
79 >>> class X:
80 >>> field_abc: str = strawberry.field(description="ABC")
81
82 >>> @strawberry.field(description="ABC")
83 >>> def field_with_resolver(self, info) -> str:
84 >>> return "abc"
85
86 it can be used both as decorator and as a normal function.
87 """
88
89 field_definition = FieldDefinition(
90 origin_name=None, # modified by resolver in __call__
91 name=name, # modified by resolver in __call__
92 type=None, # type: ignore
93 origin=resolver, # type: ignore
94 description=description,
95 base_resolver=resolver,
96 is_subscription=is_subscription,
97 permission_classes=permission_classes or [],
98 arguments=[], # modified by resolver in __call__
99 federation=federation or FederationFieldParams(),
100 )
101
102 field_ = StrawberryField(field_definition)
103
104 if resolver:
105 return field_(resolver)
106 return field_
107
[end of strawberry/field.py]
[start of strawberry/types/types.py]
1 import dataclasses
2 from typing import Any, Callable, Dict, List, Optional, Type, Union
3
4 from strawberry.permission import BasePermission
5 from strawberry.union import StrawberryUnion
6
7
8 undefined = object()
9
10
11 @dataclasses.dataclass
12 class FederationTypeParams:
13 keys: List[str] = dataclasses.field(default_factory=list)
14 extend: bool = False
15
16
17 @dataclasses.dataclass
18 class TypeDefinition:
19 name: str
20 is_input: bool
21 is_interface: bool
22 is_generic: bool
23 origin: Type
24 description: Optional[str]
25 federation: FederationTypeParams
26 interfaces: List["TypeDefinition"]
27
28 _fields: List["FieldDefinition"]
29 _type_params: Dict[str, Type] = dataclasses.field(default_factory=dict, init=False)
30
31 def get_field(self, name: str) -> Optional["FieldDefinition"]:
32 return next((field for field in self.fields if field.name == name), None)
33
34 @property
35 def fields(self) -> List["FieldDefinition"]:
36 from .type_resolver import _resolve_types
37
38 return _resolve_types(self._fields)
39
40 @property
41 def type_params(self) -> Dict[str, Type]:
42 if not self._type_params:
43 from .type_resolver import _get_type_params
44
45 self._type_params = _get_type_params(self.fields)
46
47 return self._type_params
48
49
50 @dataclasses.dataclass
51 class ArgumentDefinition:
52 name: Optional[str] = None
53 origin_name: Optional[str] = None
54 type: Optional[Type] = None
55 origin: Optional[Type] = None
56 child: Optional["ArgumentDefinition"] = None
57 is_subscription: bool = False
58 is_optional: bool = False
59 is_child_optional: bool = False
60 is_list: bool = False
61 is_union: bool = False
62 description: Optional[str] = None
63 default_value: Any = undefined
64
65
66 @dataclasses.dataclass
67 class FederationFieldParams:
68 provides: List[str] = dataclasses.field(default_factory=list)
69 requires: List[str] = dataclasses.field(default_factory=list)
70 external: bool = False
71
72
73 @dataclasses.dataclass
74 class FieldDefinition:
75 name: Optional[str]
76 origin_name: Optional[str]
77 type: Optional[Union[Type, StrawberryUnion]]
78 origin: Union[Type, Callable]
79 child: Optional["FieldDefinition"] = None
80 is_subscription: bool = False
81 is_optional: bool = False
82 is_child_optional: bool = False
83 is_list: bool = False
84 is_union: bool = False
85 federation: FederationFieldParams = dataclasses.field(
86 default_factory=FederationFieldParams
87 )
88 arguments: List[ArgumentDefinition] = dataclasses.field(default_factory=list)
89 description: Optional[str] = None
90 base_resolver: Optional[Callable] = None
91 permission_classes: List[Type[BasePermission]] = dataclasses.field(
92 default_factory=list
93 )
94 default_value: Any = undefined
95
[end of strawberry/types/types.py]
[start of strawberry/schema/types/fields.py]
1 import typing
2
3 from graphql import GraphQLField, GraphQLInputField
4
5 from strawberry.arguments import UNSET
6 from strawberry.field import FieldDefinition
7 from strawberry.resolvers import get_resolver
8 from strawberry.types.types import undefined
9
10 from .arguments import convert_arguments
11 from .type import get_graphql_type
12 from .types import Field, TypeMap
13
14
15 def get_field(
16 field: FieldDefinition,
17 is_input: bool,
18 type_map: TypeMap,
19 ) -> Field:
20 graphql_type = get_graphql_type(field, type_map)
21
22 TypeClass: typing.Union[
23 typing.Type[GraphQLInputField], typing.Type[GraphQLField]
24 ] = GraphQLField
25
26 kwargs: typing.Dict[str, typing.Any] = {
27 "description": field.description,
28 }
29
30 resolver = get_resolver(field)
31
32 if is_input:
33 TypeClass = GraphQLInputField
34 if field.default_value not in (undefined, UNSET):
35 kwargs["default_value"] = field.default_value
36 elif field.is_subscription:
37 kwargs["args"] = convert_arguments(field.arguments, type_map)
38 kwargs["subscribe"] = resolver
39 kwargs["resolve"] = lambda event, *args, **kwargs: event
40 else:
41 kwargs["args"] = convert_arguments(field.arguments, type_map)
42 kwargs["resolve"] = resolver
43
44 return TypeClass(graphql_type, **kwargs) # type: ignore
45
[end of strawberry/schema/types/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/field.py b/strawberry/field.py
--- a/strawberry/field.py
+++ b/strawberry/field.py
@@ -69,7 +69,8 @@
is_subscription: bool = False,
description: Optional[str] = None,
permission_classes: Optional[List[Type[BasePermission]]] = None,
- federation: Optional[FederationFieldParams] = None
+ federation: Optional[FederationFieldParams] = None,
+ deprecation_reason: Optional[str] = None,
):
"""Annotates a method or property as a GraphQL field.
@@ -97,6 +98,7 @@
permission_classes=permission_classes or [],
arguments=[], # modified by resolver in __call__
federation=federation or FederationFieldParams(),
+ deprecation_reason=deprecation_reason,
)
field_ = StrawberryField(field_definition)
diff --git a/strawberry/schema/types/fields.py b/strawberry/schema/types/fields.py
--- a/strawberry/schema/types/fields.py
+++ b/strawberry/schema/types/fields.py
@@ -41,4 +41,7 @@
kwargs["args"] = convert_arguments(field.arguments, type_map)
kwargs["resolve"] = resolver
+ if not is_input:
+ kwargs["deprecation_reason"] = field.deprecation_reason
+
return TypeClass(graphql_type, **kwargs) # type: ignore
diff --git a/strawberry/types/types.py b/strawberry/types/types.py
--- a/strawberry/types/types.py
+++ b/strawberry/types/types.py
@@ -92,3 +92,4 @@
default_factory=list
)
default_value: Any = undefined
+ deprecation_reason: Optional[str] = None
| {"golden_diff": "diff --git a/strawberry/field.py b/strawberry/field.py\n--- a/strawberry/field.py\n+++ b/strawberry/field.py\n@@ -69,7 +69,8 @@\n is_subscription: bool = False,\n description: Optional[str] = None,\n permission_classes: Optional[List[Type[BasePermission]]] = None,\n- federation: Optional[FederationFieldParams] = None\n+ federation: Optional[FederationFieldParams] = None,\n+ deprecation_reason: Optional[str] = None,\n ):\n \"\"\"Annotates a method or property as a GraphQL field.\n \n@@ -97,6 +98,7 @@\n permission_classes=permission_classes or [],\n arguments=[], # modified by resolver in __call__\n federation=federation or FederationFieldParams(),\n+ deprecation_reason=deprecation_reason,\n )\n \n field_ = StrawberryField(field_definition)\ndiff --git a/strawberry/schema/types/fields.py b/strawberry/schema/types/fields.py\n--- a/strawberry/schema/types/fields.py\n+++ b/strawberry/schema/types/fields.py\n@@ -41,4 +41,7 @@\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"resolve\"] = resolver\n \n+ if not is_input:\n+ kwargs[\"deprecation_reason\"] = field.deprecation_reason\n+\n return TypeClass(graphql_type, **kwargs) # type: ignore\ndiff --git a/strawberry/types/types.py b/strawberry/types/types.py\n--- a/strawberry/types/types.py\n+++ b/strawberry/types/types.py\n@@ -92,3 +92,4 @@\n default_factory=list\n )\n default_value: Any = undefined\n+ deprecation_reason: Optional[str] = None\n", "issue": "Add support for field deprecation\nI was looking in the code and I couldn't find a way to mark field as deprecated. I also didn't find any issues regarding this.\r\nI know that it is handled by `deprecation_reason` in graphql's `GraphQLField` but how I mark a field as deprecated in strawberry-graphql?\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/375\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/375/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "import dataclasses\nimport inspect\nfrom typing import Callable, List, Optional, Type\n\nfrom .arguments import get_arguments_from_resolver\nfrom .permission import BasePermission\nfrom .types.types import FederationFieldParams, FieldDefinition\nfrom .utils.str_converters import to_camel_case\n\n\nclass StrawberryField(dataclasses.Field):\n _field_definition: FieldDefinition\n\n def __init__(self, field_definition: FieldDefinition):\n self._field_definition = field_definition\n\n super().__init__( # type: ignore\n default=dataclasses.MISSING,\n default_factory=dataclasses.MISSING,\n init=field_definition.base_resolver is None,\n repr=True,\n hash=None,\n compare=True,\n metadata=None,\n )\n\n def __call__(self, resolver: Callable) -> Callable:\n \"\"\"Migrate the field definition to the resolver\"\"\"\n\n field_definition = self._field_definition\n # note that field_definition.name is finalized in type_resolver._get_fields\n\n field_definition.origin_name = resolver.__name__\n field_definition.origin = resolver\n 
field_definition.base_resolver = resolver\n field_definition.arguments = get_arguments_from_resolver(resolver)\n field_definition.type = resolver.__annotations__.get(\"return\", None)\n\n if not inspect.ismethod(resolver):\n # resolver is a normal function\n resolver._field_definition = field_definition # type: ignore\n else:\n # resolver is a bound method and immutable (most likely a\n # classmethod or an instance method). We need to monkeypatch its\n # underlying .__func__ function\n # https://stackoverflow.com/a/7891681/8134178\n resolver.__func__._field_definition = field_definition # type:ignore\n\n return resolver\n\n def __setattr__(self, name, value):\n if name == \"type\":\n self._field_definition.type = value\n\n if value and name == \"name\":\n if not self._field_definition.origin_name:\n self._field_definition.origin_name = value\n\n if not self._field_definition.name:\n self._field_definition.name = to_camel_case(value)\n\n return super().__setattr__(name, value)\n\n\ndef field(\n resolver: Optional[Callable] = None,\n *,\n name: Optional[str] = None,\n is_subscription: bool = False,\n description: Optional[str] = None,\n permission_classes: Optional[List[Type[BasePermission]]] = None,\n federation: Optional[FederationFieldParams] = None\n):\n \"\"\"Annotates a method or property as a GraphQL field.\n\n This is normally used inside a type declaration:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_with_resolver(self, info) -> str:\n >>> return \"abc\"\n\n it can be used both as decorator and as a normal function.\n \"\"\"\n\n field_definition = FieldDefinition(\n origin_name=None, # modified by resolver in __call__\n name=name, # modified by resolver in __call__\n type=None, # type: ignore\n origin=resolver, # type: ignore\n description=description,\n base_resolver=resolver,\n is_subscription=is_subscription,\n permission_classes=permission_classes or [],\n arguments=[], # modified by resolver in __call__\n federation=federation or FederationFieldParams(),\n )\n\n field_ = StrawberryField(field_definition)\n\n if resolver:\n return field_(resolver)\n return field_\n", "path": "strawberry/field.py"}, {"content": "import dataclasses\nfrom typing import Any, Callable, Dict, List, Optional, Type, Union\n\nfrom strawberry.permission import BasePermission\nfrom strawberry.union import StrawberryUnion\n\n\nundefined = object()\n\n\[email protected]\nclass FederationTypeParams:\n keys: List[str] = dataclasses.field(default_factory=list)\n extend: bool = False\n\n\[email protected]\nclass TypeDefinition:\n name: str\n is_input: bool\n is_interface: bool\n is_generic: bool\n origin: Type\n description: Optional[str]\n federation: FederationTypeParams\n interfaces: List[\"TypeDefinition\"]\n\n _fields: List[\"FieldDefinition\"]\n _type_params: Dict[str, Type] = dataclasses.field(default_factory=dict, init=False)\n\n def get_field(self, name: str) -> Optional[\"FieldDefinition\"]:\n return next((field for field in self.fields if field.name == name), None)\n\n @property\n def fields(self) -> List[\"FieldDefinition\"]:\n from .type_resolver import _resolve_types\n\n return _resolve_types(self._fields)\n\n @property\n def type_params(self) -> Dict[str, Type]:\n if not self._type_params:\n from .type_resolver import _get_type_params\n\n self._type_params = _get_type_params(self.fields)\n\n return self._type_params\n\n\[email protected]\nclass ArgumentDefinition:\n name: 
Optional[str] = None\n origin_name: Optional[str] = None\n type: Optional[Type] = None\n origin: Optional[Type] = None\n child: Optional[\"ArgumentDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n description: Optional[str] = None\n default_value: Any = undefined\n\n\[email protected]\nclass FederationFieldParams:\n provides: List[str] = dataclasses.field(default_factory=list)\n requires: List[str] = dataclasses.field(default_factory=list)\n external: bool = False\n\n\[email protected]\nclass FieldDefinition:\n name: Optional[str]\n origin_name: Optional[str]\n type: Optional[Union[Type, StrawberryUnion]]\n origin: Union[Type, Callable]\n child: Optional[\"FieldDefinition\"] = None\n is_subscription: bool = False\n is_optional: bool = False\n is_child_optional: bool = False\n is_list: bool = False\n is_union: bool = False\n federation: FederationFieldParams = dataclasses.field(\n default_factory=FederationFieldParams\n )\n arguments: List[ArgumentDefinition] = dataclasses.field(default_factory=list)\n description: Optional[str] = None\n base_resolver: Optional[Callable] = None\n permission_classes: List[Type[BasePermission]] = dataclasses.field(\n default_factory=list\n )\n default_value: Any = undefined\n", "path": "strawberry/types/types.py"}, {"content": "import typing\n\nfrom graphql import GraphQLField, GraphQLInputField\n\nfrom strawberry.arguments import UNSET\nfrom strawberry.field import FieldDefinition\nfrom strawberry.resolvers import get_resolver\nfrom strawberry.types.types import undefined\n\nfrom .arguments import convert_arguments\nfrom .type import get_graphql_type\nfrom .types import Field, TypeMap\n\n\ndef get_field(\n field: FieldDefinition,\n is_input: bool,\n type_map: TypeMap,\n) -> Field:\n graphql_type = get_graphql_type(field, type_map)\n\n TypeClass: typing.Union[\n typing.Type[GraphQLInputField], typing.Type[GraphQLField]\n ] = GraphQLField\n\n kwargs: typing.Dict[str, typing.Any] = {\n \"description\": field.description,\n }\n\n resolver = get_resolver(field)\n\n if is_input:\n TypeClass = GraphQLInputField\n if field.default_value not in (undefined, UNSET):\n kwargs[\"default_value\"] = field.default_value\n elif field.is_subscription:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"subscribe\"] = resolver\n kwargs[\"resolve\"] = lambda event, *args, **kwargs: event\n else:\n kwargs[\"args\"] = convert_arguments(field.arguments, type_map)\n kwargs[\"resolve\"] = resolver\n\n return TypeClass(graphql_type, **kwargs) # type: ignore\n", "path": "strawberry/schema/types/fields.py"}]} | 3,042 | 402 |
gh_patches_debug_7660 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-447 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings
# Brief Description of Fix
Currently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation by the user, which behavior applies in each function.
In the interim, I propose explicitly clarifying this behavior in each function's docstring so the user has a clear idea of each function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:
"""
Clean column names.
Takes all column names, converts them to lowercase, then replaces all
spaces with underscores. <b>Does not mutate original dataframe.</b>
"""
Happy to add this line somewhere else in the docstring if inappropriate here.
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)
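
For instance, mapped onto `convert_currency` from `janitor/finance.py` below, the note might read like this (a sketch; the signature is abbreviated and the exact wording is only a suggestion):

```python
import pandas as pd
import pandas_flavor as pf


@pf.register_dataframe_method
def convert_currency(df: pd.DataFrame, column_name: str = None, **kwargs) -> pd.DataFrame:
    """
    Converts a column from one currency to another.

    This method mutates the original DataFrame.
    """
```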
</issue>
<code>
[start of janitor/finance.py]
1 """
2 Finance-specific data cleaning functions.
3 """
4
5 import json
6 from datetime import date, datetime
7 from functools import lru_cache
8
9 import pandas as pd
10 import pandas_flavor as pf
11 import requests
12
13 from janitor import check
14
15 from .utils import deprecated_alias
16
17 currency_set = {
18 "AUD",
19 "BGN",
20 "BRL",
21 "CAD",
22 "CHF",
23 "CNY",
24 "CZK",
25 "DKK",
26 "EUR",
27 "GBP",
28 "HKD",
29 "HRK",
30 "HUF",
31 "IDR",
32 "ILS",
33 "INR",
34 "ISK",
35 "JPY",
36 "KRW",
37 "MXN",
38 "MYR",
39 "NOK",
40 "NZD",
41 "PHP",
42 "PLN",
43 "RON",
44 "RUB",
45 "SEK",
46 "SGD",
47 "THB",
48 "TRY",
49 "USD",
50 "ZAR",
51 }
52
53
54 def _check_currency(currency: str):
55 if currency not in currency_set:
56 raise ValueError(
57 f"currency {currency} not in supported currency set, "
58 f"{currency_set}"
59 )
60
61
62 @lru_cache(maxsize=32)
63 def _convert_currency(
64 from_currency: str = None,
65 to_currency: str = None,
66 historical_date: date = None,
67 ):
68 """
69 Currency conversion for Pandas DataFrame column.
70
71 Helper function for `convert_currency` method.
72
73 The API used is: https://exchangeratesapi.io/
74 """
75
76 url = "https://api.exchangeratesapi.io"
77
78 if historical_date:
79 check("historical_date", historical_date, [datetime, date])
80 if isinstance(historical_date, datetime):
81 if historical_date < datetime(1999, 1, 4):
82 raise ValueError(
83 "historical_date:datetime must be later than 1999-01-04!"
84 )
85 string_date = str(historical_date)[:10]
86 else:
87 if historical_date < date(1999, 1, 4):
88 raise ValueError(
89 "historical_date:date must be later than 1999-01-04!"
90 )
91 string_date = str(historical_date)
92 url = url + "/%s" % string_date
93 else:
94 url = url + "/latest"
95
96 _check_currency(from_currency)
97 _check_currency(to_currency)
98
99 payload = {"base": from_currency, "symbols": to_currency}
100
101 result = requests.get(url, params=payload)
102
103 if result.status_code != 200:
104 raise ConnectionError(
105 "Exchange Rate API failed to receive a 200 "
106 "response from the server. "
107 "Please try again later."
108 )
109
110 currency_dict = json.loads(result.text)
111 rate = currency_dict["rates"][to_currency]
112 return rate
113
114
115 @pf.register_dataframe_method
116 @deprecated_alias(colname="column_name")
117 def convert_currency(
118 df: pd.DataFrame,
119 column_name: str = None,
120 from_currency: str = None,
121 to_currency: str = None,
122 historical_date: date = None,
123 make_new_column: bool = False,
124 ) -> pd.DataFrame:
125 """
126 Converts a column from one currency to another, with an option to
127 convert based on historical exchange values.
128
129 :param df: A pandas dataframe.
130 :param column_name: Name of the new column. Should be a string, in order
131 for the column name to be compatible with the Feather binary
132 format (this is a useful thing to have).
133 :param from_currency: The base currency to convert from.
134 May be any of: currency_set = {"AUD", "BGN", "BRL", "CAD", "CHF",
135 "CNY", "CZK", "DKK", "EUR", "GBP", "HKD", "HRK", "HUF", "IDR",
136 "ILS", "INR", "ISK", "JPY", "KRW", "MXN", "MYR", "NOK", "NZD",
137 "PHP", "PLN", "RON", "RUB", "SEK", "SGD", "THB", "TRY", "USD",
138 "ZAR"}
139 :param to_currency: The target currency to convert to.
140 May be any of: currency_set = {"AUD", "BGN", "BRL", "CAD", "CHF",
141 "CNY", "CZK", "DKK", "EUR", "GBP", "HKD", "HRK", "HUF", "IDR",
142 "ILS", "INR", "ISK", "JPY", "KRW", "MXN", "MYR", "NOK", "NZD",
143 "PHP", "PLN", "RON", "RUB", "SEK", "SGD", "THB", "TRY", "USD",
144 "ZAR"}
145 :param historical_date: If supplied, get exchange rate on a certain\
146 date. If not supplied, get the latest exchange rate. The exchange\
147 rates go back to Jan. 4, 1999.
148 :param make_new_column: Generates new column for converted currency if
149 True, otherwise, converts currency in place.
150
151 :Setup:
152
153 .. code-block:: python
154
155 import pandas as pd
156 import janitor
157 from datetime import date
158
159 data_dict = {
160 "a": [1.23452345, 2.456234, 3.2346125] * 3,
161 "Bell__Chart": [1/3, 2/7, 3/2] * 3,
162 "decorated-elephant": [1/234, 2/13, 3/167] * 3,
163 "animals": ["rabbit", "leopard", "lion"] * 3,
164 "cities": ["Cambridge", "Shanghai", "Basel"] * 3,
165 }
166
167 example_dataframe = pd.DataFrame(data_dict)
168
169 :Example: Converting a column from one currency to another using rates
170
171 from 01/01/2018:
172
173 .. code-block:: python
174
175 example_dataframe.convert_currency('a', from_currency='USD',
176 to_currency='EUR', historical_date=date(2018,1,1))
177
178 :Output:
179
180 .. code-block:: python
181
182 a Bell__Chart decorated-elephant animals cities
183 0 1.029370 0.333333 0.004274 rabbit Cambridge
184 1 2.048056 0.285714 0.153846 leopard Shanghai
185 2 2.697084 1.500000 0.017964 lion Basel
186 3 1.029370 0.333333 0.004274 rabbit Cambridge
187 4 2.048056 0.285714 0.153846 leopard Shanghai
188 5 2.697084 1.500000 0.017964 lion Basel
189 6 1.029370 0.333333 0.004274 rabbit Cambridge
190 7 2.048056 0.285714 0.153846 leopard Shanghai
191 8 2.697084 1.500000 0.017964 lion Basel
192
193 """
194
195 rate = _convert_currency(from_currency, to_currency, historical_date)
196
197 if make_new_column:
198 new_column_name = column_name + "_" + to_currency
199 df[new_column_name] = df[column_name] * rate
200
201 else:
202 df[column_name] = df[column_name] * rate
203
204 return df
205
[end of janitor/finance.py]
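
The mutating behavior discussed in the issue can be seen directly with `convert_currency` (a sketch; the conversion call needs network access to the exchange-rate API):

```python
import pandas as pd
import janitor.finance  # noqa: F401  -- importing the module registers the method

df = pd.DataFrame({"price": [1.0, 2.0]})
df.convert_currency("price", from_currency="USD", to_currency="EUR")
print(df["price"])  # the values have changed: the original DataFrame was modified in place
```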
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/janitor/finance.py b/janitor/finance.py
--- a/janitor/finance.py
+++ b/janitor/finance.py
@@ -126,6 +126,8 @@
Converts a column from one currency to another, with an option to
convert based on historical exchange values.
+ This method mutates the original DataFrame.
+
:param df: A pandas dataframe.
:param column_name: Name of the new column. Should be a string, in order
for the column name to be compatible with the Feather binary
| {"golden_diff": "diff --git a/janitor/finance.py b/janitor/finance.py\n--- a/janitor/finance.py\n+++ b/janitor/finance.py\n@@ -126,6 +126,8 @@\n Converts a column from one currency to another, with an option to\n convert based on historical exchange values.\n \n+ This method mutates the original DataFrame.\n+\n :param df: A pandas dataframe.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n", "issue": "[DOC] Clarify original-dataframe mutation behavior in pyjanitor function docstrings\n# Brief Description of Fix\r\nCurrently, some pyjanitor functions mutate the original dataframe and others return a copy. Solutions are heavily discussed in #79 but no conclusion has been reached. At the moment, it is unclear, without experimentation from the user, which behavior applies in each function. \r\n\r\nIn the interim, I propose to explicitly clarify this behavior in each function's docstring so the user has a clear idea regarding the function's mutating behavior. Below is a sample of what this could look like for `.clean_names()`:\r\n\r\n\"\"\"\r\nClean column names.\r\n Takes all column names, converts them to lowercase, then replaces all\r\n spaces with underscores. <b>Does not mutate original dataframe.</b>\r\n\"\"\"\r\n\r\nHappy to add this line somewhere else in the docstring if inappropriate here. \r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/index.html)\r\n- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/janitor/functions.py)\r\n\n", "before_files": [{"content": "\"\"\"\nFinance-specific data cleaning functions.\n\"\"\"\n\nimport json\nfrom datetime import date, datetime\nfrom functools import lru_cache\n\nimport pandas as pd\nimport pandas_flavor as pf\nimport requests\n\nfrom janitor import check\n\nfrom .utils import deprecated_alias\n\ncurrency_set = {\n \"AUD\",\n \"BGN\",\n \"BRL\",\n \"CAD\",\n \"CHF\",\n \"CNY\",\n \"CZK\",\n \"DKK\",\n \"EUR\",\n \"GBP\",\n \"HKD\",\n \"HRK\",\n \"HUF\",\n \"IDR\",\n \"ILS\",\n \"INR\",\n \"ISK\",\n \"JPY\",\n \"KRW\",\n \"MXN\",\n \"MYR\",\n \"NOK\",\n \"NZD\",\n \"PHP\",\n \"PLN\",\n \"RON\",\n \"RUB\",\n \"SEK\",\n \"SGD\",\n \"THB\",\n \"TRY\",\n \"USD\",\n \"ZAR\",\n}\n\n\ndef _check_currency(currency: str):\n if currency not in currency_set:\n raise ValueError(\n f\"currency {currency} not in supported currency set, \"\n f\"{currency_set}\"\n )\n\n\n@lru_cache(maxsize=32)\ndef _convert_currency(\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n):\n \"\"\"\n Currency conversion for Pandas DataFrame column.\n\n Helper function for `convert_currency` method.\n\n The API used is: https://exchangeratesapi.io/\n \"\"\"\n\n url = \"https://api.exchangeratesapi.io\"\n\n if historical_date:\n check(\"historical_date\", historical_date, [datetime, date])\n if isinstance(historical_date, datetime):\n if historical_date < datetime(1999, 1, 4):\n raise ValueError(\n \"historical_date:datetime must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)[:10]\n else:\n if historical_date < date(1999, 1, 4):\n raise ValueError(\n \"historical_date:date must be later than 1999-01-04!\"\n )\n string_date = str(historical_date)\n url = url + \"/%s\" % string_date\n else:\n url = url + \"/latest\"\n\n _check_currency(from_currency)\n _check_currency(to_currency)\n\n payload = {\"base\": from_currency, \"symbols\": to_currency}\n\n result = 
requests.get(url, params=payload)\n\n if result.status_code != 200:\n raise ConnectionError(\n \"Exchange Rate API failed to receive a 200 \"\n \"response from the server. \"\n \"Please try again later.\"\n )\n\n currency_dict = json.loads(result.text)\n rate = currency_dict[\"rates\"][to_currency]\n return rate\n\n\[email protected]_dataframe_method\n@deprecated_alias(colname=\"column_name\")\ndef convert_currency(\n df: pd.DataFrame,\n column_name: str = None,\n from_currency: str = None,\n to_currency: str = None,\n historical_date: date = None,\n make_new_column: bool = False,\n) -> pd.DataFrame:\n \"\"\"\n Converts a column from one currency to another, with an option to\n convert based on historical exchange values.\n\n :param df: A pandas dataframe.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n format (this is a useful thing to have).\n :param from_currency: The base currency to convert from.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param to_currency: The target currency to convert to.\n May be any of: currency_set = {\"AUD\", \"BGN\", \"BRL\", \"CAD\", \"CHF\",\n \"CNY\", \"CZK\", \"DKK\", \"EUR\", \"GBP\", \"HKD\", \"HRK\", \"HUF\", \"IDR\",\n \"ILS\", \"INR\", \"ISK\", \"JPY\", \"KRW\", \"MXN\", \"MYR\", \"NOK\", \"NZD\",\n \"PHP\", \"PLN\", \"RON\", \"RUB\", \"SEK\", \"SGD\", \"THB\", \"TRY\", \"USD\",\n \"ZAR\"}\n :param historical_date: If supplied, get exchange rate on a certain\\\n date. If not supplied, get the latest exchange rate. The exchange\\\n rates go back to Jan. 4, 1999.\n :param make_new_column: Generates new column for converted currency if\n True, otherwise, converts currency in place.\n\n :Setup:\n\n .. code-block:: python\n\n import pandas as pd\n import janitor\n from datetime import date\n\n data_dict = {\n \"a\": [1.23452345, 2.456234, 3.2346125] * 3,\n \"Bell__Chart\": [1/3, 2/7, 3/2] * 3,\n \"decorated-elephant\": [1/234, 2/13, 3/167] * 3,\n \"animals\": [\"rabbit\", \"leopard\", \"lion\"] * 3,\n \"cities\": [\"Cambridge\", \"Shanghai\", \"Basel\"] * 3,\n }\n\n example_dataframe = pd.DataFrame(data_dict)\n\n :Example: Converting a column from one currency to another using rates\n\n from 01/01/2018:\n\n .. code-block:: python\n\n example_dataframe.convert_currency('a', from_currency='USD',\n to_currency='EUR', historical_date=date(2018,1,1))\n\n :Output:\n\n .. code-block:: python\n\n a Bell__Chart decorated-elephant animals cities\n 0 1.029370 0.333333 0.004274 rabbit Cambridge\n 1 2.048056 0.285714 0.153846 leopard Shanghai\n 2 2.697084 1.500000 0.017964 lion Basel\n 3 1.029370 0.333333 0.004274 rabbit Cambridge\n 4 2.048056 0.285714 0.153846 leopard Shanghai\n 5 2.697084 1.500000 0.017964 lion Basel\n 6 1.029370 0.333333 0.004274 rabbit Cambridge\n 7 2.048056 0.285714 0.153846 leopard Shanghai\n 8 2.697084 1.500000 0.017964 lion Basel\n\n \"\"\"\n\n rate = _convert_currency(from_currency, to_currency, historical_date)\n\n if make_new_column:\n new_column_name = column_name + \"_\" + to_currency\n df[new_column_name] = df[column_name] * rate\n\n else:\n df[column_name] = df[column_name] * rate\n\n return df\n", "path": "janitor/finance.py"}]} | 3,147 | 127 |
gh_patches_debug_57471 | rasdani/github-patches | git_diff | d2l-ai__d2l-en-2279 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModuleNotFoundError when running the official pytorch colab notebook

I can replicate the error at multiple official pytorch colab notebooks, e.g.
https://colab.research.google.com/github/d2l-ai/d2l-pytorch-colab/blob/master/chapter_linear-classification/image-classification-dataset.ipynb#scrollTo=ee445cce
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 import d2l
3
4 requirements = [
5 'ipython>=7.23',
6 'jupyter',
7 'numpy',
8 'matplotlib',
9 'requests',
10 'pandas',
11 'gym'
12 ]
13
14 setup(
15 name='d2l',
16 version=d2l.__version__,
17 python_requires='>=3.5',
18 author='D2L Developers',
19 author_email='[email protected]',
20 url='https://d2l.ai',
21 description='Dive into Deep Learning',
22 license='MIT-0',
23 packages=find_packages(),
24 zip_safe=True,
25 install_requires=requirements,
26 )
27
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,10 +2,10 @@
import d2l
requirements = [
- 'ipython>=7.23',
'jupyter',
'numpy',
'matplotlib',
+ 'matplotlib-inline',
'requests',
'pandas',
'gym'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,10 +2,10 @@\n import d2l\n \n requirements = [\n- 'ipython>=7.23',\n 'jupyter',\n 'numpy',\n 'matplotlib',\n+ 'matplotlib-inline',\n 'requests',\n 'pandas',\n 'gym'\n", "issue": "ModuleNotFoundError when running the official pytorch colab notebook\n\r\n\r\nI can replicate the error at multiple official pytorch colab notebooks, e.g. \r\n\r\nhttps://colab.research.google.com/github/d2l-ai/d2l-pytorch-colab/blob/master/chapter_linear-classification/image-classification-dataset.ipynb#scrollTo=ee445cce\r\n\r\n\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport d2l\n\nrequirements = [\n 'ipython>=7.23',\n 'jupyter',\n 'numpy',\n 'matplotlib',\n 'requests',\n 'pandas',\n 'gym'\n]\n\nsetup(\n name='d2l',\n version=d2l.__version__,\n python_requires='>=3.5',\n author='D2L Developers',\n author_email='[email protected]',\n url='https://d2l.ai',\n description='Dive into Deep Learning',\n license='MIT-0',\n packages=find_packages(),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]} | 864 | 84 |
gh_patches_debug_892 | rasdani/github-patches | git_diff | rasterio__rasterio-437 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check for "ndarray-like" instead of ndarray in _warp; other places
I want to use `rasterio.warp.reproject` on an `xray.Dataset` with `xray.Dataset.apply` (http://xray.readthedocs.org/en/stable/). xray has a feature to turn the dataset into a `np.ndarray`, but that means losing all my metadata.
At https://github.com/mapbox/rasterio/blob/master/rasterio/_warp.pyx#L249, _warp checks that the source is an `np.ndarray` (whereas the source in my case is an `xray.DataArray` - satisfying the same interfaces as `np.ndarray`), so I get an invalid source error.
It could be a good idea to check for something like
```
def is_ndarray_like(source):
return hasattr(source, '__array__')
```
instead of
```
isinstance(source, np.ndarray)
```
so other numpy-like arrays can be used.
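
A minimal sketch of such a duck-typed check (the helper name is illustrative):

```python
import numpy as np


def is_ndarray_like(source):
    """Accept real ndarrays as well as anything exposing __array__ (e.g. xray.DataArray)."""
    return isinstance(source, np.ndarray) or hasattr(source, "__array__")
```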
</issue>
<code>
[start of rasterio/dtypes.py]
1 # Mapping of GDAL to Numpy data types.
2 #
3 # Since 0.13 we are not importing numpy here and data types are strings.
4 # Happily strings can be used throughout Numpy and so existing code will
5 # break.
6 #
7 # Within Rasterio, to test data types, we use Numpy's dtype() factory to
8 # do something like this:
9 #
10 # if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...
11 #
12
13 bool_ = 'bool'
14 ubyte = uint8 = 'uint8'
15 uint16 = 'uint16'
16 int16 = 'int16'
17 uint32 = 'uint32'
18 int32 = 'int32'
19 float32 = 'float32'
20 float64 = 'float64'
21 complex_ = 'complex'
22 complex64 = 'complex64'
23 complex128 = 'complex128'
24
25 # Not supported:
26 # GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11
27
28 dtype_fwd = {
29 0: None, # GDT_Unknown
30 1: ubyte, # GDT_Byte
31 2: uint16, # GDT_UInt16
32 3: int16, # GDT_Int16
33 4: uint32, # GDT_UInt32
34 5: int32, # GDT_Int32
35 6: float32, # GDT_Float32
36 7: float64, # GDT_Float64
37 8: complex_, # GDT_CInt16
38 9: complex_, # GDT_CInt32
39 10: complex64, # GDT_CFloat32
40 11: complex128 } # GDT_CFloat64
41
42 dtype_rev = dict((v, k) for k, v in dtype_fwd.items())
43 dtype_rev['uint8'] = 1
44
45 typename_fwd = {
46 0: 'Unknown',
47 1: 'Byte',
48 2: 'UInt16',
49 3: 'Int16',
50 4: 'UInt32',
51 5: 'Int32',
52 6: 'Float32',
53 7: 'Float64',
54 8: 'CInt16',
55 9: 'CInt32',
56 10: 'CFloat32',
57 11: 'CFloat64' }
58
59 typename_rev = dict((v, k) for k, v in typename_fwd.items())
60
61 def _gdal_typename(dt):
62 try:
63 return typename_fwd[dtype_rev[dt]]
64 except KeyError:
65 return typename_fwd[dtype_rev[dt().dtype.name]]
66
67 def check_dtype(dt):
68 if dt not in dtype_rev:
69 try:
70 return dt().dtype.name in dtype_rev
71 except:
72 return False
73 return True
74
75
76 def get_minimum_int_dtype(values):
77 """
78 Uses range checking to determine the minimum integer data type required
79 to represent values.
80
81 :param values: numpy array
82 :return: named data type that can be later used to create a numpy dtype
83 """
84
85 min_value = values.min()
86 max_value = values.max()
87
88 if min_value >= 0:
89 if max_value <= 255:
90 return uint8
91 elif max_value <= 65535:
92 return uint16
93 elif max_value <= 4294967295:
94 return uint32
95 elif min_value >= -32768 and max_value <= 32767:
96 return int16
97 elif min_value >= -2147483648 and max_value <= 2147483647:
98 return int32
99
[end of rasterio/dtypes.py]
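
A quick sketch of how the integer-width helper above behaves (values chosen to hit different branches):

```python
import numpy as np
from rasterio import dtypes

print(dtypes.get_minimum_int_dtype(np.array([0, 255])))    # 'uint8'
print(dtypes.get_minimum_int_dtype(np.array([0, 70000])))  # 'uint32'
print(dtypes.get_minimum_int_dtype(np.array([-5, 5])))     # 'int16'
```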
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py
--- a/rasterio/dtypes.py
+++ b/rasterio/dtypes.py
@@ -96,3 +96,9 @@
return int16
elif min_value >= -2147483648 and max_value <= 2147483647:
return int32
+
+
+def is_ndarray(array):
+ import numpy
+
+ return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')
| {"golden_diff": "diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py\n--- a/rasterio/dtypes.py\n+++ b/rasterio/dtypes.py\n@@ -96,3 +96,9 @@\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n+\n+\n+def is_ndarray(array):\n+ import numpy\n+\n+ return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')\n", "issue": "Check for \"ndarray-like\" instead of ndarray in _warp; other places\nI want to use `rasterio.warp.reproject` on an `xray.Dataset` with `xray.Dataset.apply` (http://xray.readthedocs.org/en/stable/). xray has a feature to turn the dataset into a `np.ndarray`, but that means losing all my metadata.\n\nAt https://github.com/mapbox/rasterio/blob/master/rasterio/_warp.pyx#L249, _warp checks that the source is an `np.ndarray` (whereas the source in my case is an `xray.DataArray` - satisfying the same interfaces as `np.ndarray`), so I get an invalid source error.\n\nIt could be a good idea to check for something like\n\n```\ndef is_ndarray_like(source):\n return hasattr(source, '__array__')\n```\n\ninstead of\n\n```\nisinstance(source, np.ndarray)\n```\n\nso other numpy-like arrays can be used.\n\n", "before_files": [{"content": "# Mapping of GDAL to Numpy data types.\n#\n# Since 0.13 we are not importing numpy here and data types are strings.\n# Happily strings can be used throughout Numpy and so existing code will\n# break.\n#\n# Within Rasterio, to test data types, we use Numpy's dtype() factory to \n# do something like this:\n#\n# if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...\n#\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\n# Not supported:\n# GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_, # GDT_CInt16\n 9: complex_, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128 } # GDT_CFloat64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\ndtype_rev['uint8'] = 1\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64' }\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\ndef check_dtype(dt):\n if dt not in dtype_rev:\n try:\n return dt().dtype.name in dtype_rev\n except:\n return False\n return True\n\n\ndef get_minimum_int_dtype(values):\n \"\"\"\n Uses range checking to determine the minimum integer data type required\n to represent values.\n\n :param values: numpy array\n :return: named data type that can be later used to create a numpy dtype\n \"\"\"\n\n min_value = values.min()\n max_value = values.max()\n \n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n", "path": "rasterio/dtypes.py"}]} | 1,821 | 124 |
gh_patches_debug_38521 | rasdani/github-patches | git_diff | pypa__setuptools-2633 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Forward compatibility][BUG] pkg_resources.extern.VendorImporter lacks find_spec() method
### setuptools version
54.1.2 and main
### Python version
3.10.0a7
### OS
All
### Additional environment information
When running tests for setuptools_scm.
### Description
We get:
```python-traceback
<frozen importlib._bootstrap>:933: in _find_spec
???
E AttributeError: 'VendorImporter' object has no attribute 'find_spec'
During handling of the above exception, another exception occurred:
/usr/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
/usr/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
/usr/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file
elif _is_doctest(config, path, parent):
/usr/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest
if path.check(fnmatch=glob):
/usr/lib/python3.10/site-packages/py/_path/local.py:387: in check
return super(LocalPath, self).check(**kw)
/usr/lib/python3.10/site-packages/py/_path/common.py:241: in check
return self.Checkers(self)._evaluate(kw)
/usr/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate
if py.code.getrawcode(meth).co_argcount > 1:
/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr
result = importobj(modpath, attrname)
/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj
module = __import__(modpath, None, None, ['__doc__'])
/usr/lib/python3.10/site-packages/py/_code/code.py:7: in <module>
reprlib = py.builtin._tryimport('repr', 'reprlib')
/usr/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport
__import__(name)
<frozen importlib._bootstrap>:1021: in _find_and_load
???
<frozen importlib._bootstrap>:996: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _find_spec
???
<frozen importlib._bootstrap>:908: in _find_spec_legacy
???
E ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()
```
The traceback is not very helpful, but the VendorImporter is from pkg_resources.
Python added a warning: https://bugs.python.org/issue42134 but other packages treat it as an error.
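
A sketch of what a PEP 451-style hook could look like for this importer (one possible approach, not necessarily how setuptools will address it):

```python
import importlib.util

def find_spec(self, fullname, path=None, target=None):
    """Hypothetical method for VendorImporter: return a ModuleSpec for vendored names."""
    if self.find_module(fullname) is None:  # reuse the existing name matching
        return None
    return importlib.util.spec_from_loader(fullname, self)
```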
### Expected behavior
No ImportWarning.
### How to Reproduce
1. Get Python 3.10.0a7 and tox (e.g. on Fedora via `$ sudo dnf --enablerepo=updates-testing install python3.10 tox`)
2. git clone https://github.com/pypa/setuptools_scm.git and cd in
3. Run `tox -e py310-test`
### Output
```console
py310-test develop-inst-nodeps: .../setuptools_scm
py310-test installed: attrs==20.3.0,iniconfig==1.1.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.3,-e git+https://github.com/pypa/setuptools_scm.git@abb67b15985f380d8cf4451b9f2ef3dd11cb8a91#egg=setuptools_scm,toml==0.10.2
py310-test run-test-pre: PYTHONHASHSEED='1426720794'
py310-test run-test: commands[0] | pytest
============================= test session starts ==============================
platform linux -- Python 3.10.0a7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
cachedir: .tox/py310-test/.pytest_cache
setuptools version 53.0.0 from '.../setuptools_scm/.tox/py310-test/lib/python3.10/site-packages/setuptools/__init__.py'
setuptools_scm version 6.0.1 from '.../setuptools_scm/src/setuptools_scm/__init__.py'
rootdir: .../setuptools_scm, configfile: tox.ini, testpaths: testing
collected 0 items / 1 error
==================================== ERRORS ====================================
________________________ ERROR collecting test session _________________________
<frozen importlib._bootstrap>:933: in _find_spec
???
E AttributeError: 'VendorImporter' object has no attribute 'find_spec'
During handling of the above exception, another exception occurred:
.tox/py310-test/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file
elif _is_doctest(config, path, parent):
.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest
if path.check(fnmatch=glob):
.tox/py310-test/lib/python3.10/site-packages/py/_path/local.py:387: in check
return super(LocalPath, self).check(**kw)
.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:241: in check
return self.Checkers(self)._evaluate(kw)
.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate
if py.code.getrawcode(meth).co_argcount > 1:
.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr
result = importobj(modpath, attrname)
.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj
module = __import__(modpath, None, None, ['__doc__'])
.tox/py310-test/lib/python3.10/site-packages/py/_code/code.py:7: in <module>
reprlib = py.builtin._tryimport('repr', 'reprlib')
.tox/py310-test/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport
__import__(name)
<frozen importlib._bootstrap>:1021: in _find_and_load
???
<frozen importlib._bootstrap>:996: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _find_spec
???
<frozen importlib._bootstrap>:908: in _find_spec_legacy
???
E ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()
=========================== short test summary info ============================
ERROR - ImportWarning: VendorImporter.find_spec() not found; falling back to...
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.16s ===============================
ERROR: InvocationError for command .../setuptools_scm/.tox/py310-test/bin/pytest (exited with code 2)
___________________________________ summary ____________________________________
ERROR: py310-test: commands failed
```
### Code of Conduct
I agree to follow the PSF Code of Conduct
</issue>
<code>
[start of setuptools/extern/__init__.py]
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 return mod
47 except ImportError:
48 pass
49 else:
50 raise ImportError(
51 "The '{target}' package is required; "
52 "normally this is bundled with this package so if you get "
53 "this warning, consult the packager of your "
54 "distribution.".format(**locals())
55 )
56
57 def create_module(self, spec):
58 return self.load_module(spec.name)
59
60 def exec_module(self, module):
61 pass
62
63 def install(self):
64 """
65 Install this importer into sys.meta_path if not already present.
66 """
67 if self not in sys.meta_path:
68 sys.meta_path.append(self)
69
70
71 names = 'packaging', 'pyparsing', 'ordered_set',
72 VendorImporter(__name__, names, 'setuptools._vendor').install()
73
[end of setuptools/extern/__init__.py]
[start of pkg_resources/extern/__init__.py]
1 import sys
2
3
4 class VendorImporter:
5 """
6 A PEP 302 meta path importer for finding optionally-vendored
7 or otherwise naturally-installed packages from root_name.
8 """
9
10 def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
11 self.root_name = root_name
12 self.vendored_names = set(vendored_names)
13 self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
14
15 @property
16 def search_path(self):
17 """
18 Search first the vendor package then as a natural package.
19 """
20 yield self.vendor_pkg + '.'
21 yield ''
22
23 def find_module(self, fullname, path=None):
24 """
25 Return self when fullname starts with root_name and the
26 target module is one vendored through this importer.
27 """
28 root, base, target = fullname.partition(self.root_name + '.')
29 if root:
30 return
31 if not any(map(target.startswith, self.vendored_names)):
32 return
33 return self
34
35 def load_module(self, fullname):
36 """
37 Iterate over the search path to locate and load fullname.
38 """
39 root, base, target = fullname.partition(self.root_name + '.')
40 for prefix in self.search_path:
41 try:
42 extant = prefix + target
43 __import__(extant)
44 mod = sys.modules[extant]
45 sys.modules[fullname] = mod
46 return mod
47 except ImportError:
48 pass
49 else:
50 raise ImportError(
51 "The '{target}' package is required; "
52 "normally this is bundled with this package so if you get "
53 "this warning, consult the packager of your "
54 "distribution.".format(**locals())
55 )
56
57 def create_module(self, spec):
58 return self.load_module(spec.name)
59
60 def exec_module(self, module):
61 pass
62
63 def install(self):
64 """
65 Install this importer into sys.meta_path if not already present.
66 """
67 if self not in sys.meta_path:
68 sys.meta_path.append(self)
69
70
71 names = 'packaging', 'pyparsing', 'appdirs'
72 VendorImporter(__name__, names).install()
73
[end of pkg_resources/extern/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py
--- a/pkg_resources/extern/__init__.py
+++ b/pkg_resources/extern/__init__.py
@@ -1,3 +1,4 @@
+import importlib.util
import sys
@@ -20,17 +21,10 @@
yield self.vendor_pkg + '.'
yield ''
- def find_module(self, fullname, path=None):
- """
- Return self when fullname starts with root_name and the
- target module is one vendored through this importer.
- """
+ def _module_matches_namespace(self, fullname):
+ """Figure out if the target module is vendored."""
root, base, target = fullname.partition(self.root_name + '.')
- if root:
- return
- if not any(map(target.startswith, self.vendored_names)):
- return
- return self
+ return not root and any(map(target.startswith, self.vendored_names))
def load_module(self, fullname):
"""
@@ -60,6 +54,13 @@
def exec_module(self, module):
pass
+ def find_spec(self, fullname, path=None, target=None):
+ """Return a module spec for vendored names."""
+ return (
+ importlib.util.spec_from_loader(fullname, self)
+ if self._module_matches_namespace(fullname) else None
+ )
+
def install(self):
"""
Install this importer into sys.meta_path if not already present.
diff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py
--- a/setuptools/extern/__init__.py
+++ b/setuptools/extern/__init__.py
@@ -1,3 +1,4 @@
+import importlib.util
import sys
@@ -20,17 +21,10 @@
yield self.vendor_pkg + '.'
yield ''
- def find_module(self, fullname, path=None):
- """
- Return self when fullname starts with root_name and the
- target module is one vendored through this importer.
- """
+ def _module_matches_namespace(self, fullname):
+ """Figure out if the target module is vendored."""
root, base, target = fullname.partition(self.root_name + '.')
- if root:
- return
- if not any(map(target.startswith, self.vendored_names)):
- return
- return self
+ return not root and any(map(target.startswith, self.vendored_names))
def load_module(self, fullname):
"""
@@ -60,6 +54,13 @@
def exec_module(self, module):
pass
+ def find_spec(self, fullname, path=None, target=None):
+ """Return a module spec for vendored names."""
+ return (
+ importlib.util.spec_from_loader(fullname, self)
+ if self._module_matches_namespace(fullname) else None
+ )
+
def install(self):
"""
Install this importer into sys.meta_path if not already present.
| {"golden_diff": "diff --git a/pkg_resources/extern/__init__.py b/pkg_resources/extern/__init__.py\n--- a/pkg_resources/extern/__init__.py\n+++ b/pkg_resources/extern/__init__.py\n@@ -1,3 +1,4 @@\n+import importlib.util\n import sys\n \n \n@@ -20,17 +21,10 @@\n yield self.vendor_pkg + '.'\n yield ''\n \n- def find_module(self, fullname, path=None):\n- \"\"\"\n- Return self when fullname starts with root_name and the\n- target module is one vendored through this importer.\n- \"\"\"\n+ def _module_matches_namespace(self, fullname):\n+ \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n- if root:\n- return\n- if not any(map(target.startswith, self.vendored_names)):\n- return\n- return self\n+ return not root and any(map(target.startswith, self.vendored_names))\n \n def load_module(self, fullname):\n \"\"\"\n@@ -60,6 +54,13 @@\n def exec_module(self, module):\n pass\n \n+ def find_spec(self, fullname, path=None, target=None):\n+ \"\"\"Return a module spec for vendored names.\"\"\"\n+ return (\n+ importlib.util.spec_from_loader(fullname, self)\n+ if self._module_matches_namespace(fullname) else None\n+ )\n+\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\ndiff --git a/setuptools/extern/__init__.py b/setuptools/extern/__init__.py\n--- a/setuptools/extern/__init__.py\n+++ b/setuptools/extern/__init__.py\n@@ -1,3 +1,4 @@\n+import importlib.util\n import sys\n \n \n@@ -20,17 +21,10 @@\n yield self.vendor_pkg + '.'\n yield ''\n \n- def find_module(self, fullname, path=None):\n- \"\"\"\n- Return self when fullname starts with root_name and the\n- target module is one vendored through this importer.\n- \"\"\"\n+ def _module_matches_namespace(self, fullname):\n+ \"\"\"Figure out if the target module is vendored.\"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n- if root:\n- return\n- if not any(map(target.startswith, self.vendored_names)):\n- return\n- return self\n+ return not root and any(map(target.startswith, self.vendored_names))\n \n def load_module(self, fullname):\n \"\"\"\n@@ -60,6 +54,13 @@\n def exec_module(self, module):\n pass\n \n+ def find_spec(self, fullname, path=None, target=None):\n+ \"\"\"Return a module spec for vendored names.\"\"\"\n+ return (\n+ importlib.util.spec_from_loader(fullname, self)\n+ if self._module_matches_namespace(fullname) else None\n+ )\n+\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n", "issue": "[Forward compatibility][BUG] pkg_resources.extern.VendorImporter lacks find_spec() method\n### setuptools version\r\n\r\n54.1.2 and main\r\n\r\n### Python version\r\n\r\n3.10.0a7\r\n\r\n### OS\r\n\r\nAll\r\n\r\n### Additional environment information\r\n\r\nWhen running tests for setuptools_scm.\r\n\r\n### Description\r\n\r\nWe get:\r\n\r\n```python-traceback\r\n<frozen importlib._bootstrap>:933: in _find_spec\r\n ???\r\nE AttributeError: 'VendorImporter' object has no attribute 'find_spec'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n/usr/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n/usr/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n/usr/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>\r\n self._inner_hookexec = lambda hook, methods, kwargs: 
hook.multicall(\r\n/usr/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file\r\n elif _is_doctest(config, path, parent):\r\n/usr/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest\r\n if path.check(fnmatch=glob):\r\n/usr/lib/python3.10/site-packages/py/_path/local.py:387: in check\r\n return super(LocalPath, self).check(**kw)\r\n/usr/lib/python3.10/site-packages/py/_path/common.py:241: in check\r\n return self.Checkers(self)._evaluate(kw)\r\n/usr/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate\r\n if py.code.getrawcode(meth).co_argcount > 1:\r\n/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr\r\n result = importobj(modpath, attrname)\r\n/usr/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj\r\n module = __import__(modpath, None, None, ['__doc__'])\r\n/usr/lib/python3.10/site-packages/py/_code/code.py:7: in <module>\r\n reprlib = py.builtin._tryimport('repr', 'reprlib')\r\n/usr/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport\r\n __import__(name)\r\n<frozen importlib._bootstrap>:1021: in _find_and_load\r\n ???\r\n<frozen importlib._bootstrap>:996: in _find_and_load_unlocked\r\n ???\r\n<frozen importlib._bootstrap>:935: in _find_spec\r\n ???\r\n<frozen importlib._bootstrap>:908: in _find_spec_legacy\r\n ???\r\nE ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()\r\n```\r\n\r\nThe traceback is not very helpful, but the VendorImporter is from pkg_resourecs.\r\n\r\nPython added a warning: https://bugs.python.org/issue42134 but other packages treat it as error.\r\n\r\n### Expected behavior\r\n\r\nNo ImportWarning.\r\n\r\n### How to Reproduce\r\n\r\n1. Get Python 3.10.0a7 and tox (e.g. on Fedora via `$ sudo dnf --enablerepo=updates-testing install python3.10 tox`)\r\n2. git clone https://github.com/pypa/setuptools_scm.git and cd in\r\n3. 
Run `tox -e py310-test`\r\n\r\n### Output\r\n\r\n```console\r\npy310-test develop-inst-nodeps: .../setuptools_scm\r\npy310-test installed: attrs==20.3.0,iniconfig==1.1.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.3,-e git+https://github.com/pypa/setuptools_scm.git@abb67b15985f380d8cf4451b9f2ef3dd11cb8a91#egg=setuptools_scm,toml==0.10.2\r\npy310-test run-test-pre: PYTHONHASHSEED='1426720794'\r\npy310-test run-test: commands[0] | pytest\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.10.0a7, pytest-6.2.3, py-1.10.0, pluggy-0.13.1\r\ncachedir: .tox/py310-test/.pytest_cache\r\nsetuptools version 53.0.0 from '.../setuptools_scm/.tox/py310-test/lib/python3.10/site-packages/setuptools/__init__.py'\r\nsetuptools_scm version 6.0.1 from '.../setuptools_scm/src/setuptools_scm/__init__.py'\r\nrootdir: .../setuptools_scm, configfile: tox.ini, testpaths: testing\r\ncollected 0 items / 1 error\r\n\r\n==================================== ERRORS ====================================\r\n________________________ ERROR collecting test session _________________________\r\n<frozen importlib._bootstrap>:933: in _find_spec\r\n ???\r\nE AttributeError: 'VendorImporter' object has no attribute 'find_spec'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/hooks.py:286: in __call__\r\n return self._hookexec(self, self.get_hookimpls(), kwargs)\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:93: in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n.tox/py310-test/lib/python3.10/site-packages/pluggy/manager.py:84: in <lambda>\r\n self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(\r\n.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:129: in pytest_collect_file\r\n elif _is_doctest(config, path, parent):\r\n.tox/py310-test/lib/python3.10/site-packages/_pytest/doctest.py:147: in _is_doctest\r\n if path.check(fnmatch=glob):\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/local.py:387: in check\r\n return super(LocalPath, self).check(**kw)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:241: in check\r\n return self.Checkers(self)._evaluate(kw)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_path/common.py:108: in _evaluate\r\n if py.code.getrawcode(meth).co_argcount > 1:\r\n.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:152: in __makeattr\r\n result = importobj(modpath, attrname)\r\n.tox/py310-test/lib/python3.10/site-packages/py/_vendored_packages/apipkg/__init__.py:72: in importobj\r\n module = __import__(modpath, None, None, ['__doc__'])\r\n.tox/py310-test/lib/python3.10/site-packages/py/_code/code.py:7: in <module>\r\n reprlib = py.builtin._tryimport('repr', 'reprlib')\r\n.tox/py310-test/lib/python3.10/site-packages/py/_builtin.py:144: in _tryimport\r\n __import__(name)\r\n<frozen importlib._bootstrap>:1021: in _find_and_load\r\n ???\r\n<frozen importlib._bootstrap>:996: in _find_and_load_unlocked\r\n ???\r\n<frozen importlib._bootstrap>:935: in _find_spec\r\n ???\r\n<frozen importlib._bootstrap>:908: in _find_spec_legacy\r\n ???\r\nE ImportWarning: VendorImporter.find_spec() not found; falling back to find_module()\r\n=========================== short test summary info ============================\r\nERROR - ImportWarning: VendorImporter.find_spec() not found; falling back to...\r\n!!!!!!!!!!!!!!!!!!!! 
Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!\r\n=============================== 1 error in 0.16s ===============================\r\nERROR: InvocationError for command .../setuptools_scm/.tox/py310-test/bin/pytest (exited with code 2)\r\n___________________________________ summary ____________________________________\r\nERROR: py310-test: commands failed\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\nI agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'ordered_set',\nVendorImporter(__name__, names, 'setuptools._vendor').install()\n", "path": "setuptools/extern/__init__.py"}, {"content": "import sys\n\n\nclass VendorImporter:\n \"\"\"\n A PEP 302 meta path importer for finding optionally-vendored\n or otherwise naturally-installed packages from root_name.\n \"\"\"\n\n def __init__(self, root_name, vendored_names=(), vendor_pkg=None):\n self.root_name = root_name\n self.vendored_names = set(vendored_names)\n self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')\n\n @property\n def search_path(self):\n \"\"\"\n Search first the vendor package then as a natural package.\n \"\"\"\n yield self.vendor_pkg + '.'\n yield ''\n\n def find_module(self, fullname, path=None):\n \"\"\"\n Return self when fullname starts with root_name and the\n target module is one vendored through this importer.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n if root:\n return\n if not any(map(target.startswith, self.vendored_names)):\n return\n return self\n\n def load_module(self, fullname):\n \"\"\"\n Iterate over the search path to locate and load fullname.\n \"\"\"\n root, base, target = fullname.partition(self.root_name + '.')\n for prefix in 
self.search_path:\n try:\n extant = prefix + target\n __import__(extant)\n mod = sys.modules[extant]\n sys.modules[fullname] = mod\n return mod\n except ImportError:\n pass\n else:\n raise ImportError(\n \"The '{target}' package is required; \"\n \"normally this is bundled with this package so if you get \"\n \"this warning, consult the packager of your \"\n \"distribution.\".format(**locals())\n )\n\n def create_module(self, spec):\n return self.load_module(spec.name)\n\n def exec_module(self, module):\n pass\n\n def install(self):\n \"\"\"\n Install this importer into sys.meta_path if not already present.\n \"\"\"\n if self not in sys.meta_path:\n sys.meta_path.append(self)\n\n\nnames = 'packaging', 'pyparsing', 'appdirs'\nVendorImporter(__name__, names).install()\n", "path": "pkg_resources/extern/__init__.py"}]} | 3,753 | 678 |
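The ImportWarning in the tox log above is raised because the importer only implements the legacy `find_module()`/`load_module()` pair; Python 3.4+ prefers the PEP 451 `find_spec()` protocol. A minimal, self-contained sketch of a `find_spec`-based meta path finder is shown below; the module name `virtual_demo` and its single attribute are invented for illustration and are not part of the setuptools code above.

```python
import importlib.abc
import importlib.util
import sys


class VirtualLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # defer to the default module creation

    def exec_module(self, module):
        module.answer = 42  # populate the synthetic module


class VirtualFinder(importlib.abc.MetaPathFinder):
    # Defining find_spec() (instead of find_module()) avoids the
    # "falling back to find_module()" ImportWarning seen on Python 3.10.
    def find_spec(self, fullname, path=None, target=None):
        if fullname != "virtual_demo":
            return None
        return importlib.util.spec_from_loader(fullname, VirtualLoader())


sys.meta_path.append(VirtualFinder())

import virtual_demo  # resolved by VirtualFinder.find_spec()

print(virtual_demo.answer)  # prints 42
```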
gh_patches_debug_22553 | rasdani/github-patches | git_diff | mesonbuild__meson-10573 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stripping non-ASCII characters
**Describe the bug**
We have some Fortran files that contain non-ASCII characters in comments. They compile fine but when used with meson, I get errors in `depscan.py`:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 3329: invalid continuation byte
```
Instead of writing a custom script that strips out these non-ASCII characters, I wonder if there's a way to do it from within meson. Alternatively, is it possible to just bypass this check or make it skip comments?
**To Reproduce**
I'm working off of this pyOptSparse build at mdolab/pyoptsparse#300. The offending Fortran source file is unfortunately not publicly available, but any Fortran file with a non-ASCII character in a comment line should do.
**Expected behavior**
First, I think better debugging info would be helpful. It's not clear from the message which file was the issue, and I had to check with `chardetect` to figure it out.
I think the best thing would be for meson to allow these characters to exist, since the compiler has no problem with them. If that's not possible, then I was wondering if meson provides any utilities for stripping characters from files, as that seems to be fairly useful.
**system parameters**
* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native build
* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Manjaro 21.2.6
* what Python version are you using? Python 3.9
* what `meson --version` 0.63.0
* what `ninja --version` if it's a Ninja build `1.10.2.git.kitware.jobserver-1`
</issue>
<code>
[start of mesonbuild/scripts/depscan.py]
1 # Copyright 2020 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import annotations
15
16 import json
17 import os
18 import pathlib
19 import pickle
20 import re
21 import sys
22 import typing as T
23
24 from ..backend.ninjabackend import ninja_quote
25 from ..compilers.compilers import lang_suffixes
26
27 if T.TYPE_CHECKING:
28 from ..backend.ninjabackend import TargetDependencyScannerInfo
29
30 CPP_IMPORT_RE = re.compile(r'\w*import ([a-zA-Z0-9]+);')
31 CPP_EXPORT_RE = re.compile(r'\w*export module ([a-zA-Z0-9]+);')
32
33 FORTRAN_INCLUDE_PAT = r"^\s*include\s*['\"](\w+\.\w+)['\"]"
34 FORTRAN_MODULE_PAT = r"^\s*\bmodule\b\s+(\w+)\s*(?:!+.*)*$"
35 FORTRAN_SUBMOD_PAT = r"^\s*\bsubmodule\b\s*\((\w+:?\w+)\)\s*(\w+)"
36 FORTRAN_USE_PAT = r"^\s*use,?\s*(?:non_intrinsic)?\s*(?:::)?\s*(\w+)"
37
38 FORTRAN_MODULE_RE = re.compile(FORTRAN_MODULE_PAT, re.IGNORECASE)
39 FORTRAN_SUBMOD_RE = re.compile(FORTRAN_SUBMOD_PAT, re.IGNORECASE)
40 FORTRAN_USE_RE = re.compile(FORTRAN_USE_PAT, re.IGNORECASE)
41
42 class DependencyScanner:
43 def __init__(self, pickle_file: str, outfile: str, sources: T.List[str]):
44 with open(pickle_file, 'rb') as pf:
45 self.target_data: TargetDependencyScannerInfo = pickle.load(pf)
46 self.outfile = outfile
47 self.sources = sources
48 self.provided_by: T.Dict[str, str] = {}
49 self.exports: T.Dict[str, str] = {}
50 self.needs: T.Dict[str, T.List[str]] = {}
51 self.sources_with_exports: T.List[str] = []
52
53 def scan_file(self, fname: str) -> None:
54 suffix = os.path.splitext(fname)[1][1:].lower()
55 if suffix in lang_suffixes['fortran']:
56 self.scan_fortran_file(fname)
57 elif suffix in lang_suffixes['cpp']:
58 self.scan_cpp_file(fname)
59 else:
60 sys.exit(f'Can not scan files with suffix .{suffix}.')
61
62 def scan_fortran_file(self, fname: str) -> None:
63 fpath = pathlib.Path(fname)
64 modules_in_this_file = set()
65 for line in fpath.read_text(encoding='utf-8').split('\n'):
66 import_match = FORTRAN_USE_RE.match(line)
67 export_match = FORTRAN_MODULE_RE.match(line)
68 submodule_export_match = FORTRAN_SUBMOD_RE.match(line)
69 if import_match:
70 needed = import_match.group(1).lower()
71 # In Fortran you have an using declaration also for the module
72 # you define in the same file. Prevent circular dependencies.
73 if needed not in modules_in_this_file:
74 if fname in self.needs:
75 self.needs[fname].append(needed)
76 else:
77 self.needs[fname] = [needed]
78 if export_match:
79 exported_module = export_match.group(1).lower()
80 assert exported_module not in modules_in_this_file
81 modules_in_this_file.add(exported_module)
82 if exported_module in self.provided_by:
83 raise RuntimeError(f'Multiple files provide module {exported_module}.')
84 self.sources_with_exports.append(fname)
85 self.provided_by[exported_module] = fname
86 self.exports[fname] = exported_module
87 if submodule_export_match:
88 # Store submodule "Foo" "Bar" as "foo:bar".
89 # A submodule declaration can be both an import and an export declaration:
90 #
91 # submodule (a1:a2) a3
92 # - requires [email protected]
93 # - produces [email protected]
94 parent_module_name_full = submodule_export_match.group(1).lower()
95 parent_module_name = parent_module_name_full.split(':')[0]
96 submodule_name = submodule_export_match.group(2).lower()
97 concat_name = f'{parent_module_name}:{submodule_name}'
98 self.sources_with_exports.append(fname)
99 self.provided_by[concat_name] = fname
100 self.exports[fname] = concat_name
101 # Fortran requires that the immediate parent module must be built
102 # before the current one. Thus:
103 #
104 # submodule (parent) parent <- requires parent.mod (really parent.smod, but they are created at the same time)
105 # submodule (a1:a2) a3 <- requires [email protected]
106 #
107 # a3 does not depend on the a1 parent module directly, only transitively.
108 if fname in self.needs:
109 self.needs[fname].append(parent_module_name_full)
110 else:
111 self.needs[fname] = [parent_module_name_full]
112
113 def scan_cpp_file(self, fname: str) -> None:
114 fpath = pathlib.Path(fname)
115 for line in fpath.read_text(encoding='utf-8').split('\n'):
116 import_match = CPP_IMPORT_RE.match(line)
117 export_match = CPP_EXPORT_RE.match(line)
118 if import_match:
119 needed = import_match.group(1)
120 if fname in self.needs:
121 self.needs[fname].append(needed)
122 else:
123 self.needs[fname] = [needed]
124 if export_match:
125 exported_module = export_match.group(1)
126 if exported_module in self.provided_by:
127 raise RuntimeError(f'Multiple files provide module {exported_module}.')
128 self.sources_with_exports.append(fname)
129 self.provided_by[exported_module] = fname
130 self.exports[fname] = exported_module
131
132 def objname_for(self, src: str) -> str:
133 objname = self.target_data.source2object[src]
134 assert isinstance(objname, str)
135 return objname
136
137 def module_name_for(self, src: str) -> str:
138 suffix = os.path.splitext(src)[1][1:].lower()
139 if suffix in lang_suffixes['fortran']:
140 exported = self.exports[src]
141 # Module foo:bar goes to a file name [email protected]
142 # Module Foo goes to a file name foo.mod
143 namebase = exported.replace(':', '@')
144 if ':' in exported:
145 extension = 'smod'
146 else:
147 extension = 'mod'
148 return os.path.join(self.target_data.private_dir, f'{namebase}.{extension}')
149 elif suffix in lang_suffixes['cpp']:
150 return '{}.ifc'.format(self.exports[src])
151 else:
152 raise RuntimeError('Unreachable code.')
153
154 def scan(self) -> int:
155 for s in self.sources:
156 self.scan_file(s)
157 with open(self.outfile, 'w', encoding='utf-8') as ofile:
158 ofile.write('ninja_dyndep_version = 1\n')
159 for src in self.sources:
160 objfilename = self.objname_for(src)
161 mods_and_submods_needed = []
162 module_files_generated = []
163 module_files_needed = []
164 if src in self.sources_with_exports:
165 module_files_generated.append(self.module_name_for(src))
166 if src in self.needs:
167 for modname in self.needs[src]:
168 if modname not in self.provided_by:
169 # Nothing provides this module, we assume that it
170 # comes from a dependency library somewhere and is
171 # already built by the time this compilation starts.
172 pass
173 else:
174 mods_and_submods_needed.append(modname)
175
176 for modname in mods_and_submods_needed:
177 provider_src = self.provided_by[modname]
178 provider_modfile = self.module_name_for(provider_src)
179 # Prune self-dependencies
180 if provider_src != src:
181 module_files_needed.append(provider_modfile)
182
183 quoted_objfilename = ninja_quote(objfilename, True)
184 quoted_module_files_generated = [ninja_quote(x, True) for x in module_files_generated]
185 quoted_module_files_needed = [ninja_quote(x, True) for x in module_files_needed]
186 if quoted_module_files_generated:
187 mod_gen = '| ' + ' '.join(quoted_module_files_generated)
188 else:
189 mod_gen = ''
190 if quoted_module_files_needed:
191 mod_dep = '| ' + ' '.join(quoted_module_files_needed)
192 else:
193 mod_dep = ''
194 build_line = 'build {} {}: dyndep {}'.format(quoted_objfilename,
195 mod_gen,
196 mod_dep)
197 ofile.write(build_line + '\n')
198 return 0
199
200 def run(args: T.List[str]) -> int:
201 assert len(args) == 3, 'got wrong number of arguments!'
202 pickle_file, outfile, jsonfile = args
203 with open(jsonfile, encoding='utf-8') as f:
204 sources = json.load(f)
205 scanner = DependencyScanner(pickle_file, outfile, sources)
206 return scanner.scan()
207
[end of mesonbuild/scripts/depscan.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesonbuild/scripts/depscan.py b/mesonbuild/scripts/depscan.py
--- a/mesonbuild/scripts/depscan.py
+++ b/mesonbuild/scripts/depscan.py
@@ -62,7 +62,7 @@
def scan_fortran_file(self, fname: str) -> None:
fpath = pathlib.Path(fname)
modules_in_this_file = set()
- for line in fpath.read_text(encoding='utf-8').split('\n'):
+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\n'):
import_match = FORTRAN_USE_RE.match(line)
export_match = FORTRAN_MODULE_RE.match(line)
submodule_export_match = FORTRAN_SUBMOD_RE.match(line)
@@ -112,7 +112,7 @@
def scan_cpp_file(self, fname: str) -> None:
fpath = pathlib.Path(fname)
- for line in fpath.read_text(encoding='utf-8').split('\n'):
+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\n'):
import_match = CPP_IMPORT_RE.match(line)
export_match = CPP_EXPORT_RE.match(line)
if import_match:
| {"golden_diff": "diff --git a/mesonbuild/scripts/depscan.py b/mesonbuild/scripts/depscan.py\n--- a/mesonbuild/scripts/depscan.py\n+++ b/mesonbuild/scripts/depscan.py\n@@ -62,7 +62,7 @@\n def scan_fortran_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n modules_in_this_file = set()\n- for line in fpath.read_text(encoding='utf-8').split('\\n'):\n+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = FORTRAN_USE_RE.match(line)\n export_match = FORTRAN_MODULE_RE.match(line)\n submodule_export_match = FORTRAN_SUBMOD_RE.match(line)\n@@ -112,7 +112,7 @@\n \n def scan_cpp_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n- for line in fpath.read_text(encoding='utf-8').split('\\n'):\n+ for line in fpath.read_text(encoding='utf-8', errors='ignore').split('\\n'):\n import_match = CPP_IMPORT_RE.match(line)\n export_match = CPP_EXPORT_RE.match(line)\n if import_match:\n", "issue": "Stripping non-ASCII characters\n**Describe the bug**\r\nWe have some Fortran files that contain non-ASCII characters in comments. They compile fine but when used with meson, I get errors in `depscan.py`:\r\n```\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xd6 in position 3329: invalid continuation byte\r\n```\r\n\r\nInstead of writing a custom script that strips out these non-ASCII characters, I wonder if there's a way to do it from within meson. Alternatively, is it possible to just bypass this check or make it skip comments?\r\n\r\n\r\n\r\n**To Reproduce**\r\nI'm working off of this pyOptSparse build at mdolab/pyoptsparse#300. The offending Fortran source file is unfortunately not publicly available, but any Fortran file with a non-ASCII character in a comment line should do.\r\n\r\n**Expected behavior**\r\nFirst, I think better debugging info would be helpful. It's not clear from the message which file was the issue, and I had to check with `chardetect` to figure it out.\r\n\r\nI think the best thing would be for meson to allow these characters to exist, since the compiler has no problems with it. If that's not possible, then I was wondering if meson provides any utilities in stripping characters from files, as that seems to be fairly useful.\r\n\r\n**system parameters**\r\n* Is this a [cross build](https://mesonbuild.com/Cross-compilation.html) or just a plain native build (for the same computer)? Native build\r\n* what operating system (e.g. MacOS Catalina, Windows 10, CentOS 8.0, Ubuntu 18.04, etc.) Manjaro 21.2.6\r\n* what Python version are you using? 
Python 3.9\r\n* what `meson --version` 0.63.0\r\n* what `ninja --version` if it's a Ninja build `1.10.2.git.kitware.jobserver-1`\r\n\n", "before_files": [{"content": "# Copyright 2020 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import annotations\n\nimport json\nimport os\nimport pathlib\nimport pickle\nimport re\nimport sys\nimport typing as T\n\nfrom ..backend.ninjabackend import ninja_quote\nfrom ..compilers.compilers import lang_suffixes\n\nif T.TYPE_CHECKING:\n from ..backend.ninjabackend import TargetDependencyScannerInfo\n\nCPP_IMPORT_RE = re.compile(r'\\w*import ([a-zA-Z0-9]+);')\nCPP_EXPORT_RE = re.compile(r'\\w*export module ([a-zA-Z0-9]+);')\n\nFORTRAN_INCLUDE_PAT = r\"^\\s*include\\s*['\\\"](\\w+\\.\\w+)['\\\"]\"\nFORTRAN_MODULE_PAT = r\"^\\s*\\bmodule\\b\\s+(\\w+)\\s*(?:!+.*)*$\"\nFORTRAN_SUBMOD_PAT = r\"^\\s*\\bsubmodule\\b\\s*\\((\\w+:?\\w+)\\)\\s*(\\w+)\"\nFORTRAN_USE_PAT = r\"^\\s*use,?\\s*(?:non_intrinsic)?\\s*(?:::)?\\s*(\\w+)\"\n\nFORTRAN_MODULE_RE = re.compile(FORTRAN_MODULE_PAT, re.IGNORECASE)\nFORTRAN_SUBMOD_RE = re.compile(FORTRAN_SUBMOD_PAT, re.IGNORECASE)\nFORTRAN_USE_RE = re.compile(FORTRAN_USE_PAT, re.IGNORECASE)\n\nclass DependencyScanner:\n def __init__(self, pickle_file: str, outfile: str, sources: T.List[str]):\n with open(pickle_file, 'rb') as pf:\n self.target_data: TargetDependencyScannerInfo = pickle.load(pf)\n self.outfile = outfile\n self.sources = sources\n self.provided_by: T.Dict[str, str] = {}\n self.exports: T.Dict[str, str] = {}\n self.needs: T.Dict[str, T.List[str]] = {}\n self.sources_with_exports: T.List[str] = []\n\n def scan_file(self, fname: str) -> None:\n suffix = os.path.splitext(fname)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n self.scan_fortran_file(fname)\n elif suffix in lang_suffixes['cpp']:\n self.scan_cpp_file(fname)\n else:\n sys.exit(f'Can not scan files with suffix .{suffix}.')\n\n def scan_fortran_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n modules_in_this_file = set()\n for line in fpath.read_text(encoding='utf-8').split('\\n'):\n import_match = FORTRAN_USE_RE.match(line)\n export_match = FORTRAN_MODULE_RE.match(line)\n submodule_export_match = FORTRAN_SUBMOD_RE.match(line)\n if import_match:\n needed = import_match.group(1).lower()\n # In Fortran you have an using declaration also for the module\n # you define in the same file. 
Prevent circular dependencies.\n if needed not in modules_in_this_file:\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1).lower()\n assert exported_module not in modules_in_this_file\n modules_in_this_file.add(exported_module)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n if submodule_export_match:\n # Store submodule \"Foo\" \"Bar\" as \"foo:bar\".\n # A submodule declaration can be both an import and an export declaration:\n #\n # submodule (a1:a2) a3\n # - requires [email protected]\n # - produces [email protected]\n parent_module_name_full = submodule_export_match.group(1).lower()\n parent_module_name = parent_module_name_full.split(':')[0]\n submodule_name = submodule_export_match.group(2).lower()\n concat_name = f'{parent_module_name}:{submodule_name}'\n self.sources_with_exports.append(fname)\n self.provided_by[concat_name] = fname\n self.exports[fname] = concat_name\n # Fortran requires that the immediate parent module must be built\n # before the current one. Thus:\n #\n # submodule (parent) parent <- requires parent.mod (really parent.smod, but they are created at the same time)\n # submodule (a1:a2) a3 <- requires [email protected]\n #\n # a3 does not depend on the a1 parent module directly, only transitively.\n if fname in self.needs:\n self.needs[fname].append(parent_module_name_full)\n else:\n self.needs[fname] = [parent_module_name_full]\n\n def scan_cpp_file(self, fname: str) -> None:\n fpath = pathlib.Path(fname)\n for line in fpath.read_text(encoding='utf-8').split('\\n'):\n import_match = CPP_IMPORT_RE.match(line)\n export_match = CPP_EXPORT_RE.match(line)\n if import_match:\n needed = import_match.group(1)\n if fname in self.needs:\n self.needs[fname].append(needed)\n else:\n self.needs[fname] = [needed]\n if export_match:\n exported_module = export_match.group(1)\n if exported_module in self.provided_by:\n raise RuntimeError(f'Multiple files provide module {exported_module}.')\n self.sources_with_exports.append(fname)\n self.provided_by[exported_module] = fname\n self.exports[fname] = exported_module\n\n def objname_for(self, src: str) -> str:\n objname = self.target_data.source2object[src]\n assert isinstance(objname, str)\n return objname\n\n def module_name_for(self, src: str) -> str:\n suffix = os.path.splitext(src)[1][1:].lower()\n if suffix in lang_suffixes['fortran']:\n exported = self.exports[src]\n # Module foo:bar goes to a file name [email protected]\n # Module Foo goes to a file name foo.mod\n namebase = exported.replace(':', '@')\n if ':' in exported:\n extension = 'smod'\n else:\n extension = 'mod'\n return os.path.join(self.target_data.private_dir, f'{namebase}.{extension}')\n elif suffix in lang_suffixes['cpp']:\n return '{}.ifc'.format(self.exports[src])\n else:\n raise RuntimeError('Unreachable code.')\n\n def scan(self) -> int:\n for s in self.sources:\n self.scan_file(s)\n with open(self.outfile, 'w', encoding='utf-8') as ofile:\n ofile.write('ninja_dyndep_version = 1\\n')\n for src in self.sources:\n objfilename = self.objname_for(src)\n mods_and_submods_needed = []\n module_files_generated = []\n module_files_needed = []\n if src in self.sources_with_exports:\n module_files_generated.append(self.module_name_for(src))\n if src in self.needs:\n 
for modname in self.needs[src]:\n if modname not in self.provided_by:\n # Nothing provides this module, we assume that it\n # comes from a dependency library somewhere and is\n # already built by the time this compilation starts.\n pass\n else:\n mods_and_submods_needed.append(modname)\n\n for modname in mods_and_submods_needed:\n provider_src = self.provided_by[modname]\n provider_modfile = self.module_name_for(provider_src)\n # Prune self-dependencies\n if provider_src != src:\n module_files_needed.append(provider_modfile)\n\n quoted_objfilename = ninja_quote(objfilename, True)\n quoted_module_files_generated = [ninja_quote(x, True) for x in module_files_generated]\n quoted_module_files_needed = [ninja_quote(x, True) for x in module_files_needed]\n if quoted_module_files_generated:\n mod_gen = '| ' + ' '.join(quoted_module_files_generated)\n else:\n mod_gen = ''\n if quoted_module_files_needed:\n mod_dep = '| ' + ' '.join(quoted_module_files_needed)\n else:\n mod_dep = ''\n build_line = 'build {} {}: dyndep {}'.format(quoted_objfilename,\n mod_gen,\n mod_dep)\n ofile.write(build_line + '\\n')\n return 0\n\ndef run(args: T.List[str]) -> int:\n assert len(args) == 3, 'got wrong number of arguments!'\n pickle_file, outfile, jsonfile = args\n with open(jsonfile, encoding='utf-8') as f:\n sources = json.load(f)\n scanner = DependencyScanner(pickle_file, outfile, sources)\n return scanner.scan()\n", "path": "mesonbuild/scripts/depscan.py"}]} | 3,546 | 269 |
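A minimal sketch of the decode behaviour the patch above switches to. The file name and its contents are invented for illustration; the 0xd6 byte stands in for the Latin-1 comment character from the bug report.

```python
from pathlib import Path

sample = Path("example.f90")
sample.write_bytes(b"! Gr\xd6sse des Arrays\nmodule demo\nend module demo\n")

# Strict UTF-8 decoding fails on the lone 0xd6 byte, mirroring the
# "invalid continuation byte" error quoted in the issue.
try:
    sample.read_text(encoding="utf-8")
except UnicodeDecodeError as exc:
    print("strict decode failed:", exc)

# errors="ignore" (as used in the patched depscan.py) drops the undecodable
# byte and keeps scanning, which is enough for comment-only characters.
print(sample.read_text(encoding="utf-8", errors="ignore"))
```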
gh_patches_debug_63370 | rasdani/github-patches | git_diff | mkdocs__mkdocs-130 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update requirements
While working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):
```
install_requires = [
'Jinja2==2.7.1',
'Markdown==2.3.1',
'PyYAML==3.10',
'watchdog==0.7.0',
'ghp-import==0.4.1'
]
```
Since these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is—you'll get a setuptools error when trying to run mkdocs if any other package upgrades Jinja to a bugfix release.
How would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'In progress.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2==2.7.1',
20 'Markdown==2.3.1',
21 'PyYAML==3.10',
22 'watchdog==0.7.0',
23 'ghp-import==0.4.1'
24 ]
25
26 long_description = """Work in progress."""
27
28
29 def get_version(package):
30 """
31 Return package version as listed in `__version__` in `init.py`.
32 """
33 init_py = open(os.path.join(package, '__init__.py')).read()
34 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
35
36
37 def get_packages(package):
38 """
39 Return root package and all sub-packages.
40 """
41 return [dirpath
42 for dirpath, dirnames, filenames in os.walk(package)
43 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
44
45
46 def get_package_data(package):
47 """
48 Return all files under the root package, that are not in a
49 package themselves.
50 """
51 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
52 for dirpath, dirnames, filenames in os.walk(package)
53 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
54
55 filepaths = []
56 for base, filenames in walk:
57 filepaths.extend([os.path.join(base, filename)
58 for filename in filenames])
59 return {package: filepaths}
60
61
62 if sys.argv[-1] == 'publish':
63 os.system("python setup.py sdist upload")
64 args = {'version': get_version(package)}
65 print("You probably want to also tag the version now:")
66 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
67 print(" git push --tags")
68 sys.exit()
69
70
71 setup(
72 name=name,
73 version=get_version(package),
74 url=url,
75 license=license,
76 description=description,
77 long_description=long_description,
78 author=author,
79 author_email=author_email,
80 packages=get_packages(package),
81 package_data=get_package_data(package),
82 install_requires=install_requires,
83 entry_points={
84 'console_scripts': [
85 'mkdocs = mkdocs.main:run_main',
86 ],
87 },
88 classifiers=[
89 'Development Status :: 5 - Production/Stable',
90 'Environment :: Console',
91 'Environment :: Web Environment',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: BSD License',
94 'Operating System :: OS Independent',
95 'Programming Language :: Python',
96 'Programming Language :: Python :: 2',
97 'Programming Language :: Python :: 2.6',
98 'Programming Language :: Python :: 2.7',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Topic :: Documentation',
103 'Topic :: Text Processing',
104 ]
105 )
106
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,11 +16,11 @@
author_email = '[email protected]'
license = 'BSD'
install_requires = [
- 'Jinja2==2.7.1',
- 'Markdown==2.3.1',
- 'PyYAML==3.10',
- 'watchdog==0.7.0',
- 'ghp-import==0.4.1'
+ 'Jinja2>=2.7.1',
+ 'Markdown>=2.3.1,<2.5',
+ 'PyYAML>=3.10',
+ 'watchdog>=0.7.0',
+ 'ghp-import>=0.4.1'
]
long_description = """Work in progress."""
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,11 +16,11 @@\n author_email = '[email protected]'\n license = 'BSD'\n install_requires = [\n- 'Jinja2==2.7.1',\n- 'Markdown==2.3.1',\n- 'PyYAML==3.10',\n- 'watchdog==0.7.0',\n- 'ghp-import==0.4.1'\n+ 'Jinja2>=2.7.1',\n+ 'Markdown>=2.3.1,<2.5',\n+ 'PyYAML>=3.10',\n+ 'watchdog>=0.7.0',\n+ 'ghp-import>=0.4.1'\n ]\n \n long_description = \"\"\"Work in progress.\"\"\"\n", "issue": "Update requirements\nWhile working with Markdown extensions (c.f. #74), I noticed that mkdocs' setup.py has its dependencies [pinned to specific patch versions](https://github.com/tomchristie/mkdocs/blob/master/setup.py#L18):\n\n```\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n```\n\nSince these dependencies are slightly out of date (e.g., [Jinja2 is at 2.7.3](https://pypi.python.org/pypi/Jinja2) and [Markdown is at 2.4.1](https://pypi.python.org/pypi/Markdown)), it's hard to use mkdocs on a system with other software. Perhaps it's a shame that Python doesn't have npm-like dependency management, but that's the way it is\u2014you'll get a setuptools when trying to run mkdocs error if any other package upgrades Jinja to a bugfix release.\n\nHow would the developers feel about loosening these version requirements? An idiomatic approach is to [just use `>=`](https://github.com/mitsuhiko/flask/blob/master/setup.py#L99).\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2==2.7.1',\n 'Markdown==2.3.1',\n 'PyYAML==3.10',\n 'watchdog==0.7.0',\n 'ghp-import==0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n 
package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]} | 1,755 | 190 |
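A small sketch of how the loosened specifiers in the patch above behave, assuming the third-party `packaging` library is installed; the candidate version strings are arbitrary examples.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# ">=2.3.1,<2.5" accepts any bugfix or minor release in that window,
# unlike the exact pin "==2.3.1".
spec = SpecifierSet(">=2.3.1,<2.5")

for candidate in ("2.3.1", "2.4.1", "2.5"):
    status = "accepted" if Version(candidate) in spec else "rejected"
    print(candidate, status)
```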
gh_patches_debug_49 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1712 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CI/CD: Verify .pre-commit-config.yaml uses the latest hook versions
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """cookiecutter distutils configuration."""
3 from setuptools import setup
4
5 version = "2.1.2.dev0"
6
7 with open('README.md', encoding='utf-8') as readme_file:
8 readme = readme_file.read()
9
10 requirements = [
11 'binaryornot>=0.4.4',
12 'Jinja2>=2.7,<4.0.0',
13 'click>=7.0,<9.0.0',
14 'pyyaml>=5.3.1',
15 'jinja2-time>=0.2.0',
16 'python-slugify>=4.0.0',
17 'requests>=2.23.0',
18 ]
19
20 setup(
21 name='cookiecutter',
22 version=version,
23 description=(
24 'A command-line utility that creates projects from project '
25 'templates, e.g. creating a Python package project from a '
26 'Python package project template.'
27 ),
28 long_description=readme,
29 long_description_content_type='text/markdown',
30 author='Audrey Feldroy',
31 author_email='[email protected]',
32 url='https://github.com/cookiecutter/cookiecutter',
33 packages=['cookiecutter'],
34 package_dir={'cookiecutter': 'cookiecutter'},
35 entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},
36 include_package_data=True,
37 python_requires='>=3.7',
38 install_requires=requirements,
39 license='BSD',
40 zip_safe=False,
41 classifiers=[
42 "Development Status :: 5 - Production/Stable",
43 "Environment :: Console",
44 "Intended Audience :: Developers",
45 "Natural Language :: English",
46 "License :: OSI Approved :: BSD License",
47 "Programming Language :: Python :: 3 :: Only",
48 "Programming Language :: Python :: 3",
49 "Programming Language :: Python :: 3.7",
50 "Programming Language :: Python :: 3.8",
51 "Programming Language :: Python :: 3.9",
52 "Programming Language :: Python :: 3.10",
53 "Programming Language :: Python :: Implementation :: CPython",
54 "Programming Language :: Python :: Implementation :: PyPy",
55 "Programming Language :: Python",
56 "Topic :: Software Development",
57 ],
58 keywords=[
59 "cookiecutter",
60 "Python",
61 "projects",
62 "project templates",
63 "Jinja2",
64 "skeleton",
65 "scaffolding",
66 "project directory",
67 "package",
68 "packaging",
69 ],
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
"""cookiecutter distutils configuration."""
from setuptools import setup
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,3 @@\n-#!/usr/bin/env python\n \"\"\"cookiecutter distutils configuration.\"\"\"\n from setuptools import setup\n", "issue": "CI/CD: Verify .pre-commit-config.yaml use latest hooks versions\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.1.2.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}]} | 1,216 | 50 |
gh_patches_debug_35130 | rasdani/github-patches | git_diff | mlflow__mlflow-6206 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Timeout value too small: when downloading large model files, timeout is reached
https://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/mlflow/store/artifact/http_artifact_repo.py#L63
</issue>
<code>
[start of mlflow/store/artifact/http_artifact_repo.py]
1 import os
2 import posixpath
3
4 from mlflow.entities import FileInfo
5 from mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path
6 from mlflow.tracking._tracking_service.utils import _get_default_host_creds
7 from mlflow.utils.file_utils import relative_path_to_artifact_path
8 from mlflow.utils.rest_utils import augmented_raise_for_status, http_request
9
10
11 class HttpArtifactRepository(ArtifactRepository):
12 """Stores artifacts in a remote artifact storage using HTTP requests"""
13
14 @property
15 def _host_creds(self):
16 return _get_default_host_creds(self.artifact_uri)
17
18 def log_artifact(self, local_file, artifact_path=None):
19 verify_artifact_path(artifact_path)
20
21 file_name = os.path.basename(local_file)
22 paths = (artifact_path, file_name) if artifact_path else (file_name,)
23 endpoint = posixpath.join("/", *paths)
24 with open(local_file, "rb") as f:
25 resp = http_request(self._host_creds, endpoint, "PUT", data=f, timeout=600)
26 augmented_raise_for_status(resp)
27
28 def log_artifacts(self, local_dir, artifact_path=None):
29 local_dir = os.path.abspath(local_dir)
30 for root, _, filenames in os.walk(local_dir):
31 if root == local_dir:
32 artifact_dir = artifact_path
33 else:
34 rel_path = os.path.relpath(root, local_dir)
35 rel_path = relative_path_to_artifact_path(rel_path)
36 artifact_dir = (
37 posixpath.join(artifact_path, rel_path) if artifact_path else rel_path
38 )
39 for f in filenames:
40 self.log_artifact(os.path.join(root, f), artifact_dir)
41
42 def list_artifacts(self, path=None):
43 endpoint = "/mlflow-artifacts/artifacts"
44 url, tail = self.artifact_uri.split(endpoint, maxsplit=1)
45 root = tail.lstrip("/")
46 params = {"path": posixpath.join(root, path) if path else root}
47 host_creds = _get_default_host_creds(url)
48 resp = http_request(host_creds, endpoint, "GET", params=params, timeout=10)
49 augmented_raise_for_status(resp)
50 file_infos = []
51 for f in resp.json().get("files", []):
52 file_info = FileInfo(
53 posixpath.join(path, f["path"]) if path else f["path"],
54 f["is_dir"],
55 int(f["file_size"]) if ("file_size" in f) else None,
56 )
57 file_infos.append(file_info)
58
59 return sorted(file_infos, key=lambda f: f.path)
60
61 def _download_file(self, remote_file_path, local_path):
62 endpoint = posixpath.join("/", remote_file_path)
63 resp = http_request(self._host_creds, endpoint, "GET", stream=True, timeout=10)
64 augmented_raise_for_status(resp)
65 with open(local_path, "wb") as f:
66 chunk_size = 1024 * 1024 # 1 MB
67 for chunk in resp.iter_content(chunk_size=chunk_size):
68 f.write(chunk)
69
70 def delete_artifacts(self, artifact_path=None):
71 endpoint = posixpath.join("/", artifact_path) if artifact_path else "/"
72 resp = http_request(self._host_creds, endpoint, "DELETE", stream=True, timeout=10)
73 augmented_raise_for_status(resp)
74
[end of mlflow/store/artifact/http_artifact_repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/store/artifact/http_artifact_repo.py b/mlflow/store/artifact/http_artifact_repo.py
--- a/mlflow/store/artifact/http_artifact_repo.py
+++ b/mlflow/store/artifact/http_artifact_repo.py
@@ -22,7 +22,7 @@
paths = (artifact_path, file_name) if artifact_path else (file_name,)
endpoint = posixpath.join("/", *paths)
with open(local_file, "rb") as f:
- resp = http_request(self._host_creds, endpoint, "PUT", data=f, timeout=600)
+ resp = http_request(self._host_creds, endpoint, "PUT", data=f)
augmented_raise_for_status(resp)
def log_artifacts(self, local_dir, artifact_path=None):
@@ -45,7 +45,7 @@
root = tail.lstrip("/")
params = {"path": posixpath.join(root, path) if path else root}
host_creds = _get_default_host_creds(url)
- resp = http_request(host_creds, endpoint, "GET", params=params, timeout=10)
+ resp = http_request(host_creds, endpoint, "GET", params=params)
augmented_raise_for_status(resp)
file_infos = []
for f in resp.json().get("files", []):
@@ -60,7 +60,7 @@
def _download_file(self, remote_file_path, local_path):
endpoint = posixpath.join("/", remote_file_path)
- resp = http_request(self._host_creds, endpoint, "GET", stream=True, timeout=10)
+ resp = http_request(self._host_creds, endpoint, "GET", stream=True)
augmented_raise_for_status(resp)
with open(local_path, "wb") as f:
chunk_size = 1024 * 1024 # 1 MB
@@ -69,5 +69,5 @@
def delete_artifacts(self, artifact_path=None):
endpoint = posixpath.join("/", artifact_path) if artifact_path else "/"
- resp = http_request(self._host_creds, endpoint, "DELETE", stream=True, timeout=10)
+ resp = http_request(self._host_creds, endpoint, "DELETE", stream=True)
augmented_raise_for_status(resp)
| {"golden_diff": "diff --git a/mlflow/store/artifact/http_artifact_repo.py b/mlflow/store/artifact/http_artifact_repo.py\n--- a/mlflow/store/artifact/http_artifact_repo.py\n+++ b/mlflow/store/artifact/http_artifact_repo.py\n@@ -22,7 +22,7 @@\n paths = (artifact_path, file_name) if artifact_path else (file_name,)\n endpoint = posixpath.join(\"/\", *paths)\n with open(local_file, \"rb\") as f:\n- resp = http_request(self._host_creds, endpoint, \"PUT\", data=f, timeout=600)\n+ resp = http_request(self._host_creds, endpoint, \"PUT\", data=f)\n augmented_raise_for_status(resp)\n \n def log_artifacts(self, local_dir, artifact_path=None):\n@@ -45,7 +45,7 @@\n root = tail.lstrip(\"/\")\n params = {\"path\": posixpath.join(root, path) if path else root}\n host_creds = _get_default_host_creds(url)\n- resp = http_request(host_creds, endpoint, \"GET\", params=params, timeout=10)\n+ resp = http_request(host_creds, endpoint, \"GET\", params=params)\n augmented_raise_for_status(resp)\n file_infos = []\n for f in resp.json().get(\"files\", []):\n@@ -60,7 +60,7 @@\n \n def _download_file(self, remote_file_path, local_path):\n endpoint = posixpath.join(\"/\", remote_file_path)\n- resp = http_request(self._host_creds, endpoint, \"GET\", stream=True, timeout=10)\n+ resp = http_request(self._host_creds, endpoint, \"GET\", stream=True)\n augmented_raise_for_status(resp)\n with open(local_path, \"wb\") as f:\n chunk_size = 1024 * 1024 # 1 MB\n@@ -69,5 +69,5 @@\n \n def delete_artifacts(self, artifact_path=None):\n endpoint = posixpath.join(\"/\", artifact_path) if artifact_path else \"/\"\n- resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True, timeout=10)\n+ resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True)\n augmented_raise_for_status(resp)\n", "issue": "Timeout value too small: when downloading large model files, timeout is reached\nhttps://github.com/mlflow/mlflow/blob/d40780be361f4bd2741c2e8fcbd428c1d693edcf/mlflow/store/artifact/http_artifact_repo.py#L63\n", "before_files": [{"content": "import os\nimport posixpath\n\nfrom mlflow.entities import FileInfo\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.tracking._tracking_service.utils import _get_default_host_creds\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\nfrom mlflow.utils.rest_utils import augmented_raise_for_status, http_request\n\n\nclass HttpArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts in a remote artifact storage using HTTP requests\"\"\"\n\n @property\n def _host_creds(self):\n return _get_default_host_creds(self.artifact_uri)\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n\n file_name = os.path.basename(local_file)\n paths = (artifact_path, file_name) if artifact_path else (file_name,)\n endpoint = posixpath.join(\"/\", *paths)\n with open(local_file, \"rb\") as f:\n resp = http_request(self._host_creds, endpoint, \"PUT\", data=f, timeout=600)\n augmented_raise_for_status(resp)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n local_dir = os.path.abspath(local_dir)\n for root, _, filenames in os.walk(local_dir):\n if root == local_dir:\n artifact_dir = artifact_path\n else:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n artifact_dir = (\n posixpath.join(artifact_path, rel_path) if artifact_path else rel_path\n )\n for f in filenames:\n self.log_artifact(os.path.join(root, f), 
artifact_dir)\n\n def list_artifacts(self, path=None):\n endpoint = \"/mlflow-artifacts/artifacts\"\n url, tail = self.artifact_uri.split(endpoint, maxsplit=1)\n root = tail.lstrip(\"/\")\n params = {\"path\": posixpath.join(root, path) if path else root}\n host_creds = _get_default_host_creds(url)\n resp = http_request(host_creds, endpoint, \"GET\", params=params, timeout=10)\n augmented_raise_for_status(resp)\n file_infos = []\n for f in resp.json().get(\"files\", []):\n file_info = FileInfo(\n posixpath.join(path, f[\"path\"]) if path else f[\"path\"],\n f[\"is_dir\"],\n int(f[\"file_size\"]) if (\"file_size\" in f) else None,\n )\n file_infos.append(file_info)\n\n return sorted(file_infos, key=lambda f: f.path)\n\n def _download_file(self, remote_file_path, local_path):\n endpoint = posixpath.join(\"/\", remote_file_path)\n resp = http_request(self._host_creds, endpoint, \"GET\", stream=True, timeout=10)\n augmented_raise_for_status(resp)\n with open(local_path, \"wb\") as f:\n chunk_size = 1024 * 1024 # 1 MB\n for chunk in resp.iter_content(chunk_size=chunk_size):\n f.write(chunk)\n\n def delete_artifacts(self, artifact_path=None):\n endpoint = posixpath.join(\"/\", artifact_path) if artifact_path else \"/\"\n resp = http_request(self._host_creds, endpoint, \"DELETE\", stream=True, timeout=10)\n augmented_raise_for_status(resp)\n", "path": "mlflow/store/artifact/http_artifact_repo.py"}]} | 1,478 | 502 |
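A hedged sketch of a streamed download in the spirit of the patch above; the URL and output file name are placeholders, not MLflow API calls. With `stream=True`, the `timeout` bounds the connection and each individual socket read rather than the whole transfer, so a large file does not hit it as long as data keeps arriving.

```python
import requests

url = "https://example.com/"  # placeholder for a large artifact URL
out_path = "downloaded.bin"

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):  # 1 MB chunks
            f.write(chunk)

print("wrote", out_path)
```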
gh_patches_debug_15814 | rasdani/github-patches | git_diff | svthalia__concrexit-1977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in pizza admin related to title_en
Sentry Issue: [CONCREXIT-9W](https://sentry.io/organizations/thalia/issues/2692657224/?referrer=github_integration)
```
FieldError: Related Field got invalid lookup: title_en
(16 additional frame(s) were not displayed)
...
File "django/db/models/sql/query.py", line 1393, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "django/db/models/sql/query.py", line 1412, in _add_q
child_clause, needed_inner = self.build_filter(
File "django/db/models/sql/query.py", line 1265, in build_filter
return self._add_q(
File "django/db/models/sql/query.py", line 1412, in _add_q
child_clause, needed_inner = self.build_filter(
File "django/db/models/sql/query.py", line 1339, in build_filter
raise FieldError('Related Field got invalid lookup: {}'.format(lookups[0]))
```
</issue>
<code>
[start of website/pizzas/admin.py]
1 """Registers admin interfaces for the pizzas module."""
2 from django.conf import settings
3 from django.contrib import admin
4 from django.core.exceptions import PermissionDenied
5 from django.forms import Field
6 from django.urls import reverse, path
7 from django.utils.html import format_html
8 from django.utils.translation import gettext_lazy as _
9
10 from events import services
11 from events.services import is_organiser
12 from payments.widgets import PaymentWidget
13 from pizzas import admin_views
14 from utils.admin import DoNextModelAdmin
15 from .models import FoodOrder, FoodEvent, Product
16
17
18 @admin.register(Product)
19 class ProductAdmin(admin.ModelAdmin):
20 """Manage the products."""
21
22 list_display = ("name", "price", "available")
23 list_filter = ("available", "restricted")
24 search_fields = ("name",)
25
26
27 @admin.register(FoodEvent)
28 class FoodEventAdmin(admin.ModelAdmin):
29 """Manage the pizza events."""
30
31 list_display = ("title", "start", "end", "notification_enabled", "orders")
32 date_hierarchy = "start"
33 exclude = ("end_reminder",)
34 search_fields = [f"event__title_{l[0]}" for l in settings.LANGUAGES]
35 autocomplete_fields = ("event",)
36
37 def notification_enabled(self, obj):
38 return obj.send_notification
39
40 notification_enabled.short_description = _("reminder")
41 notification_enabled.admin_order_field = "send_notification"
42 notification_enabled.boolean = True
43
44 def has_change_permission(self, request, obj=None):
45 """Only allow access to the change form if the user is an organiser."""
46 if obj is not None and not services.is_organiser(request.member, obj.event):
47 return False
48 return super().has_change_permission(request, obj)
49
50 def has_delete_permission(self, request, obj=None):
51 """Only allow access to delete if the user is an organiser."""
52 if obj is not None and not services.is_organiser(request.member, obj.event):
53 return False
54 return super().has_delete_permission(request, obj)
55
56 def orders(self, obj):
57 url = reverse("admin:pizzas_foodevent_details", kwargs={"pk": obj.pk})
58 return format_html('<a href="{url}">{text}</a>', url=url, text=_("Orders"))
59
60 def get_urls(self):
61 urls = super().get_urls()
62 custom_urls = [
63 path(
64 "<int:pk>/details/",
65 self.admin_site.admin_view(
66 admin_views.PizzaOrderDetails.as_view(admin=self)
67 ),
68 name="pizzas_foodevent_details",
69 ),
70 path(
71 "<int:pk>/overview/",
72 self.admin_site.admin_view(
73 admin_views.PizzaOrderSummary.as_view(admin=self)
74 ),
75 name="pizzas_foodevent_overview",
76 ),
77 ]
78 return custom_urls + urls
79
80
81 @admin.register(FoodOrder)
82 class FoodOrderAdmin(DoNextModelAdmin):
83 """Manage the orders."""
84
85 list_display = (
86 "food_event",
87 "member_first_name",
88 "member_last_name",
89 "product",
90 "payment",
91 )
92
93 fields = (
94 "food_event",
95 "member",
96 "name",
97 "product",
98 "payment",
99 )
100
101 def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):
102 """Payment field widget."""
103 field = super().formfield_for_dbfield(db_field, request, **kwargs)
104 if db_field.name == "payment":
105 return Field(
106 widget=PaymentWidget(obj=obj), initial=field.initial, required=False,
107 )
108 return field
109
110 def save_model(self, request, obj, form, change):
111 """You can only save the orders if you have permission."""
112 if not is_organiser(request.member, obj.food_event.event):
113 raise PermissionDenied
114 return super().save_model(request, obj, form, change)
115
116 def has_view_permission(self, request, order=None):
117 """Only give view permission if the user is an organiser."""
118 if order is not None and not is_organiser(
119 request.member, order.food_event.event
120 ):
121 return False
122 return super().has_view_permission(request, order)
123
124 def has_change_permission(self, request, order=None):
125 """Only give change permission if the user is an organiser."""
126 if order is not None and not is_organiser(
127 request.member, order.food_event.event
128 ):
129 return False
130 return super().has_change_permission(request, order)
131
132 def has_delete_permission(self, request, order=None):
133 """Only give delete permission if the user is an organiser."""
134 if order is not None and not is_organiser(
135 request.member, order.food_event.event
136 ):
137 return False
138 return super().has_delete_permission(request, order)
139
[end of website/pizzas/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/admin.py b/website/pizzas/admin.py
--- a/website/pizzas/admin.py
+++ b/website/pizzas/admin.py
@@ -1,5 +1,4 @@
"""Registers admin interfaces for the pizzas module."""
-from django.conf import settings
from django.contrib import admin
from django.core.exceptions import PermissionDenied
from django.forms import Field
@@ -31,7 +30,7 @@
list_display = ("title", "start", "end", "notification_enabled", "orders")
date_hierarchy = "start"
exclude = ("end_reminder",)
- search_fields = [f"event__title_{l[0]}" for l in settings.LANGUAGES]
+ search_fields = ("event__title",)
autocomplete_fields = ("event",)
def notification_enabled(self, obj):
| {"golden_diff": "diff --git a/website/pizzas/admin.py b/website/pizzas/admin.py\n--- a/website/pizzas/admin.py\n+++ b/website/pizzas/admin.py\n@@ -1,5 +1,4 @@\n \"\"\"Registers admin interfaces for the pizzas module.\"\"\"\n-from django.conf import settings\n from django.contrib import admin\n from django.core.exceptions import PermissionDenied\n from django.forms import Field\n@@ -31,7 +30,7 @@\n list_display = (\"title\", \"start\", \"end\", \"notification_enabled\", \"orders\")\n date_hierarchy = \"start\"\n exclude = (\"end_reminder\",)\n- search_fields = [f\"event__title_{l[0]}\" for l in settings.LANGUAGES]\n+ search_fields = (\"event__title\",)\n autocomplete_fields = (\"event\",)\n \n def notification_enabled(self, obj):\n", "issue": "Error in pizza admin related to title_en\nSentry Issue: [CONCREXIT-9W](https://sentry.io/organizations/thalia/issues/2692657224/?referrer=github_integration)\n\n```\nFieldError: Related Field got invalid lookup: title_en\n(16 additional frame(s) were not displayed)\n...\n File \"django/db/models/sql/query.py\", line 1393, in add_q\n clause, _ = self._add_q(q_object, self.used_aliases)\n File \"django/db/models/sql/query.py\", line 1412, in _add_q\n child_clause, needed_inner = self.build_filter(\n File \"django/db/models/sql/query.py\", line 1265, in build_filter\n return self._add_q(\n File \"django/db/models/sql/query.py\", line 1412, in _add_q\n child_clause, needed_inner = self.build_filter(\n File \"django/db/models/sql/query.py\", line 1339, in build_filter\n raise FieldError('Related Field got invalid lookup: {}'.format(lookups[0]))\n```\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the pizzas module.\"\"\"\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.core.exceptions import PermissionDenied\nfrom django.forms import Field\nfrom django.urls import reverse, path\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext_lazy as _\n\nfrom events import services\nfrom events.services import is_organiser\nfrom payments.widgets import PaymentWidget\nfrom pizzas import admin_views\nfrom utils.admin import DoNextModelAdmin\nfrom .models import FoodOrder, FoodEvent, Product\n\n\[email protected](Product)\nclass ProductAdmin(admin.ModelAdmin):\n \"\"\"Manage the products.\"\"\"\n\n list_display = (\"name\", \"price\", \"available\")\n list_filter = (\"available\", \"restricted\")\n search_fields = (\"name\",)\n\n\[email protected](FoodEvent)\nclass FoodEventAdmin(admin.ModelAdmin):\n \"\"\"Manage the pizza events.\"\"\"\n\n list_display = (\"title\", \"start\", \"end\", \"notification_enabled\", \"orders\")\n date_hierarchy = \"start\"\n exclude = (\"end_reminder\",)\n search_fields = [f\"event__title_{l[0]}\" for l in settings.LANGUAGES]\n autocomplete_fields = (\"event\",)\n\n def notification_enabled(self, obj):\n return obj.send_notification\n\n notification_enabled.short_description = _(\"reminder\")\n notification_enabled.admin_order_field = \"send_notification\"\n notification_enabled.boolean = True\n\n def has_change_permission(self, request, obj=None):\n \"\"\"Only allow access to the change form if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj.event):\n return False\n return super().has_change_permission(request, obj)\n\n def has_delete_permission(self, request, obj=None):\n \"\"\"Only allow access to delete if the user is an organiser.\"\"\"\n if obj is not None and not 
services.is_organiser(request.member, obj.event):\n return False\n return super().has_delete_permission(request, obj)\n\n def orders(self, obj):\n url = reverse(\"admin:pizzas_foodevent_details\", kwargs={\"pk\": obj.pk})\n return format_html('<a href=\"{url}\">{text}</a>', url=url, text=_(\"Orders\"))\n\n def get_urls(self):\n urls = super().get_urls()\n custom_urls = [\n path(\n \"<int:pk>/details/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderDetails.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_details\",\n ),\n path(\n \"<int:pk>/overview/\",\n self.admin_site.admin_view(\n admin_views.PizzaOrderSummary.as_view(admin=self)\n ),\n name=\"pizzas_foodevent_overview\",\n ),\n ]\n return custom_urls + urls\n\n\[email protected](FoodOrder)\nclass FoodOrderAdmin(DoNextModelAdmin):\n \"\"\"Manage the orders.\"\"\"\n\n list_display = (\n \"food_event\",\n \"member_first_name\",\n \"member_last_name\",\n \"product\",\n \"payment\",\n )\n\n fields = (\n \"food_event\",\n \"member\",\n \"name\",\n \"product\",\n \"payment\",\n )\n\n def formfield_for_dbfield(self, db_field, request, obj=None, **kwargs):\n \"\"\"Payment field widget.\"\"\"\n field = super().formfield_for_dbfield(db_field, request, **kwargs)\n if db_field.name == \"payment\":\n return Field(\n widget=PaymentWidget(obj=obj), initial=field.initial, required=False,\n )\n return field\n\n def save_model(self, request, obj, form, change):\n \"\"\"You can only save the orders if you have permission.\"\"\"\n if not is_organiser(request.member, obj.food_event.event):\n raise PermissionDenied\n return super().save_model(request, obj, form, change)\n\n def has_view_permission(self, request, order=None):\n \"\"\"Only give view permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_view_permission(request, order)\n\n def has_change_permission(self, request, order=None):\n \"\"\"Only give change permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_change_permission(request, order)\n\n def has_delete_permission(self, request, order=None):\n \"\"\"Only give delete permission if the user is an organiser.\"\"\"\n if order is not None and not is_organiser(\n request.member, order.food_event.event\n ):\n return False\n return super().has_delete_permission(request, order)\n", "path": "website/pizzas/admin.py"}]} | 2,108 | 182 |
gh_patches_debug_6872 | rasdani/github-patches | git_diff | PennyLaneAI__pennylane-3386 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] qml.matrix(op) and op.matrix() are different for Barrier and WireCut
### Expected behavior
I expected them both to pass, or both to fail.
### Actual behavior
`qml.matrix(qml.Barrier(0))` passed, `qml.Barrier(0).matrix()` failed. Same for `qml.WireCut`.
### Additional information
Attached source code is for Barrier, but the same thing happens with WireCut.
### Source code
```shell
>>> import pennylane as qml
>>> op = qml.Barrier(0)
>>> qml.matrix(op)
array([[1.]])
>>> op.matrix()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py", line 1405, in matrix
canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)
File "/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py", line 446, in compute_matrix
raise MatrixUndefinedError
pennylane.operation.MatrixUndefinedError
```
### Tracebacks
_No response_
### System information
```shell
Name: PennyLane
Version: 0.27.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /Users/matthews/.pyenv/versions/3.9.13/envs/pennylane/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning
Platform info: macOS-12.6.1-arm64-arm-64bit
Python version: 3.9.13
Numpy version: 1.23.2
Scipy version: 1.9.0
Installed devices:
- default.gaussian (PennyLane-0.27.0)
- default.mixed (PennyLane-0.27.0)
- default.qubit (PennyLane-0.27.0)
- default.qubit.autograd (PennyLane-0.27.0)
- default.qubit.jax (PennyLane-0.27.0)
- default.qubit.tf (PennyLane-0.27.0)
- default.qubit.torch (PennyLane-0.27.0)
- default.qutrit (PennyLane-0.27.0)
- null.qubit (PennyLane-0.27.0)
- lightning.qubit (PennyLane-Lightning-0.26.1)
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
</issue>
<code>
[start of pennylane/ops/functions/matrix.py]
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 This module contains the qml.matrix function.
16 """
17 # pylint: disable=protected-access
18 import pennylane as qml
19
20
21 @qml.op_transform
22 def matrix(op, *, wire_order=None):
23 r"""The matrix representation of an operation or quantum circuit.
24
25 Args:
26 op (.Operator, pennylane.QNode, .QuantumTape, or Callable): An operator, quantum node, tape,
27 or function that applies quantum operations.
28 wire_order (Sequence[Any], optional): Order of the wires in the quantum circuit.
29 Defaults to the order in which the wires appear in the quantum function.
30
31 Returns:
32 tensor_like or function: Function which accepts the same arguments as the QNode or quantum
33 function. When called, this function will return the unitary matrix in the appropriate
34 autodiff framework (Autograd, TensorFlow, PyTorch, JAX) given its parameters.
35
36 **Example**
37
38 Given an instantiated operator, ``qml.matrix`` returns the matrix representation:
39
40 >>> op = qml.RX(0.54, wires=0)
41 >>> qml.matrix(op)
42 [[0.9637709+0.j 0. -0.26673144j]
43 [0. -0.26673144j 0.9637709+0.j ]]
44
45 It can also be used in a functional form:
46
47 >>> x = torch.tensor(0.6, requires_grad=True)
48 >>> matrix_fn = qml.matrix(qml.RX)
49 >>> matrix_fn(x, wires=0)
50 tensor([[0.9553+0.0000j, 0.0000-0.2955j],
51 [0.0000-0.2955j, 0.9553+0.0000j]], grad_fn=<AddBackward0>)
52
53 In its functional form, it is fully differentiable with respect to gate arguments:
54
55 >>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))
56 >>> loss.backward()
57 >>> x.grad
58 tensor(-0.5910)
59
60 This operator transform can also be applied to QNodes, tapes, and quantum functions
61 that contain multiple operations; see Usage Details below for more details.
62
63 .. details::
64 :title: Usage Details
65
66 ``qml.matrix`` can also be used with QNodes, tapes, or quantum functions that
67 contain multiple operations.
68
69 Consider the following quantum function:
70
71 .. code-block:: python3
72
73 def circuit(theta):
74 qml.RX(theta, wires=1)
75 qml.PauliZ(wires=0)
76
77 We can use ``qml.matrix`` to generate a new function that returns the unitary matrix
78 corresponding to the function ``circuit``:
79
80 >>> matrix_fn = qml.matrix(circuit)
81 >>> theta = np.pi / 4
82 >>> matrix_fn(theta)
83 array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],
84 [ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],
85 [ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],
86 [ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])
87
88 Note that since ``wire_order`` was not specified, the default order ``[1, 0]`` for ``circuit``
89 was used, and the unitary matrix corresponds to the operation :math:`Z\otimes R_X(\theta)`. To
90 obtain the matrix for :math:`R_X(\theta)\otimes Z`, specify ``wire_order=[0, 1]`` in the
91 function call:
92
93 >>> matrix = qml.matrix(circuit, wire_order=[0, 1])
94
95 You can also get the unitary matrix for operations on a subspace of a larger Hilbert space. For
96 example, with the same function ``circuit`` and ``wire_order=["a", 0, "b", 1]`` you obtain the
97 :math:`16\times 16` matrix for the operation :math:`I\otimes Z\otimes I\otimes R_X(\theta)`.
98
99 This unitary matrix can also be used in differentiable calculations. For example, consider the
100 following cost function:
101
102 .. code-block:: python
103
104 def circuit(theta):
105 qml.RX(theta, wires=1) qml.PauliZ(wires=0)
106 qml.CNOT(wires=[0, 1])
107
108 def cost(theta):
109 matrix = qml.matrix(circuit)(theta)
110 return np.real(np.trace(matrix))
111
112 Since this cost function returns a real scalar as a function of ``theta``, we can differentiate
113 it:
114
115 >>> theta = np.array(0.3, requires_grad=True)
116 >>> cost(theta)
117 1.9775421558720845
118 >>> qml.grad(cost)(theta)
119 -0.14943813247359922
120 """
121 if isinstance(op, qml.operation.Tensor) and wire_order is not None:
122 op = 1.0 * op # convert to a Hamiltonian
123
124 if isinstance(op, qml.Hamiltonian):
125 return qml.utils.sparse_hamiltonian(op, wires=wire_order).toarray()
126
127 return op.matrix(wire_order=wire_order)
128
129
130 @matrix.tape_transform
131 def _matrix(tape, wire_order=None):
132 """Defines how matrix works if applied to a tape containing multiple operations."""
133 params = tape.get_parameters(trainable_only=False)
134 interface = qml.math.get_interface(*params)
135
136 wire_order = wire_order or tape.wires
137
138 # initialize the unitary matrix
139 result = qml.math.eye(2 ** len(wire_order), like=interface)
140
141 for op in tape.operations:
142 U = matrix(op, wire_order=wire_order)
143 # Coerce the matrices U and result and use matrix multiplication. Broadcasted axes
144 # are handled correctly automatically by ``matmul`` (See e.g. NumPy documentation)
145 result = qml.math.matmul(*qml.math.coerce([U, result], like=interface), like=interface)
146
147 return result
148
[end of pennylane/ops/functions/matrix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pennylane/ops/functions/matrix.py b/pennylane/ops/functions/matrix.py
--- a/pennylane/ops/functions/matrix.py
+++ b/pennylane/ops/functions/matrix.py
@@ -130,6 +130,8 @@
@matrix.tape_transform
def _matrix(tape, wire_order=None):
"""Defines how matrix works if applied to a tape containing multiple operations."""
+ if not tape.wires:
+ raise qml.operation.MatrixUndefinedError
params = tape.get_parameters(trainable_only=False)
interface = qml.math.get_interface(*params)
| {"golden_diff": "diff --git a/pennylane/ops/functions/matrix.py b/pennylane/ops/functions/matrix.py\n--- a/pennylane/ops/functions/matrix.py\n+++ b/pennylane/ops/functions/matrix.py\n@@ -130,6 +130,8 @@\n @matrix.tape_transform\r\n def _matrix(tape, wire_order=None):\r\n \"\"\"Defines how matrix works if applied to a tape containing multiple operations.\"\"\"\r\n+ if not tape.wires:\r\n+ raise qml.operation.MatrixUndefinedError\r\n params = tape.get_parameters(trainable_only=False)\r\n interface = qml.math.get_interface(*params)\n", "issue": "[BUG] qml.matrix(op) and op.matrix() are different for Barrier and WireCut\n### Expected behavior\n\nI expected them both to pass, or both to fail.\n\n### Actual behavior\n\n`qml.matrix(qml.Barrier(0))` passed, `qml.Barrier(0).matrix()` failed. Same for `qml.WireCut`.\n\n### Additional information\n\nAttached source code is for Barrier, but the same thing happens with WireCut.\n\n### Source code\n\n```shell\n>>> import pennylane as qml\r\n>>> op = qml.Barrier(0)\r\n>>> qml.matrix(op)\r\narray([[1.]])\r\n>>> op.matrix()\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py\", line 1405, in matrix\r\n canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)\r\n File \"/Users/matthews/src/github.com/PennyLaneAI/pennylane/pennylane/operation.py\", line 446, in compute_matrix\r\n raise MatrixUndefinedError\r\npennylane.operation.MatrixUndefinedError\n```\n\n\n### Tracebacks\n\n_No response_\n\n### System information\n\n```shell\nName: PennyLane\r\nVersion: 0.27.0\r\nSummary: PennyLane is a Python quantum machine learning library by Xanadu Inc.\r\nHome-page: https://github.com/XanaduAI/pennylane\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License 2.0\r\nLocation: /Users/matthews/.pyenv/versions/3.9.13/envs/pennylane/lib/python3.9/site-packages\r\nRequires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml\r\nRequired-by: PennyLane-Lightning\r\n\r\nPlatform info: macOS-12.6.1-arm64-arm-64bit\r\nPython version: 3.9.13\r\nNumpy version: 1.23.2\r\nScipy version: 1.9.0\r\nInstalled devices:\r\n- default.gaussian (PennyLane-0.27.0)\r\n- default.mixed (PennyLane-0.27.0)\r\n- default.qubit (PennyLane-0.27.0)\r\n- default.qubit.autograd (PennyLane-0.27.0)\r\n- default.qubit.jax (PennyLane-0.27.0)\r\n- default.qubit.tf (PennyLane-0.27.0)\r\n- default.qubit.torch (PennyLane-0.27.0)\r\n- default.qutrit (PennyLane-0.27.0)\r\n- null.qubit (PennyLane-0.27.0)\r\n- lightning.qubit (PennyLane-Lightning-0.26.1)\n```\n\n\n### Existing GitHub issues\n\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"\r\nThis module contains the qml.matrix 
function.\r\n\"\"\"\r\n# pylint: disable=protected-access\r\nimport pennylane as qml\r\n\r\n\r\[email protected]_transform\r\ndef matrix(op, *, wire_order=None):\r\n r\"\"\"The matrix representation of an operation or quantum circuit.\r\n\r\n Args:\r\n op (.Operator, pennylane.QNode, .QuantumTape, or Callable): An operator, quantum node, tape,\r\n or function that applies quantum operations.\r\n wire_order (Sequence[Any], optional): Order of the wires in the quantum circuit.\r\n Defaults to the order in which the wires appear in the quantum function.\r\n\r\n Returns:\r\n tensor_like or function: Function which accepts the same arguments as the QNode or quantum\r\n function. When called, this function will return the unitary matrix in the appropriate\r\n autodiff framework (Autograd, TensorFlow, PyTorch, JAX) given its parameters.\r\n\r\n **Example**\r\n\r\n Given an instantiated operator, ``qml.matrix`` returns the matrix representation:\r\n\r\n >>> op = qml.RX(0.54, wires=0)\r\n >>> qml.matrix(op)\r\n [[0.9637709+0.j 0. -0.26673144j]\r\n [0. -0.26673144j 0.9637709+0.j ]]\r\n\r\n It can also be used in a functional form:\r\n\r\n >>> x = torch.tensor(0.6, requires_grad=True)\r\n >>> matrix_fn = qml.matrix(qml.RX)\r\n >>> matrix_fn(x, wires=0)\r\n tensor([[0.9553+0.0000j, 0.0000-0.2955j],\r\n [0.0000-0.2955j, 0.9553+0.0000j]], grad_fn=<AddBackward0>)\r\n\r\n In its functional form, it is fully differentiable with respect to gate arguments:\r\n\r\n >>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))\r\n >>> loss.backward()\r\n >>> x.grad\r\n tensor(-0.5910)\r\n\r\n This operator transform can also be applied to QNodes, tapes, and quantum functions\r\n that contain multiple operations; see Usage Details below for more details.\r\n\r\n .. details::\r\n :title: Usage Details\r\n\r\n ``qml.matrix`` can also be used with QNodes, tapes, or quantum functions that\r\n contain multiple operations.\r\n\r\n Consider the following quantum function:\r\n\r\n .. code-block:: python3\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1)\r\n qml.PauliZ(wires=0)\r\n\r\n We can use ``qml.matrix`` to generate a new function that returns the unitary matrix\r\n corresponding to the function ``circuit``:\r\n\r\n >>> matrix_fn = qml.matrix(circuit)\r\n >>> theta = np.pi / 4\r\n >>> matrix_fn(theta)\r\n array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],\r\n [ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],\r\n [ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],\r\n [ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])\r\n\r\n Note that since ``wire_order`` was not specified, the default order ``[1, 0]`` for ``circuit``\r\n was used, and the unitary matrix corresponds to the operation :math:`Z\\otimes R_X(\\theta)`. To\r\n obtain the matrix for :math:`R_X(\\theta)\\otimes Z`, specify ``wire_order=[0, 1]`` in the\r\n function call:\r\n\r\n >>> matrix = qml.matrix(circuit, wire_order=[0, 1])\r\n\r\n You can also get the unitary matrix for operations on a subspace of a larger Hilbert space. For\r\n example, with the same function ``circuit`` and ``wire_order=[\"a\", 0, \"b\", 1]`` you obtain the\r\n :math:`16\\times 16` matrix for the operation :math:`I\\otimes Z\\otimes I\\otimes R_X(\\theta)`.\r\n\r\n This unitary matrix can also be used in differentiable calculations. For example, consider the\r\n following cost function:\r\n\r\n .. 
code-block:: python\r\n\r\n def circuit(theta):\r\n qml.RX(theta, wires=1) qml.PauliZ(wires=0)\r\n qml.CNOT(wires=[0, 1])\r\n\r\n def cost(theta):\r\n matrix = qml.matrix(circuit)(theta)\r\n return np.real(np.trace(matrix))\r\n\r\n Since this cost function returns a real scalar as a function of ``theta``, we can differentiate\r\n it:\r\n\r\n >>> theta = np.array(0.3, requires_grad=True)\r\n >>> cost(theta)\r\n 1.9775421558720845\r\n >>> qml.grad(cost)(theta)\r\n -0.14943813247359922\r\n \"\"\"\r\n if isinstance(op, qml.operation.Tensor) and wire_order is not None:\r\n op = 1.0 * op # convert to a Hamiltonian\r\n\r\n if isinstance(op, qml.Hamiltonian):\r\n return qml.utils.sparse_hamiltonian(op, wires=wire_order).toarray()\r\n\r\n return op.matrix(wire_order=wire_order)\r\n\r\n\r\[email protected]_transform\r\ndef _matrix(tape, wire_order=None):\r\n \"\"\"Defines how matrix works if applied to a tape containing multiple operations.\"\"\"\r\n params = tape.get_parameters(trainable_only=False)\r\n interface = qml.math.get_interface(*params)\r\n\r\n wire_order = wire_order or tape.wires\r\n\r\n # initialize the unitary matrix\r\n result = qml.math.eye(2 ** len(wire_order), like=interface)\r\n\r\n for op in tape.operations:\r\n U = matrix(op, wire_order=wire_order)\r\n # Coerce the matrices U and result and use matrix multiplication. Broadcasted axes\r\n # are handled correctly automatically by ``matmul`` (See e.g. NumPy documentation)\r\n result = qml.math.matmul(*qml.math.coerce([U, result], like=interface), like=interface)\r\n\r\n return result\r\n", "path": "pennylane/ops/functions/matrix.py"}]} | 3,225 | 138 |
gh_patches_debug_6632 | rasdani/github-patches | git_diff | cloudtools__troposphere-1695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
implement AWS::ServiceCatalog changes from May 14, 2020 update
</issue>
<code>
[start of troposphere/servicecatalog.py]
1 # Copyright (c) 2012-2018, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7 from .validators import boolean, integer
8
9
10 class AcceptedPortfolioShare(AWSObject):
11 resource_type = "AWS::ServiceCatalog::AcceptedPortfolioShare"
12
13 props = {
14 'AcceptLanguage': (basestring, False),
15 'PortfolioId': (basestring, True),
16 }
17
18
19 class ProvisioningArtifactProperties(AWSProperty):
20 props = {
21 'Description': (basestring, False),
22 'DisableTemplateValidation': (boolean, False),
23 'Info': (dict, True),
24 'Name': (basestring, False),
25 }
26
27
28 class CloudFormationProduct(AWSObject):
29 resource_type = "AWS::ServiceCatalog::CloudFormationProduct"
30
31 props = {
32 'AcceptLanguage': (basestring, False),
33 'Description': (basestring, False),
34 'Distributor': (basestring, False),
35 'Name': (basestring, True),
36 'Owner': (basestring, True),
37 'ProvisioningArtifactParameters':
38 ([ProvisioningArtifactProperties], True),
39 'SupportDescription': (basestring, False),
40 'SupportEmail': (basestring, False),
41 'SupportUrl': (basestring, False),
42 'Tags': (Tags, False),
43 }
44
45
46 class ProvisioningParameter(AWSProperty):
47 props = {
48 'Key': (basestring, False),
49 'Value': (basestring, False),
50 }
51
52
53 class ProvisioningPreferences(AWSProperty):
54 props = {
55 'StackSetAccounts': ([basestring], False),
56 'StackSetFailureToleranceCount': (integer, False),
57 'StackSetFailureTolerancePercentage': (integer, False),
58 'StackSetMaxConcurrencyCount': (integer, False),
59 'StackSetMaxConcurrencyPercentage': (integer, False),
60 'StackSetOperationType': (basestring, False),
61 'StackSetRegions': ([basestring], False),
62 }
63
64
65 class CloudFormationProvisionedProduct(AWSObject):
66 resource_type = "AWS::ServiceCatalog::CloudFormationProvisionedProduct"
67
68 props = {
69 'AcceptLanguage': (basestring, False),
70 'NotificationArns': ([basestring], False),
71 'PathId': (basestring, False),
72 'ProductId': (basestring, False),
73 'ProductName': (basestring, False),
74 'ProvisionedProductName': (basestring, False),
75 'ProvisioningArtifactId': (basestring, False),
76 'ProvisioningArtifactName': (basestring, False),
77 'ProvisioningParameters': ([ProvisioningParameter], False),
78 'ProvisioningPreferences': (ProvisioningPreferences, False),
79 'Tags': (Tags, False),
80 }
81
82
83 class LaunchNotificationConstraint(AWSObject):
84 resource_type = "AWS::ServiceCatalog::LaunchNotificationConstraint"
85
86 props = {
87 'AcceptLanguage': (basestring, False),
88 'Description': (basestring, False),
89 'NotificationArns': ([basestring], True),
90 'PortfolioId': (basestring, True),
91 'ProductId': (basestring, True),
92 }
93
94
95 class LaunchRoleConstraint(AWSObject):
96 resource_type = "AWS::ServiceCatalog::LaunchRoleConstraint"
97
98 props = {
99 'AcceptLanguage': (basestring, False),
100 'Description': (basestring, False),
101 'LocalRoleName': (basestring, False),
102 'PortfolioId': (basestring, True),
103 'ProductId': (basestring, True),
104 'RoleArn': (basestring, True),
105 }
106
107
108 class LaunchTemplateConstraint(AWSObject):
109 resource_type = "AWS::ServiceCatalog::LaunchTemplateConstraint"
110
111 props = {
112 'AcceptLanguage': (basestring, False),
113 'Description': (basestring, False),
114 'PortfolioId': (basestring, True),
115 'ProductId': (basestring, True),
116 'Rules': (basestring, True),
117 }
118
119
120 class Portfolio(AWSObject):
121 resource_type = "AWS::ServiceCatalog::Portfolio"
122
123 props = {
124 'AcceptLanguage': (basestring, False),
125 'Description': (basestring, False),
126 'DisplayName': (basestring, True),
127 'ProviderName': (basestring, True),
128 'Tags': (Tags, False),
129 }
130
131
132 class PortfolioPrincipalAssociation(AWSObject):
133 resource_type = "AWS::ServiceCatalog::PortfolioPrincipalAssociation"
134
135 props = {
136 'AcceptLanguage': (basestring, False),
137 'PortfolioId': (basestring, True),
138 'PrincipalARN': (basestring, True),
139 'PrincipalType': (basestring, True),
140 }
141
142
143 class PortfolioProductAssociation(AWSObject):
144 resource_type = "AWS::ServiceCatalog::PortfolioProductAssociation"
145
146 props = {
147 'AcceptLanguage': (basestring, False),
148 'PortfolioId': (basestring, True),
149 'ProductId': (basestring, True),
150 'SourcePortfolioId': (basestring, False),
151 }
152
153
154 class PortfolioShare(AWSObject):
155 resource_type = "AWS::ServiceCatalog::PortfolioShare"
156
157 props = {
158 'AcceptLanguage': (basestring, False),
159 'AccountId': (basestring, True),
160 'PortfolioId': (basestring, True),
161 }
162
163
164 def validate_tag_update(update):
165 valid_tag_update_values = [
166 "ALLOWED",
167 "NOT_ALLOWED",
168 ]
169 if update not in valid_tag_update_values:
170 raise ValueError(
171 "{} is not a valid tag update value".format(update)
172 )
173 return update
174
175
176 class ResourceUpdateConstraint(AWSObject):
177 resource_type = "AWS::ServiceCatalog::ResourceUpdateConstraint"
178
179 props = {
180 'AcceptLanguage': (basestring, False),
181 'Description': (basestring, False),
182 'PortfolioId': (basestring, True),
183 'ProductId': (basestring, True),
184 'TagUpdateOnProvisionedProduct': (validate_tag_update, True),
185 }
186
187
188 class StackSetConstraint(AWSObject):
189 resource_type = "AWS::ServiceCatalog::StackSetConstraint"
190
191 props = {
192 'AcceptLanguage': (basestring, False),
193 'AccountList': ([basestring], True),
194 'AdminRole': (basestring, True),
195 'Description': (basestring, True),
196 'ExecutionRole': (basestring, True),
197 'PortfolioId': (basestring, True),
198 'ProductId': (basestring, True),
199 'RegionList': ([basestring], True),
200 'StackInstanceControl': (basestring, True),
201 }
202
203
204 class TagOption(AWSObject):
205 resource_type = "AWS::ServiceCatalog::TagOption"
206
207 props = {
208 'Active': (boolean, False),
209 'Key': (basestring, True),
210 'Value': (basestring, True),
211 }
212
213
214 class TagOptionAssociation(AWSObject):
215 resource_type = "AWS::ServiceCatalog::TagOptionAssociation"
216
217 props = {
218 'ResourceId': (basestring, True),
219 'TagOptionId': (basestring, True),
220 }
221
[end of troposphere/servicecatalog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/servicecatalog.py b/troposphere/servicecatalog.py
--- a/troposphere/servicecatalog.py
+++ b/troposphere/servicecatalog.py
@@ -36,6 +36,7 @@
'Owner': (basestring, True),
'ProvisioningArtifactParameters':
([ProvisioningArtifactProperties], True),
+ 'ReplaceProvisioningArtifacts': (boolean, False),
'SupportDescription': (basestring, False),
'SupportEmail': (basestring, False),
'SupportUrl': (basestring, False),
| {"golden_diff": "diff --git a/troposphere/servicecatalog.py b/troposphere/servicecatalog.py\n--- a/troposphere/servicecatalog.py\n+++ b/troposphere/servicecatalog.py\n@@ -36,6 +36,7 @@\n 'Owner': (basestring, True),\n 'ProvisioningArtifactParameters':\n ([ProvisioningArtifactProperties], True),\n+ 'ReplaceProvisioningArtifacts': (boolean, False),\n 'SupportDescription': (basestring, False),\n 'SupportEmail': (basestring, False),\n 'SupportUrl': (basestring, False),\n", "issue": "implement AWS::ServiceCatalog changes from May 14, 2020 update\n\n", "before_files": [{"content": "# Copyright (c) 2012-2018, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, Tags\nfrom .validators import boolean, integer\n\n\nclass AcceptedPortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::AcceptedPortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n }\n\n\nclass ProvisioningArtifactProperties(AWSProperty):\n props = {\n 'Description': (basestring, False),\n 'DisableTemplateValidation': (boolean, False),\n 'Info': (dict, True),\n 'Name': (basestring, False),\n }\n\n\nclass CloudFormationProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'Distributor': (basestring, False),\n 'Name': (basestring, True),\n 'Owner': (basestring, True),\n 'ProvisioningArtifactParameters':\n ([ProvisioningArtifactProperties], True),\n 'SupportDescription': (basestring, False),\n 'SupportEmail': (basestring, False),\n 'SupportUrl': (basestring, False),\n 'Tags': (Tags, False),\n }\n\n\nclass ProvisioningParameter(AWSProperty):\n props = {\n 'Key': (basestring, False),\n 'Value': (basestring, False),\n }\n\n\nclass ProvisioningPreferences(AWSProperty):\n props = {\n 'StackSetAccounts': ([basestring], False),\n 'StackSetFailureToleranceCount': (integer, False),\n 'StackSetFailureTolerancePercentage': (integer, False),\n 'StackSetMaxConcurrencyCount': (integer, False),\n 'StackSetMaxConcurrencyPercentage': (integer, False),\n 'StackSetOperationType': (basestring, False),\n 'StackSetRegions': ([basestring], False),\n }\n\n\nclass CloudFormationProvisionedProduct(AWSObject):\n resource_type = \"AWS::ServiceCatalog::CloudFormationProvisionedProduct\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'NotificationArns': ([basestring], False),\n 'PathId': (basestring, False),\n 'ProductId': (basestring, False),\n 'ProductName': (basestring, False),\n 'ProvisionedProductName': (basestring, False),\n 'ProvisioningArtifactId': (basestring, False),\n 'ProvisioningArtifactName': (basestring, False),\n 'ProvisioningParameters': ([ProvisioningParameter], False),\n 'ProvisioningPreferences': (ProvisioningPreferences, False),\n 'Tags': (Tags, False),\n }\n\n\nclass LaunchNotificationConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchNotificationConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'NotificationArns': ([basestring], True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n }\n\n\nclass LaunchRoleConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchRoleConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'LocalRoleName': (basestring, False),\n 'PortfolioId': (basestring, True),\n 
'ProductId': (basestring, True),\n 'RoleArn': (basestring, True),\n }\n\n\nclass LaunchTemplateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::LaunchTemplateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'Rules': (basestring, True),\n }\n\n\nclass Portfolio(AWSObject):\n resource_type = \"AWS::ServiceCatalog::Portfolio\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'DisplayName': (basestring, True),\n 'ProviderName': (basestring, True),\n 'Tags': (Tags, False),\n }\n\n\nclass PortfolioPrincipalAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioPrincipalAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'PrincipalARN': (basestring, True),\n 'PrincipalType': (basestring, True),\n }\n\n\nclass PortfolioProductAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioProductAssociation\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'SourcePortfolioId': (basestring, False),\n }\n\n\nclass PortfolioShare(AWSObject):\n resource_type = \"AWS::ServiceCatalog::PortfolioShare\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountId': (basestring, True),\n 'PortfolioId': (basestring, True),\n }\n\n\ndef validate_tag_update(update):\n valid_tag_update_values = [\n \"ALLOWED\",\n \"NOT_ALLOWED\",\n ]\n if update not in valid_tag_update_values:\n raise ValueError(\n \"{} is not a valid tag update value\".format(update)\n )\n return update\n\n\nclass ResourceUpdateConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::ResourceUpdateConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'Description': (basestring, False),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'TagUpdateOnProvisionedProduct': (validate_tag_update, True),\n }\n\n\nclass StackSetConstraint(AWSObject):\n resource_type = \"AWS::ServiceCatalog::StackSetConstraint\"\n\n props = {\n 'AcceptLanguage': (basestring, False),\n 'AccountList': ([basestring], True),\n 'AdminRole': (basestring, True),\n 'Description': (basestring, True),\n 'ExecutionRole': (basestring, True),\n 'PortfolioId': (basestring, True),\n 'ProductId': (basestring, True),\n 'RegionList': ([basestring], True),\n 'StackInstanceControl': (basestring, True),\n }\n\n\nclass TagOption(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOption\"\n\n props = {\n 'Active': (boolean, False),\n 'Key': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass TagOptionAssociation(AWSObject):\n resource_type = \"AWS::ServiceCatalog::TagOptionAssociation\"\n\n props = {\n 'ResourceId': (basestring, True),\n 'TagOptionId': (basestring, True),\n }\n", "path": "troposphere/servicecatalog.py"}]} | 2,677 | 124 |
gh_patches_debug_50237 | rasdani/github-patches | git_diff | sopel-irc__sopel-914 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The example() decorator is improperly "fixing" the example text.
Version: 6.0.0
Code excerpt
``` python
@commands('abort')
@example(".abort")
```
Help output:
``` text
08:49 bgallew: .help abort
08:49 DevEgo: bgallew: Abort any/all pending power management commands.
08:49 DevEgo: bgallew: e.g. \.bort
```
If you update the prefix setting to be, say, '.|!', it's even more broken.
FTR, this affects Sopel's built-ins, too:
``` text
08:54 bgallew: .help help
08:54 DevEgo: bgallew: Shows a command's documentation, and possibly an example.
08:54 DevEgo: bgallew: e.g. \.elp tell
```
</issue>
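A minimal sketch of the substitution that would produce this output, assuming the help prefix is taken verbatim from the command-prefix regex (for example `\.`). The variable names are illustrative, not Sopel's exact code:

```python
# Hypothetical illustration of the suspected first-character substitution.
help_prefix = r"\."   # assumed: the regex command prefix reused as the help prefix
example = ".abort"    # the text passed to @example(".abort")

if example[0] != help_prefix:                            # '.' != '\.'
    example = help_prefix + example[len(help_prefix):]   # '\.' + 'bort'

print(example)  # prints: \.bort  (matches the broken help output above)
```

With a multi-character prefix setting such as `.|!`, the slice drops even more of the command name (`.|!ort`), which matches the report that such settings are even more broken.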
<code>
[start of sopel/loader.py]
1 # coding=utf-8
2 from __future__ import unicode_literals, absolute_import
3
4 import imp
5 import os.path
6 import re
7 import sys
8
9 from sopel.tools import itervalues, get_command_regexp
10
11 if sys.version_info.major >= 3:
12 basestring = (str, bytes)
13
14
15 def get_module_description(path):
16 good_file = (os.path.isfile(path) and path.endswith('.py')
17 and not path.startswith('_'))
18 good_dir = (os.path.isdir(path) and
19 os.path.isfile(os.path.join(path, '__init__.py')))
20 if good_file:
21 name = os.path.basename(path)[:-3]
22 return (name, path, imp.PY_SOURCE)
23 elif good_dir:
24 name = os.path.basename(path)
25 return (name, path, imp.PKG_DIRECTORY)
26 else:
27 return None
28
29
30 def _update_modules_from_dir(modules, directory):
31 # Note that this modifies modules in place
32 for path in os.listdir(directory):
33 path = os.path.join(directory, path)
34 result = get_module_description(path)
35 if result:
36 modules[result[0]] = result[1:]
37
38
39 def enumerate_modules(config, show_all=False):
40 """Map the names of modules to the location of their file.
41
42 Return a dict mapping the names of modules to a tuple of the module name,
43 the pathname and either `imp.PY_SOURCE` or `imp.PKG_DIRECTORY`. This
44 searches the regular modules directory and all directories specified in the
45 `core.extra` attribute of the `config` object. If two modules have the same
46 name, the last one to be found will be returned and the rest will be
47 ignored. Modules are found starting in the regular directory, followed by
48 `~/.sopel/modules`, and then through the extra directories in the order
49 that the are specified.
50
51 If `show_all` is given as `True`, the `enable` and `exclude`
52 configuration options will be ignored, and all modules will be shown
53 (though duplicates will still be ignored as above).
54 """
55 modules = {}
56
57 # First, add modules from the regular modules directory
58 main_dir = os.path.dirname(os.path.abspath(__file__))
59 modules_dir = os.path.join(main_dir, 'modules')
60 _update_modules_from_dir(modules, modules_dir)
61 for path in os.listdir(modules_dir):
62 break
63
64 # Then, find PyPI installed modules
65 # TODO does this work with all possible install mechanisms?
66 try:
67 import sopel_modules
68 except:
69 pass
70 else:
71 for directory in sopel_modules.__path__:
72 _update_modules_from_dir(modules, directory)
73
74 # Next, look in ~/.sopel/modules
75 home_modules_dir = os.path.join(config.homedir, 'modules')
76 if not os.path.isdir(home_modules_dir):
77 os.makedirs(home_modules_dir)
78 _update_modules_from_dir(modules, home_modules_dir)
79
80 # Last, look at all the extra directories.
81 for directory in config.core.extra:
82 _update_modules_from_dir(modules, directory)
83
84 # Coretasks is special. No custom user coretasks.
85 ct_path = os.path.join(main_dir, 'coretasks.py')
86 modules['coretasks'] = (ct_path, imp.PY_SOURCE)
87
88 # If caller wants all of them, don't apply white and blacklists
89 if show_all:
90 return modules
91
92 # Apply whitelist, if present
93 enable = config.core.enable
94 if enable:
95 enabled_modules = {'coretasks': modules['coretasks']}
96 for module in enable:
97 if module in modules:
98 enabled_modules[module] = modules[module]
99 modules = enabled_modules
100
101 # Apply blacklist, if present
102 exclude = config.core.exclude
103 for module in exclude:
104 if module in modules:
105 del modules[module]
106
107 return modules
108
109
110 def compile_rule(nick, pattern):
111 pattern = pattern.replace('$nickname', nick)
112 pattern = pattern.replace('$nick', r'{}[,:]\s+'.format(nick))
113 flags = re.IGNORECASE
114 if '\n' in pattern:
115 flags |= re.VERBOSE
116 return re.compile(pattern, flags)
117
118
119 def trim_docstring(doc):
120 """Get the docstring as a series of lines that can be sent"""
121 if not doc:
122 return []
123 lines = doc.expandtabs().splitlines()
124 indent = sys.maxsize
125 for line in lines[1:]:
126 stripped = line.lstrip()
127 if stripped:
128 indent = min(indent, len(line) - len(stripped))
129 trimmed = [lines[0].strip()]
130 if indent < sys.maxsize:
131 for line in lines[1:]:
132 trimmed.append(line[:].rstrip())
133 while trimmed and not trimmed[-1]:
134 trimmed.pop()
135 while trimmed and not trimmed[0]:
136 trimmed.pop(0)
137 return trimmed
138
139
140 def clean_callable(func, config):
141 """Compiles the regexes, moves commands into func.rule, fixes up docs and
142 puts them in func._docs, and sets defaults"""
143 nick = config.core.nick
144 prefix = config.core.prefix
145 help_prefix = config.core.prefix
146 func._docs = {}
147 doc = trim_docstring(func.__doc__)
148 example = None
149
150 func.unblockable = getattr(func, 'unblockable', True)
151 func.priority = getattr(func, 'priority', 'medium')
152 func.thread = getattr(func, 'thread', True)
153 func.rate = getattr(func, 'rate', 0)
154
155 if not hasattr(func, 'event'):
156 func.event = ['PRIVMSG']
157 else:
158 if isinstance(func.event, basestring):
159 func.event = [func.event.upper()]
160 else:
161 func.event = [event.upper() for event in func.event]
162
163 if hasattr(func, 'rule'):
164 if isinstance(func.rule, basestring):
165 func.rule = [func.rule]
166 func.rule = [compile_rule(nick, rule) for rule in func.rule]
167
168 if hasattr(func, 'commands'):
169 func.rule = getattr(func, 'rule', [])
170 for command in func.commands:
171 regexp = get_command_regexp(prefix, command)
172 func.rule.append(regexp)
173 if hasattr(func, 'example'):
174 example = func.example[0]["example"]
175 example = example.replace('$nickname', nick)
176 if example[0] != help_prefix:
177 example = help_prefix + example[len(help_prefix):]
178 if doc or example:
179 for command in func.commands:
180 func._docs[command] = (doc, example)
181
182
183 def load_module(name, path, type_):
184 """Load a module, and sort out the callables and shutdowns"""
185 if type_ == imp.PY_SOURCE:
186 with open(path) as mod:
187 module = imp.load_module(name, mod, path, ('.py', 'U', type_))
188 elif type_ == imp.PKG_DIRECTORY:
189 module = imp.load_module(name, None, path, ('', '', type_))
190 else:
191 raise TypeError('Unsupported module type')
192 return module, os.path.getmtime(path)
193
194
195 def is_triggerable(obj):
196 return any(hasattr(obj, attr) for attr in ('rule', 'rule', 'intent',
197 'commands'))
198
199
200 def clean_module(module, config):
201 callables = []
202 shutdowns = []
203 jobs = []
204 for obj in itervalues(vars(module)):
205 if callable(obj):
206 if getattr(obj, '__name__', None) == 'shutdown':
207 shutdowns.append(obj)
208 elif is_triggerable(obj):
209 clean_callable(obj, config)
210 callables.append(obj)
211 elif hasattr(obj, 'interval'):
212 clean_callable(obj, config)
213 jobs.append(obj)
214 return callables, jobs, shutdowns
215
[end of sopel/loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/loader.py b/sopel/loader.py
--- a/sopel/loader.py
+++ b/sopel/loader.py
@@ -142,7 +142,7 @@
puts them in func._docs, and sets defaults"""
nick = config.core.nick
prefix = config.core.prefix
- help_prefix = config.core.prefix
+ help_prefix = config.core.help_prefix
func._docs = {}
doc = trim_docstring(func.__doc__)
example = None
| {"golden_diff": "diff --git a/sopel/loader.py b/sopel/loader.py\n--- a/sopel/loader.py\n+++ b/sopel/loader.py\n@@ -142,7 +142,7 @@\n puts them in func._docs, and sets defaults\"\"\"\n nick = config.core.nick\n prefix = config.core.prefix\n- help_prefix = config.core.prefix\n+ help_prefix = config.core.help_prefix\n func._docs = {}\n doc = trim_docstring(func.__doc__)\n example = None\n", "issue": "The example() decorator is improperly \"fixing\" the example text.\nVersion: 6.0.0\n\nCode excerpt\n\n``` python\n@commands('abort')\n@example(\".abort\")\n```\n\nHelp output:\n\n``` text\n08:49 bgallew: .help abort\n08:49 DevEgo: bgallew: Abort any/all pending power management commands.\n08:49 DevEgo: bgallew: e.g. \\.bort\n```\n\nIf you update the prefix setting to be, say, '.|!', it's even more broken.\n\nFTR, this affects Sopel's built-ins, too:\n\n``` text\n08:54 bgallew: .help help\n08:54 DevEgo: bgallew: Shows a command's documentation, and possibly an example.\n08:54 DevEgo: bgallew: e.g. \\.elp tell\n```\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import unicode_literals, absolute_import\n\nimport imp\nimport os.path\nimport re\nimport sys\n\nfrom sopel.tools import itervalues, get_command_regexp\n\nif sys.version_info.major >= 3:\n basestring = (str, bytes)\n\n\ndef get_module_description(path):\n good_file = (os.path.isfile(path) and path.endswith('.py')\n and not path.startswith('_'))\n good_dir = (os.path.isdir(path) and\n os.path.isfile(os.path.join(path, '__init__.py')))\n if good_file:\n name = os.path.basename(path)[:-3]\n return (name, path, imp.PY_SOURCE)\n elif good_dir:\n name = os.path.basename(path)\n return (name, path, imp.PKG_DIRECTORY)\n else:\n return None\n\n\ndef _update_modules_from_dir(modules, directory):\n # Note that this modifies modules in place\n for path in os.listdir(directory):\n path = os.path.join(directory, path)\n result = get_module_description(path)\n if result:\n modules[result[0]] = result[1:]\n\n\ndef enumerate_modules(config, show_all=False):\n \"\"\"Map the names of modules to the location of their file.\n\n Return a dict mapping the names of modules to a tuple of the module name,\n the pathname and either `imp.PY_SOURCE` or `imp.PKG_DIRECTORY`. This\n searches the regular modules directory and all directories specified in the\n `core.extra` attribute of the `config` object. If two modules have the same\n name, the last one to be found will be returned and the rest will be\n ignored. 
Modules are found starting in the regular directory, followed by\n `~/.sopel/modules`, and then through the extra directories in the order\n that the are specified.\n\n If `show_all` is given as `True`, the `enable` and `exclude`\n configuration options will be ignored, and all modules will be shown\n (though duplicates will still be ignored as above).\n \"\"\"\n modules = {}\n\n # First, add modules from the regular modules directory\n main_dir = os.path.dirname(os.path.abspath(__file__))\n modules_dir = os.path.join(main_dir, 'modules')\n _update_modules_from_dir(modules, modules_dir)\n for path in os.listdir(modules_dir):\n break\n\n # Then, find PyPI installed modules\n # TODO does this work with all possible install mechanisms?\n try:\n import sopel_modules\n except:\n pass\n else:\n for directory in sopel_modules.__path__:\n _update_modules_from_dir(modules, directory)\n\n # Next, look in ~/.sopel/modules\n home_modules_dir = os.path.join(config.homedir, 'modules')\n if not os.path.isdir(home_modules_dir):\n os.makedirs(home_modules_dir)\n _update_modules_from_dir(modules, home_modules_dir)\n\n # Last, look at all the extra directories.\n for directory in config.core.extra:\n _update_modules_from_dir(modules, directory)\n\n # Coretasks is special. No custom user coretasks.\n ct_path = os.path.join(main_dir, 'coretasks.py')\n modules['coretasks'] = (ct_path, imp.PY_SOURCE)\n\n # If caller wants all of them, don't apply white and blacklists\n if show_all:\n return modules\n\n # Apply whitelist, if present\n enable = config.core.enable\n if enable:\n enabled_modules = {'coretasks': modules['coretasks']}\n for module in enable:\n if module in modules:\n enabled_modules[module] = modules[module]\n modules = enabled_modules\n\n # Apply blacklist, if present\n exclude = config.core.exclude\n for module in exclude:\n if module in modules:\n del modules[module]\n\n return modules\n\n\ndef compile_rule(nick, pattern):\n pattern = pattern.replace('$nickname', nick)\n pattern = pattern.replace('$nick', r'{}[,:]\\s+'.format(nick))\n flags = re.IGNORECASE\n if '\\n' in pattern:\n flags |= re.VERBOSE\n return re.compile(pattern, flags)\n\n\ndef trim_docstring(doc):\n \"\"\"Get the docstring as a series of lines that can be sent\"\"\"\n if not doc:\n return []\n lines = doc.expandtabs().splitlines()\n indent = sys.maxsize\n for line in lines[1:]:\n stripped = line.lstrip()\n if stripped:\n indent = min(indent, len(line) - len(stripped))\n trimmed = [lines[0].strip()]\n if indent < sys.maxsize:\n for line in lines[1:]:\n trimmed.append(line[:].rstrip())\n while trimmed and not trimmed[-1]:\n trimmed.pop()\n while trimmed and not trimmed[0]:\n trimmed.pop(0)\n return trimmed\n\n\ndef clean_callable(func, config):\n \"\"\"Compiles the regexes, moves commands into func.rule, fixes up docs and\n puts them in func._docs, and sets defaults\"\"\"\n nick = config.core.nick\n prefix = config.core.prefix\n help_prefix = config.core.prefix\n func._docs = {}\n doc = trim_docstring(func.__doc__)\n example = None\n\n func.unblockable = getattr(func, 'unblockable', True)\n func.priority = getattr(func, 'priority', 'medium')\n func.thread = getattr(func, 'thread', True)\n func.rate = getattr(func, 'rate', 0)\n\n if not hasattr(func, 'event'):\n func.event = ['PRIVMSG']\n else:\n if isinstance(func.event, basestring):\n func.event = [func.event.upper()]\n else:\n func.event = [event.upper() for event in func.event]\n\n if hasattr(func, 'rule'):\n if isinstance(func.rule, basestring):\n func.rule = [func.rule]\n 
func.rule = [compile_rule(nick, rule) for rule in func.rule]\n\n if hasattr(func, 'commands'):\n func.rule = getattr(func, 'rule', [])\n for command in func.commands:\n regexp = get_command_regexp(prefix, command)\n func.rule.append(regexp)\n if hasattr(func, 'example'):\n example = func.example[0][\"example\"]\n example = example.replace('$nickname', nick)\n if example[0] != help_prefix:\n example = help_prefix + example[len(help_prefix):]\n if doc or example:\n for command in func.commands:\n func._docs[command] = (doc, example)\n\n\ndef load_module(name, path, type_):\n \"\"\"Load a module, and sort out the callables and shutdowns\"\"\"\n if type_ == imp.PY_SOURCE:\n with open(path) as mod:\n module = imp.load_module(name, mod, path, ('.py', 'U', type_))\n elif type_ == imp.PKG_DIRECTORY:\n module = imp.load_module(name, None, path, ('', '', type_))\n else:\n raise TypeError('Unsupported module type')\n return module, os.path.getmtime(path)\n\n\ndef is_triggerable(obj):\n return any(hasattr(obj, attr) for attr in ('rule', 'rule', 'intent',\n 'commands'))\n\n\ndef clean_module(module, config):\n callables = []\n shutdowns = []\n jobs = []\n for obj in itervalues(vars(module)):\n if callable(obj):\n if getattr(obj, '__name__', None) == 'shutdown':\n shutdowns.append(obj)\n elif is_triggerable(obj):\n clean_callable(obj, config)\n callables.append(obj)\n elif hasattr(obj, 'interval'):\n clean_callable(obj, config)\n jobs.append(obj)\n return callables, jobs, shutdowns\n", "path": "sopel/loader.py"}]} | 2,955 | 112 |
gh_patches_debug_10213 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-3111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clicking twice on the 404 sustainability link doesn't work
## Steps to reproduce it
1. Go to https://readthedocs.org/humitos
2. You will see a 404 page with a sustainability link (https://readthedocs.org/sustainability/click/90/EdfO7Jed1YQr/)
3. Click on it
4. It redirects to the Sentry home page (the promo's target link)
5. Go back and click it again
## Expected Result
Go to Sentry again.
## Actual Result
You get **a new** 404 page with a new link :)
</issue>
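For orientation, the behaviour the report asks for is simply that a repeated click on the tracking URL still redirects to the promo target instead of raising `Http404` (the 404 page is what injects a fresh promo link each time). Below is a minimal sketch of that flow, matching the direction of the reference diff later in this record; names mirror `readthedocs/donate/views.py` shown below, and this is an illustration, not the project's actual code:
```python
# Sketch only: always fall through to the redirect, even on a duplicate click.
import logging

from django.core.cache import cache
from django.shortcuts import get_object_or_404, redirect

from .models import SupporterPromo
from .constants import CLICKS

log = logging.getLogger(__name__)


def click_proxy(request, promo_id, hash):
    promo = get_object_or_404(SupporterPromo, pk=promo_id)
    count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)
    if count is None:
        log.warning('Old or nonexistent hash tried on Click.')
    elif count == 0:
        promo.incr(CLICKS)
        cache.incr(promo.cache_key(type=CLICKS, hash=hash))
    else:
        log.warning('Duplicate click logged.')
        cache.incr(promo.cache_key(type=CLICKS, hash=hash))
    # No Http404 here: raising is what produced the "new 404 page with a new link".
    return redirect(promo.link)
```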
<code>
[start of readthedocs/donate/views.py]
1 """Donation views"""
2 # We use 'hash' heavily in the API here.
3 # pylint: disable=redefined-builtin
4
5 from __future__ import absolute_import
6 import logging
7
8 from django.views.generic import TemplateView
9 from django.core.urlresolvers import reverse
10 from django.utils.translation import ugettext_lazy as _
11 from django.shortcuts import redirect, get_object_or_404, render_to_response
12 from django.template import RequestContext
13 from django.core.cache import cache
14 from django.http import Http404
15
16 from vanilla import CreateView, ListView
17
18 from readthedocs.donate.utils import offer_promo
19 from readthedocs.payments.mixins import StripeMixin
20 from readthedocs.projects.models import Project
21 from readthedocs.redirects.utils import get_redirect_response
22
23 from .models import Supporter, SupporterPromo
24 from .constants import CLICKS, VIEWS
25 from .forms import SupporterForm, EthicalAdForm
26 from .mixins import DonateProgressMixin
27
28 log = logging.getLogger(__name__)
29
30
31 class PayAdsView(StripeMixin, CreateView):
32
33 """Create a payment locally and in Stripe"""
34
35 form_class = EthicalAdForm
36 success_message = _('Your payment has been received')
37 template_name = 'donate/ethicalads.html'
38
39 def get_success_url(self):
40 return reverse('pay_success')
41
42
43 class PaySuccess(TemplateView):
44 template_name = 'donate/ethicalads-success.html'
45
46
47 class DonateCreateView(StripeMixin, CreateView):
48
49 """Create a donation locally and in Stripe"""
50
51 form_class = SupporterForm
52 success_message = _('Your contribution has been received')
53 template_name = 'donate/create.html'
54
55 def get_success_url(self):
56 return reverse('donate_success')
57
58 def get_initial(self):
59 return {'dollars': self.request.GET.get('dollars', 50)}
60
61 def get_form(self, data=None, files=None, **kwargs):
62 kwargs['user'] = self.request.user
63 return super(DonateCreateView, self).get_form(data, files, **kwargs)
64
65
66 class DonateSuccessView(TemplateView):
67 template_name = 'donate/success.html'
68
69
70 class DonateListView(DonateProgressMixin, ListView):
71
72 """Donation list and detail view"""
73
74 template_name = 'donate/list.html'
75 model = Supporter
76 context_object_name = 'supporters'
77
78 def get_queryset(self):
79 return (Supporter.objects
80 .filter(public=True)
81 .order_by('-dollars', '-pub_date'))
82
83 def get_template_names(self):
84 return [self.template_name]
85
86
87 class PromoDetailView(TemplateView):
88 template_name = 'donate/promo_detail.html'
89
90 def get_context_data(self, **kwargs):
91 promo_slug = kwargs['promo_slug']
92 days = int(self.request.GET.get('days', 90))
93
94 if promo_slug == 'live' and self.request.user.is_staff:
95 promos = SupporterPromo.objects.filter(live=True)
96 elif promo_slug[-1] == '*' and '-' in promo_slug:
97 promos = SupporterPromo.objects.filter(
98 analytics_id__contains=promo_slug.replace('*', '')
99 )
100 else:
101 slugs = promo_slug.split(',')
102 promos = SupporterPromo.objects.filter(analytics_id__in=slugs)
103
104 total_clicks = sum(promo.total_clicks() for promo in promos)
105
106 return {
107 'promos': promos,
108 'total_clicks': total_clicks,
109 'days': days,
110 'days_slice': ':%s' % days,
111 }
112
113
114 def click_proxy(request, promo_id, hash):
115 """Track a click on a promotion and redirect to the link."""
116 promo = get_object_or_404(SupporterPromo, pk=promo_id)
117 count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)
118 if count is None:
119 log.warning('Old or nonexistent hash tried on Click.')
120 elif count == 0:
121 promo.incr(CLICKS)
122 cache.incr(promo.cache_key(type=CLICKS, hash=hash))
123 project_slug = cache.get(
124 promo.cache_key(type='project', hash=hash),
125 None
126 )
127 if project_slug:
128 project = Project.objects.get(slug=project_slug)
129 promo.incr(CLICKS, project=project)
130 else:
131 agent = request.META.get('HTTP_USER_AGENT', 'Unknown')
132 log.warning(
133 'Duplicate click logged. {count} total clicks tried. User Agent: [{agent}]'.format(
134 count=count, agent=agent
135 )
136 )
137 cache.incr(promo.cache_key(type=CLICKS, hash=hash))
138 raise Http404('Invalid click. This has been logged.')
139 return redirect(promo.link)
140
141
142 def view_proxy(request, promo_id, hash):
143 """Track a view of a promotion and redirect to the image."""
144 promo = get_object_or_404(SupporterPromo, pk=promo_id)
145 if not promo.image:
146 raise Http404('No image defined for this promo.')
147 count = cache.get(promo.cache_key(type=VIEWS, hash=hash), None)
148 if count is None:
149 log.warning('Old or nonexistent hash tried on View.')
150 elif count == 0:
151 promo.incr(VIEWS)
152 cache.incr(promo.cache_key(type=VIEWS, hash=hash))
153 project_slug = cache.get(
154 promo.cache_key(type='project', hash=hash),
155 None
156 )
157 if project_slug:
158 project = Project.objects.get(slug=project_slug)
159 promo.incr(VIEWS, project=project)
160 else:
161 agent = request.META.get('HTTP_USER_AGENT', 'Unknown')
162 log.warning(
163 'Duplicate view logged. {count} total views tried. User Agent: [{agent}]'.format(
164 count=count, agent=agent
165 )
166 )
167 cache.incr(promo.cache_key(type=VIEWS, hash=hash))
168 raise Http404('Invalid click. This has been logged.')
169 return redirect(promo.image)
170
171
172 def _add_promo_data(display_type):
173 promo_queryset = SupporterPromo.objects.filter(live=True, display_type=display_type)
174 promo_obj = promo_queryset.order_by('?').first()
175 if promo_obj:
176 promo_dict = offer_promo(promo_obj=promo_obj, project=None)
177 else:
178 promo_dict = None
179 return promo_dict
180
181
182 def promo_500(request, template_name='donate/promo_500.html', **__):
183 """A simple 500 handler so we get media"""
184 promo_dict = _add_promo_data(display_type='error')
185 r = render_to_response(template_name,
186 context_instance=RequestContext(request),
187 context={
188 'promo_data': promo_dict,
189 })
190 r.status_code = 500
191 return r
192
193
194 def promo_404(request, template_name='donate/promo_404.html', **__):
195 """A simple 404 handler so we get media"""
196 promo_dict = _add_promo_data(display_type='error')
197 response = get_redirect_response(request, path=request.get_full_path())
198 if response:
199 return response
200 r = render_to_response(template_name,
201 context_instance=RequestContext(request),
202 context={
203 'promo_data': promo_dict,
204 })
205 r.status_code = 404
206 return r
207
[end of readthedocs/donate/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/donate/views.py b/readthedocs/donate/views.py
--- a/readthedocs/donate/views.py
+++ b/readthedocs/donate/views.py
@@ -135,7 +135,6 @@
)
)
cache.incr(promo.cache_key(type=CLICKS, hash=hash))
- raise Http404('Invalid click. This has been logged.')
return redirect(promo.link)
@@ -165,7 +164,6 @@
)
)
cache.incr(promo.cache_key(type=VIEWS, hash=hash))
- raise Http404('Invalid click. This has been logged.')
return redirect(promo.image)
| {"golden_diff": "diff --git a/readthedocs/donate/views.py b/readthedocs/donate/views.py\n--- a/readthedocs/donate/views.py\n+++ b/readthedocs/donate/views.py\n@@ -135,7 +135,6 @@\n )\n )\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n- raise Http404('Invalid click. This has been logged.')\n return redirect(promo.link)\n \n \n@@ -165,7 +164,6 @@\n )\n )\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n- raise Http404('Invalid click. This has been logged.')\n return redirect(promo.image)\n", "issue": "Click twice in the 404 sustainability link doesn't work\n## Steps to reproduce it\r\n\r\n1. Go to https://readthedocs.org/humitos\r\n2. You will see a 404 page with a sustainability link (https://readthedocs.org/sustainability/click/90/EdfO7Jed1YQr/)\r\n3. Click on it\r\n4. It goes to Sentry home page\r\n5. Go back and click it again\r\n\r\n## Expected Result\r\n\r\nGo to Sentry again.\r\n\r\n## Actual Result\r\n\r\nYou get **a new** 404 page with a new link :)\r\n\n", "before_files": [{"content": "\"\"\"Donation views\"\"\"\n# We use 'hash' heavily in the API here.\n# pylint: disable=redefined-builtin\n\nfrom __future__ import absolute_import\nimport logging\n\nfrom django.views.generic import TemplateView\nfrom django.core.urlresolvers import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import redirect, get_object_or_404, render_to_response\nfrom django.template import RequestContext\nfrom django.core.cache import cache\nfrom django.http import Http404\n\nfrom vanilla import CreateView, ListView\n\nfrom readthedocs.donate.utils import offer_promo\nfrom readthedocs.payments.mixins import StripeMixin\nfrom readthedocs.projects.models import Project\nfrom readthedocs.redirects.utils import get_redirect_response\n\nfrom .models import Supporter, SupporterPromo\nfrom .constants import CLICKS, VIEWS\nfrom .forms import SupporterForm, EthicalAdForm\nfrom .mixins import DonateProgressMixin\n\nlog = logging.getLogger(__name__)\n\n\nclass PayAdsView(StripeMixin, CreateView):\n\n \"\"\"Create a payment locally and in Stripe\"\"\"\n\n form_class = EthicalAdForm\n success_message = _('Your payment has been received')\n template_name = 'donate/ethicalads.html'\n\n def get_success_url(self):\n return reverse('pay_success')\n\n\nclass PaySuccess(TemplateView):\n template_name = 'donate/ethicalads-success.html'\n\n\nclass DonateCreateView(StripeMixin, CreateView):\n\n \"\"\"Create a donation locally and in Stripe\"\"\"\n\n form_class = SupporterForm\n success_message = _('Your contribution has been received')\n template_name = 'donate/create.html'\n\n def get_success_url(self):\n return reverse('donate_success')\n\n def get_initial(self):\n return {'dollars': self.request.GET.get('dollars', 50)}\n\n def get_form(self, data=None, files=None, **kwargs):\n kwargs['user'] = self.request.user\n return super(DonateCreateView, self).get_form(data, files, **kwargs)\n\n\nclass DonateSuccessView(TemplateView):\n template_name = 'donate/success.html'\n\n\nclass DonateListView(DonateProgressMixin, ListView):\n\n \"\"\"Donation list and detail view\"\"\"\n\n template_name = 'donate/list.html'\n model = Supporter\n context_object_name = 'supporters'\n\n def get_queryset(self):\n return (Supporter.objects\n .filter(public=True)\n .order_by('-dollars', '-pub_date'))\n\n def get_template_names(self):\n return [self.template_name]\n\n\nclass PromoDetailView(TemplateView):\n template_name = 'donate/promo_detail.html'\n\n def get_context_data(self, **kwargs):\n promo_slug 
= kwargs['promo_slug']\n days = int(self.request.GET.get('days', 90))\n\n if promo_slug == 'live' and self.request.user.is_staff:\n promos = SupporterPromo.objects.filter(live=True)\n elif promo_slug[-1] == '*' and '-' in promo_slug:\n promos = SupporterPromo.objects.filter(\n analytics_id__contains=promo_slug.replace('*', '')\n )\n else:\n slugs = promo_slug.split(',')\n promos = SupporterPromo.objects.filter(analytics_id__in=slugs)\n\n total_clicks = sum(promo.total_clicks() for promo in promos)\n\n return {\n 'promos': promos,\n 'total_clicks': total_clicks,\n 'days': days,\n 'days_slice': ':%s' % days,\n }\n\n\ndef click_proxy(request, promo_id, hash):\n \"\"\"Track a click on a promotion and redirect to the link.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n count = cache.get(promo.cache_key(type=CLICKS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on Click.')\n elif count == 0:\n promo.incr(CLICKS)\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(CLICKS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate click logged. {count} total clicks tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=CLICKS, hash=hash))\n raise Http404('Invalid click. This has been logged.')\n return redirect(promo.link)\n\n\ndef view_proxy(request, promo_id, hash):\n \"\"\"Track a view of a promotion and redirect to the image.\"\"\"\n promo = get_object_or_404(SupporterPromo, pk=promo_id)\n if not promo.image:\n raise Http404('No image defined for this promo.')\n count = cache.get(promo.cache_key(type=VIEWS, hash=hash), None)\n if count is None:\n log.warning('Old or nonexistent hash tried on View.')\n elif count == 0:\n promo.incr(VIEWS)\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n project_slug = cache.get(\n promo.cache_key(type='project', hash=hash),\n None\n )\n if project_slug:\n project = Project.objects.get(slug=project_slug)\n promo.incr(VIEWS, project=project)\n else:\n agent = request.META.get('HTTP_USER_AGENT', 'Unknown')\n log.warning(\n 'Duplicate view logged. {count} total views tried. User Agent: [{agent}]'.format(\n count=count, agent=agent\n )\n )\n cache.incr(promo.cache_key(type=VIEWS, hash=hash))\n raise Http404('Invalid click. 
This has been logged.')\n return redirect(promo.image)\n\n\ndef _add_promo_data(display_type):\n promo_queryset = SupporterPromo.objects.filter(live=True, display_type=display_type)\n promo_obj = promo_queryset.order_by('?').first()\n if promo_obj:\n promo_dict = offer_promo(promo_obj=promo_obj, project=None)\n else:\n promo_dict = None\n return promo_dict\n\n\ndef promo_500(request, template_name='donate/promo_500.html', **__):\n \"\"\"A simple 500 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 500\n return r\n\n\ndef promo_404(request, template_name='donate/promo_404.html', **__):\n \"\"\"A simple 404 handler so we get media\"\"\"\n promo_dict = _add_promo_data(display_type='error')\n response = get_redirect_response(request, path=request.get_full_path())\n if response:\n return response\n r = render_to_response(template_name,\n context_instance=RequestContext(request),\n context={\n 'promo_data': promo_dict,\n })\n r.status_code = 404\n return r\n", "path": "readthedocs/donate/views.py"}]} | 2,811 | 161 |
gh_patches_debug_10832 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-309 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
unexpected 'str' during Arrow test
Testing on Windows with Python 3.4.4 / Qt 5.5.1 and the pyqtgraph GitHub snapshot of 2016-01-02, I get the following error in the "Arrow" test:
```
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Using PyQt5 (default graphics system)
Traceback (most recent call last):
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\examples\Arrow.py", line 50, in <module>
anim = a.makeAnimation(loop=-1)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\graphicsItems\CurvePoint.py", line 94, in makeAnima
tion
anim = QtCore.QPropertyAnimation(self, prop)
TypeError: arguments did not match any overloaded call:
QPropertyAnimation(QObject parent=None): too many arguments
QPropertyAnimation(QObject, QByteArray, QObject parent=None): argument 2 has u
nexpected type 'str'
```
</issue>
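The overload error comes from PyQt 5.5 and later no longer auto-encoding a Python `str` where a `QByteArray` is expected, so under Python 3 the property name handed to `QPropertyAnimation` has to be `bytes`. A minimal sketch of the workaround follows, assuming PyQt5 and a `QObject` target that actually exposes the named property; it is illustrative only and matches the direction of the reference diff later in this record:
```python
# Sketch only: encode the property name before constructing the animation.
from PyQt5 import QtCore


def make_animation(target, prop='position', start=0.0, end=1.0,
                   duration=10000, loop=1):
    if not isinstance(prop, bytes):
        prop = prop.encode('latin-1')  # QByteArray-compatible under PyQt5 >= 5.5
    anim = QtCore.QPropertyAnimation(target, prop)
    anim.setDuration(duration)
    anim.setStartValue(start)
    anim.setEndValue(end)
    anim.setLoopCount(loop)
    return anim
```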
<code>
[start of pyqtgraph/graphicsItems/CurvePoint.py]
1 from ..Qt import QtGui, QtCore
2 from . import ArrowItem
3 import numpy as np
4 from ..Point import Point
5 import weakref
6 from .GraphicsObject import GraphicsObject
7
8 __all__ = ['CurvePoint', 'CurveArrow']
9 class CurvePoint(GraphicsObject):
10 """A GraphicsItem that sets its location to a point on a PlotCurveItem.
11 Also rotates to be tangent to the curve.
12 The position along the curve is a Qt property, and thus can be easily animated.
13
14 Note: This class does not display anything; see CurveArrow for an applied example
15 """
16
17 def __init__(self, curve, index=0, pos=None, rotate=True):
18 """Position can be set either as an index referring to the sample number or
19 the position 0.0 - 1.0
20 If *rotate* is True, then the item rotates to match the tangent of the curve.
21 """
22
23 GraphicsObject.__init__(self)
24 #QObjectWorkaround.__init__(self)
25 self._rotate = rotate
26 self.curve = weakref.ref(curve)
27 self.setParentItem(curve)
28 self.setProperty('position', 0.0)
29 self.setProperty('index', 0)
30
31 if hasattr(self, 'ItemHasNoContents'):
32 self.setFlags(self.flags() | self.ItemHasNoContents)
33
34 if pos is not None:
35 self.setPos(pos)
36 else:
37 self.setIndex(index)
38
39 def setPos(self, pos):
40 self.setProperty('position', float(pos))## cannot use numpy types here, MUST be python float.
41
42 def setIndex(self, index):
43 self.setProperty('index', int(index)) ## cannot use numpy types here, MUST be python int.
44
45 def event(self, ev):
46 if not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) or self.curve() is None:
47 return False
48
49 if ev.propertyName() == 'index':
50 index = self.property('index')
51 if 'QVariant' in repr(index):
52 index = index.toInt()[0]
53 elif ev.propertyName() == 'position':
54 index = None
55 else:
56 return False
57
58 (x, y) = self.curve().getData()
59 if index is None:
60 #print ev.propertyName(), self.property('position').toDouble()[0], self.property('position').typeName()
61 pos = self.property('position')
62 if 'QVariant' in repr(pos): ## need to support 2 APIs :(
63 pos = pos.toDouble()[0]
64 index = (len(x)-1) * np.clip(pos, 0.0, 1.0)
65
66 if index != int(index): ## interpolate floating-point values
67 i1 = int(index)
68 i2 = np.clip(i1+1, 0, len(x)-1)
69 s2 = index-i1
70 s1 = 1.0-s2
71 newPos = (x[i1]*s1+x[i2]*s2, y[i1]*s1+y[i2]*s2)
72 else:
73 index = int(index)
74 i1 = np.clip(index-1, 0, len(x)-1)
75 i2 = np.clip(index+1, 0, len(x)-1)
76 newPos = (x[index], y[index])
77
78 p1 = self.parentItem().mapToScene(QtCore.QPointF(x[i1], y[i1]))
79 p2 = self.parentItem().mapToScene(QtCore.QPointF(x[i2], y[i2]))
80 ang = np.arctan2(p2.y()-p1.y(), p2.x()-p1.x()) ## returns radians
81 self.resetTransform()
82 if self._rotate:
83 self.rotate(180+ ang * 180 / np.pi) ## takes degrees
84 QtGui.QGraphicsItem.setPos(self, *newPos)
85 return True
86
87 def boundingRect(self):
88 return QtCore.QRectF()
89
90 def paint(self, *args):
91 pass
92
93 def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):
94 anim = QtCore.QPropertyAnimation(self, prop)
95 anim.setDuration(duration)
96 anim.setStartValue(start)
97 anim.setEndValue(end)
98 anim.setLoopCount(loop)
99 return anim
100
101
102 class CurveArrow(CurvePoint):
103 """Provides an arrow that points to any specific sample on a PlotCurveItem.
104 Provides properties that can be animated."""
105
106 def __init__(self, curve, index=0, pos=None, **opts):
107 CurvePoint.__init__(self, curve, index=index, pos=pos)
108 if opts.get('pxMode', True):
109 opts['pxMode'] = False
110 self.setFlags(self.flags() | self.ItemIgnoresTransformations)
111 opts['angle'] = 0
112 self.arrow = ArrowItem.ArrowItem(**opts)
113 self.arrow.setParentItem(self)
114
115 def setStyle(self, **opts):
116 return self.arrow.setStyle(**opts)
117
118
[end of pyqtgraph/graphicsItems/CurvePoint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/graphicsItems/CurvePoint.py b/pyqtgraph/graphicsItems/CurvePoint.py
--- a/pyqtgraph/graphicsItems/CurvePoint.py
+++ b/pyqtgraph/graphicsItems/CurvePoint.py
@@ -91,6 +91,11 @@
pass
def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):
+ # In Python 3, a bytes object needs to be used as a property name in
+ # QPropertyAnimation. PyQt stopped automatically encoding a str when a
+ # QByteArray was expected in v5.5 (see qbytearray.sip).
+ if not isinstance(prop, bytes):
+ prop = prop.encode('latin-1')
anim = QtCore.QPropertyAnimation(self, prop)
anim.setDuration(duration)
anim.setStartValue(start)
| {"golden_diff": "diff --git a/pyqtgraph/graphicsItems/CurvePoint.py b/pyqtgraph/graphicsItems/CurvePoint.py\n--- a/pyqtgraph/graphicsItems/CurvePoint.py\n+++ b/pyqtgraph/graphicsItems/CurvePoint.py\n@@ -91,6 +91,11 @@\n pass\n \n def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):\n+ # In Python 3, a bytes object needs to be used as a property name in\n+ # QPropertyAnimation. PyQt stopped automatically encoding a str when a\n+ # QByteArray was expected in v5.5 (see qbytearray.sip).\n+ if not isinstance(prop, bytes):\n+ prop = prop.encode('latin-1')\n anim = QtCore.QPropertyAnimation(self, prop)\n anim.setDuration(duration)\n anim.setStartValue(start)\n", "issue": "unexpected 'str' during Arrow test\ntesting on Windows Python 3.4.4/Qt5.5.1/ PyQtgraph github of 20160102, I have thefollowing error on \"Arrow\" test:\n\n```\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nUsing PyQt5 (default graphics system)\nTraceback (most recent call last):\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\examples\\Arrow.py\", line 50, in <module>\n anim = a.makeAnimation(loop=-1)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\graphicsItems\\CurvePoint.py\", line 94, in makeAnima\ntion\n anim = QtCore.QPropertyAnimation(self, prop)\nTypeError: arguments did not match any overloaded call:\n QPropertyAnimation(QObject parent=None): too many arguments\n QPropertyAnimation(QObject, QByteArray, QObject parent=None): argument 2 has u\nnexpected type 'str'\n\n```\n\n", "before_files": [{"content": "from ..Qt import QtGui, QtCore\nfrom . 
import ArrowItem\nimport numpy as np\nfrom ..Point import Point\nimport weakref\nfrom .GraphicsObject import GraphicsObject\n\n__all__ = ['CurvePoint', 'CurveArrow']\nclass CurvePoint(GraphicsObject):\n \"\"\"A GraphicsItem that sets its location to a point on a PlotCurveItem.\n Also rotates to be tangent to the curve.\n The position along the curve is a Qt property, and thus can be easily animated.\n \n Note: This class does not display anything; see CurveArrow for an applied example\n \"\"\"\n \n def __init__(self, curve, index=0, pos=None, rotate=True):\n \"\"\"Position can be set either as an index referring to the sample number or\n the position 0.0 - 1.0\n If *rotate* is True, then the item rotates to match the tangent of the curve.\n \"\"\"\n \n GraphicsObject.__init__(self)\n #QObjectWorkaround.__init__(self)\n self._rotate = rotate\n self.curve = weakref.ref(curve)\n self.setParentItem(curve)\n self.setProperty('position', 0.0)\n self.setProperty('index', 0)\n \n if hasattr(self, 'ItemHasNoContents'):\n self.setFlags(self.flags() | self.ItemHasNoContents)\n \n if pos is not None:\n self.setPos(pos)\n else:\n self.setIndex(index)\n \n def setPos(self, pos):\n self.setProperty('position', float(pos))## cannot use numpy types here, MUST be python float.\n \n def setIndex(self, index):\n self.setProperty('index', int(index)) ## cannot use numpy types here, MUST be python int.\n \n def event(self, ev):\n if not isinstance(ev, QtCore.QDynamicPropertyChangeEvent) or self.curve() is None:\n return False\n \n if ev.propertyName() == 'index':\n index = self.property('index')\n if 'QVariant' in repr(index):\n index = index.toInt()[0]\n elif ev.propertyName() == 'position':\n index = None\n else:\n return False\n \n (x, y) = self.curve().getData()\n if index is None:\n #print ev.propertyName(), self.property('position').toDouble()[0], self.property('position').typeName()\n pos = self.property('position')\n if 'QVariant' in repr(pos): ## need to support 2 APIs :(\n pos = pos.toDouble()[0]\n index = (len(x)-1) * np.clip(pos, 0.0, 1.0)\n \n if index != int(index): ## interpolate floating-point values\n i1 = int(index)\n i2 = np.clip(i1+1, 0, len(x)-1)\n s2 = index-i1\n s1 = 1.0-s2\n newPos = (x[i1]*s1+x[i2]*s2, y[i1]*s1+y[i2]*s2)\n else:\n index = int(index)\n i1 = np.clip(index-1, 0, len(x)-1)\n i2 = np.clip(index+1, 0, len(x)-1)\n newPos = (x[index], y[index])\n \n p1 = self.parentItem().mapToScene(QtCore.QPointF(x[i1], y[i1]))\n p2 = self.parentItem().mapToScene(QtCore.QPointF(x[i2], y[i2]))\n ang = np.arctan2(p2.y()-p1.y(), p2.x()-p1.x()) ## returns radians\n self.resetTransform()\n if self._rotate:\n self.rotate(180+ ang * 180 / np.pi) ## takes degrees\n QtGui.QGraphicsItem.setPos(self, *newPos)\n return True\n \n def boundingRect(self):\n return QtCore.QRectF()\n \n def paint(self, *args):\n pass\n \n def makeAnimation(self, prop='position', start=0.0, end=1.0, duration=10000, loop=1):\n anim = QtCore.QPropertyAnimation(self, prop)\n anim.setDuration(duration)\n anim.setStartValue(start)\n anim.setEndValue(end)\n anim.setLoopCount(loop)\n return anim\n\n\nclass CurveArrow(CurvePoint):\n \"\"\"Provides an arrow that points to any specific sample on a PlotCurveItem.\n Provides properties that can be animated.\"\"\"\n \n def __init__(self, curve, index=0, pos=None, **opts):\n CurvePoint.__init__(self, curve, index=index, pos=pos)\n if opts.get('pxMode', True):\n opts['pxMode'] = False\n self.setFlags(self.flags() | self.ItemIgnoresTransformations)\n opts['angle'] = 0\n self.arrow = 
ArrowItem.ArrowItem(**opts)\n self.arrow.setParentItem(self)\n \n def setStyle(self, **opts):\n return self.arrow.setStyle(**opts)\n \n", "path": "pyqtgraph/graphicsItems/CurvePoint.py"}]} | 2,175 | 202 |
gh_patches_debug_36818 | rasdani/github-patches | git_diff | feast-dev__feast-636 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No exception when connecting to Kafka fails
Hey guys,
I have installed Feast (0.4.4) via Helm 3 on GKE.
The basic example works until the ingestion (online serving) part.
There I get:
Waiting for feature set to be ready for ingestion...
0%| | 0/15 [00:00<?, ?rows/s]
but there is no progress.
When I look at GCP's BigQuery interface I can see that the project "customer_project" is created with the correct columns in "customer_transactions".
But there is definitely no data.
get_feature_set gives me:
{
"spec": {
"name": "customer_transactions",
"version": 1,
"entities": [
{
"name": "customer_id",
"valueType": "INT64"
}
],
"features": [
{
"name": "daily_transactions",
"valueType": "DOUBLE"
},
{
"name": "total_transactions",
"valueType": "INT64"
}
],
"maxAge": "432000s",
"source": {
"type": "KAFKA",
"kafkaSourceConfig": {
"bootstrapServers": "feast-kafka:9092",
"topic": "feast"
}
},
"project": "customer_project_1"
},
"meta": {
"createdTimestamp": "2020-04-15T10:26:51Z",
"status": "STATUS_READY"
}
}
I had to modify some of the port/service setup in the chart, so it may be that some Feast components have connection issues with Kafka, etc.
But there are no errors in the logs of the core and serving pods.
What could the problem be, and what is a good way to debug it?
</issue>
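The silent hang happens because delivery failures in the ingestion producer are only counted, never raised, so the progress bar just sits at 0%. A minimal sketch of the fail-fast behaviour this record converges on follows -- raise from the error callback and after `flush()` when messages remain undelivered. It is illustrative only; `self.producer` stands for the wrapped confluent-kafka or kafka-python producer:
```python
class FailFastProducerMixin:
    """Illustrative sketch: surface Kafka delivery problems instead of hiding them."""

    def _set_error(self, exception: str) -> None:
        # Previously this only incremented an error counter, so ingestion hung silently.
        raise Exception(exception)

    def flush(self, timeout: int):
        remaining = self.producer.flush(timeout=timeout)
        if remaining:
            raise Exception("Not all Kafka messages are successfully delivered.")
        return remaining
```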
<code>
[start of sdk/python/feast/loaders/abstract_producer.py]
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, Union
16
17 from tqdm import tqdm
18
19
20 class AbstractProducer:
21 """
22 Abstract class for Kafka producers
23 """
24
25 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
26 self.brokers = brokers
27 self.row_count = row_count
28 self.error_count = 0
29 self.last_exception = ""
30
31 # Progress bar will always display average rate
32 self.pbar = tqdm(
33 total=row_count, unit="rows", smoothing=0, disable=disable_progress_bar
34 )
35
36 def produce(self, topic: str, data: bytes):
37 message = "{} should implement a produce method".format(self.__class__.__name__)
38 raise NotImplementedError(message)
39
40 def flush(self, timeout: int):
41 message = "{} should implement a flush method".format(self.__class__.__name__)
42 raise NotImplementedError(message)
43
44 def _inc_pbar(self, meta):
45 self.pbar.update(1)
46
47 def _set_error(self, exception: str):
48 self.error_count += 1
49 self.last_exception = exception
50
51 def print_results(self) -> None:
52 """
53 Print ingestion statistics.
54
55 Returns:
56 None: None
57 """
58 # Refresh and close tqdm progress bar
59 self.pbar.refresh()
60
61 self.pbar.close()
62
63 print("Ingestion complete!")
64
65 failed_message = (
66 ""
67 if self.error_count == 0
68 else f"\nFail: {self.error_count / self.row_count}"
69 )
70
71 last_exception_message = (
72 ""
73 if self.last_exception == ""
74 else f"\nLast exception:\n{self.last_exception}"
75 )
76
77 print(
78 f"\nIngestion statistics:"
79 f"\nSuccess: {self.pbar.n}/{self.row_count}"
80 f"{failed_message}"
81 f"{last_exception_message}"
82 )
83 return None
84
85
86 class ConfluentProducer(AbstractProducer):
87 """
88 Concrete implementation of Confluent Kafka producer (confluent-kafka)
89 """
90
91 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
92 from confluent_kafka import Producer
93
94 self.producer = Producer({"bootstrap.servers": brokers})
95 super().__init__(brokers, row_count, disable_progress_bar)
96
97 def produce(self, topic: str, value: bytes) -> None:
98 """
99 Generic produce that implements confluent-kafka's produce method to
100 push a byte encoded object into a Kafka topic.
101
102 Args:
103 topic (str): Kafka topic.
104 value (bytes): Byte encoded object.
105
106 Returns:
107 None: None.
108 """
109
110 try:
111 self.producer.produce(topic, value=value, callback=self._delivery_callback)
112 # Serve delivery callback queue.
113 # NOTE: Since produce() is an asynchronous API this poll() call
114 # will most likely not serve the delivery callback for the
115 # last produce()d message.
116 self.producer.poll(0)
117 except Exception as ex:
118 self._set_error(str(ex))
119
120 return None
121
122 def flush(self, timeout: Optional[int]):
123 """
124 Generic flush that implements confluent-kafka's flush method.
125
126 Args:
127 timeout (Optional[int]): Timeout in seconds to wait for completion.
128
129 Returns:
130 int: Number of messages still in queue.
131 """
132 return self.producer.flush(timeout=timeout)
133
134 def _delivery_callback(self, err: str, msg) -> None:
135 """
136 Optional per-message delivery callback (triggered by poll() or flush())
137 when a message has been successfully delivered or permanently failed
138 delivery (after retries).
139
140 Although the msg argument is not used, the current method signature is
141 required as specified in the confluent-kafka documentation.
142
143 Args:
144 err (str): Error message.
145 msg (): Kafka message.
146
147 Returns:
148 None
149 """
150 if err:
151 self._set_error(err)
152 else:
153 self._inc_pbar(None)
154
155
156 class KafkaPythonProducer(AbstractProducer):
157 """
158 Concrete implementation of Python Kafka producer (kafka-python)
159 """
160
161 def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
162 from kafka import KafkaProducer
163
164 self.producer = KafkaProducer(bootstrap_servers=[brokers])
165 super().__init__(brokers, row_count, disable_progress_bar)
166
167 def produce(self, topic: str, value: bytes):
168 """
169 Generic produce that implements kafka-python's send method to push a
170 byte encoded object into a Kafka topic.
171
172 Args:
173 topic (str): Kafka topic.
174 value (bytes): Byte encoded object.
175
176 Returns:
177 FutureRecordMetadata: resolves to RecordMetadata
178
179 Raises:
180 KafkaTimeoutError: if unable to fetch topic metadata, or unable
181 to obtain memory buffer prior to configured max_block_ms
182 """
183 return (
184 self.producer.send(topic, value=value)
185 .add_callback(self._inc_pbar)
186 .add_errback(self._set_error)
187 )
188
189 def flush(self, timeout: Optional[int]):
190 """
191 Generic flush that implements kafka-python's flush method.
192
193 Args:
194 timeout (Optional[int]): timeout in seconds to wait for completion.
195
196 Returns:
197 None
198
199 Raises:
200 KafkaTimeoutError: failure to flush buffered records within the
201 provided timeout
202 """
203 return self.producer.flush(timeout=timeout)
204
205
206 def get_producer(
207 brokers: str, row_count: int, disable_progress_bar: bool
208 ) -> Union[ConfluentProducer, KafkaPythonProducer]:
209 """
210 Simple context helper function that returns a AbstractProducer object when
211 invoked.
212
213 This helper function will try to import confluent-kafka as a producer first.
214
215 This helper function will fallback to kafka-python if it fails to import
216 confluent-kafka.
217
218 Args:
219 brokers (str): Kafka broker information with hostname and port.
220 row_count (int): Number of rows in table
221
222 Returns:
223 Union[ConfluentProducer, KafkaPythonProducer]:
224 Concrete implementation of a Kafka producer. Ig can be:
225 * confluent-kafka producer
226 * kafka-python producer
227 """
228 try:
229 return ConfluentProducer(brokers, row_count, disable_progress_bar)
230 except ImportError:
231 print("Unable to import confluent-kafka, falling back to kafka-python")
232 return KafkaPythonProducer(brokers, row_count, disable_progress_bar)
233
[end of sdk/python/feast/loaders/abstract_producer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/feast/loaders/abstract_producer.py b/sdk/python/feast/loaders/abstract_producer.py
--- a/sdk/python/feast/loaders/abstract_producer.py
+++ b/sdk/python/feast/loaders/abstract_producer.py
@@ -25,8 +25,6 @@
def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):
self.brokers = brokers
self.row_count = row_count
- self.error_count = 0
- self.last_exception = ""
# Progress bar will always display average rate
self.pbar = tqdm(
@@ -45,8 +43,7 @@
self.pbar.update(1)
def _set_error(self, exception: str):
- self.error_count += 1
- self.last_exception = exception
+ raise Exception(exception)
def print_results(self) -> None:
"""
@@ -62,24 +59,7 @@
print("Ingestion complete!")
- failed_message = (
- ""
- if self.error_count == 0
- else f"\nFail: {self.error_count / self.row_count}"
- )
-
- last_exception_message = (
- ""
- if self.last_exception == ""
- else f"\nLast exception:\n{self.last_exception}"
- )
-
- print(
- f"\nIngestion statistics:"
- f"\nSuccess: {self.pbar.n}/{self.row_count}"
- f"{failed_message}"
- f"{last_exception_message}"
- )
+ print(f"\nIngestion statistics:" f"\nSuccess: {self.pbar.n}/{self.row_count}")
return None
@@ -129,7 +109,10 @@
Returns:
int: Number of messages still in queue.
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def _delivery_callback(self, err: str, msg) -> None:
"""
@@ -200,7 +183,10 @@
KafkaTimeoutError: failure to flush buffered records within the
provided timeout
"""
- return self.producer.flush(timeout=timeout)
+ messages = self.producer.flush(timeout=timeout)
+ if messages:
+ raise Exception("Not all Kafka messages are successfully delivered.")
+ return messages
def get_producer(
| {"golden_diff": "diff --git a/sdk/python/feast/loaders/abstract_producer.py b/sdk/python/feast/loaders/abstract_producer.py\n--- a/sdk/python/feast/loaders/abstract_producer.py\n+++ b/sdk/python/feast/loaders/abstract_producer.py\n@@ -25,8 +25,6 @@\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n self.brokers = brokers\n self.row_count = row_count\n- self.error_count = 0\n- self.last_exception = \"\"\n \n # Progress bar will always display average rate\n self.pbar = tqdm(\n@@ -45,8 +43,7 @@\n self.pbar.update(1)\n \n def _set_error(self, exception: str):\n- self.error_count += 1\n- self.last_exception = exception\n+ raise Exception(exception)\n \n def print_results(self) -> None:\n \"\"\"\n@@ -62,24 +59,7 @@\n \n print(\"Ingestion complete!\")\n \n- failed_message = (\n- \"\"\n- if self.error_count == 0\n- else f\"\\nFail: {self.error_count / self.row_count}\"\n- )\n-\n- last_exception_message = (\n- \"\"\n- if self.last_exception == \"\"\n- else f\"\\nLast exception:\\n{self.last_exception}\"\n- )\n-\n- print(\n- f\"\\nIngestion statistics:\"\n- f\"\\nSuccess: {self.pbar.n}/{self.row_count}\"\n- f\"{failed_message}\"\n- f\"{last_exception_message}\"\n- )\n+ print(f\"\\nIngestion statistics:\" f\"\\nSuccess: {self.pbar.n}/{self.row_count}\")\n return None\n \n \n@@ -129,7 +109,10 @@\n Returns:\n int: Number of messages still in queue.\n \"\"\"\n- return self.producer.flush(timeout=timeout)\n+ messages = self.producer.flush(timeout=timeout)\n+ if messages:\n+ raise Exception(\"Not all Kafka messages are successfully delivered.\")\n+ return messages\n \n def _delivery_callback(self, err: str, msg) -> None:\n \"\"\"\n@@ -200,7 +183,10 @@\n KafkaTimeoutError: failure to flush buffered records within the\n provided timeout\n \"\"\"\n- return self.producer.flush(timeout=timeout)\n+ messages = self.producer.flush(timeout=timeout)\n+ if messages:\n+ raise Exception(\"Not all Kafka messages are successfully delivered.\")\n+ return messages\n \n \n def get_producer(\n", "issue": "No exception when connecting to Kafka fails\nHey guys,\r\n\r\nI have installed feast(0.4.4) via Helm3 on GKE \r\n\r\nThe Basic example is working until the ingestion (online serving) part\r\n\r\nThere I get ->>\r\n\r\nWaiting for feature set to be ready for ingestion...\r\n 0%| | 0/15 [00:00<?, ?rows/s]\r\n\r\nbut there is no progress.\r\n\r\n\r\n\r\nWhen I look on GCPs bigquery interface I can see that the project \"customer_project\" is created with the correct columns in \"customer_transactions\".\r\nBut for sure no data\r\n\r\n\r\n\r\nget_feature_set gives me \r\n\r\n\r\n{\r\n \"spec\": {\r\n \"name\": \"customer_transactions\",\r\n \"version\": 1,\r\n \"entities\": [\r\n {\r\n \"name\": \"customer_id\",\r\n \"valueType\": \"INT64\"\r\n }\r\n ],\r\n \"features\": [\r\n {\r\n \"name\": \"daily_transactions\",\r\n \"valueType\": \"DOUBLE\"\r\n },\r\n {\r\n \"name\": \"total_transactions\",\r\n \"valueType\": \"INT64\"\r\n }\r\n ],\r\n \"maxAge\": \"432000s\",\r\n \"source\": {\r\n \"type\": \"KAFKA\",\r\n \"kafkaSourceConfig\": {\r\n \"bootstrapServers\": \"feast-kafka:9092\",\r\n \"topic\": \"feast\"\r\n }\r\n },\r\n \"project\": \"customer_project_1\"\r\n },\r\n \"meta\": {\r\n \"createdTimestamp\": \"2020-04-15T10:26:51Z\",\r\n \"status\": \"STATUS_READY\"\r\n }\r\n}\r\n\r\n\r\nI had to modify some port service setup in the chart so it can be that some of feast have connection issues between kafka etc.\r\nBut There are no errors in the logs of the core and serving pod.\r\n\r\nWhat can 
be the problem and how is a way to debug that?\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Union\n\nfrom tqdm import tqdm\n\n\nclass AbstractProducer:\n \"\"\"\n Abstract class for Kafka producers\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n self.brokers = brokers\n self.row_count = row_count\n self.error_count = 0\n self.last_exception = \"\"\n\n # Progress bar will always display average rate\n self.pbar = tqdm(\n total=row_count, unit=\"rows\", smoothing=0, disable=disable_progress_bar\n )\n\n def produce(self, topic: str, data: bytes):\n message = \"{} should implement a produce method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def flush(self, timeout: int):\n message = \"{} should implement a flush method\".format(self.__class__.__name__)\n raise NotImplementedError(message)\n\n def _inc_pbar(self, meta):\n self.pbar.update(1)\n\n def _set_error(self, exception: str):\n self.error_count += 1\n self.last_exception = exception\n\n def print_results(self) -> None:\n \"\"\"\n Print ingestion statistics.\n\n Returns:\n None: None\n \"\"\"\n # Refresh and close tqdm progress bar\n self.pbar.refresh()\n\n self.pbar.close()\n\n print(\"Ingestion complete!\")\n\n failed_message = (\n \"\"\n if self.error_count == 0\n else f\"\\nFail: {self.error_count / self.row_count}\"\n )\n\n last_exception_message = (\n \"\"\n if self.last_exception == \"\"\n else f\"\\nLast exception:\\n{self.last_exception}\"\n )\n\n print(\n f\"\\nIngestion statistics:\"\n f\"\\nSuccess: {self.pbar.n}/{self.row_count}\"\n f\"{failed_message}\"\n f\"{last_exception_message}\"\n )\n return None\n\n\nclass ConfluentProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Confluent Kafka producer (confluent-kafka)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from confluent_kafka import Producer\n\n self.producer = Producer({\"bootstrap.servers\": brokers})\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes) -> None:\n \"\"\"\n Generic produce that implements confluent-kafka's produce method to\n push a byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n None: None.\n \"\"\"\n\n try:\n self.producer.produce(topic, value=value, callback=self._delivery_callback)\n # Serve delivery callback queue.\n # NOTE: Since produce() is an asynchronous API this poll() call\n # will most likely not serve the delivery callback for the\n # last produce()d message.\n self.producer.poll(0)\n except Exception as ex:\n self._set_error(str(ex))\n\n return None\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements confluent-kafka's flush method.\n\n Args:\n timeout (Optional[int]): Timeout in seconds to wait for completion.\n\n Returns:\n int: 
Number of messages still in queue.\n \"\"\"\n return self.producer.flush(timeout=timeout)\n\n def _delivery_callback(self, err: str, msg) -> None:\n \"\"\"\n Optional per-message delivery callback (triggered by poll() or flush())\n when a message has been successfully delivered or permanently failed\n delivery (after retries).\n\n Although the msg argument is not used, the current method signature is\n required as specified in the confluent-kafka documentation.\n\n Args:\n err (str): Error message.\n msg (): Kafka message.\n\n Returns:\n None\n \"\"\"\n if err:\n self._set_error(err)\n else:\n self._inc_pbar(None)\n\n\nclass KafkaPythonProducer(AbstractProducer):\n \"\"\"\n Concrete implementation of Python Kafka producer (kafka-python)\n \"\"\"\n\n def __init__(self, brokers: str, row_count: int, disable_progress_bar: bool):\n from kafka import KafkaProducer\n\n self.producer = KafkaProducer(bootstrap_servers=[brokers])\n super().__init__(brokers, row_count, disable_progress_bar)\n\n def produce(self, topic: str, value: bytes):\n \"\"\"\n Generic produce that implements kafka-python's send method to push a\n byte encoded object into a Kafka topic.\n\n Args:\n topic (str): Kafka topic.\n value (bytes): Byte encoded object.\n\n Returns:\n FutureRecordMetadata: resolves to RecordMetadata\n\n Raises:\n KafkaTimeoutError: if unable to fetch topic metadata, or unable\n to obtain memory buffer prior to configured max_block_ms\n \"\"\"\n return (\n self.producer.send(topic, value=value)\n .add_callback(self._inc_pbar)\n .add_errback(self._set_error)\n )\n\n def flush(self, timeout: Optional[int]):\n \"\"\"\n Generic flush that implements kafka-python's flush method.\n\n Args:\n timeout (Optional[int]): timeout in seconds to wait for completion.\n\n Returns:\n None\n\n Raises:\n KafkaTimeoutError: failure to flush buffered records within the\n provided timeout\n \"\"\"\n return self.producer.flush(timeout=timeout)\n\n\ndef get_producer(\n brokers: str, row_count: int, disable_progress_bar: bool\n) -> Union[ConfluentProducer, KafkaPythonProducer]:\n \"\"\"\n Simple context helper function that returns a AbstractProducer object when\n invoked.\n\n This helper function will try to import confluent-kafka as a producer first.\n\n This helper function will fallback to kafka-python if it fails to import\n confluent-kafka.\n\n Args:\n brokers (str): Kafka broker information with hostname and port.\n row_count (int): Number of rows in table\n\n Returns:\n Union[ConfluentProducer, KafkaPythonProducer]:\n Concrete implementation of a Kafka producer. Ig can be:\n * confluent-kafka producer\n * kafka-python producer\n \"\"\"\n try:\n return ConfluentProducer(brokers, row_count, disable_progress_bar)\n except ImportError:\n print(\"Unable to import confluent-kafka, falling back to kafka-python\")\n return KafkaPythonProducer(brokers, row_count, disable_progress_bar)\n", "path": "sdk/python/feast/loaders/abstract_producer.py"}]} | 3,080 | 571 |
gh_patches_debug_33750 | rasdani/github-patches | git_diff | conan-io__conan-4349 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix experimental make generator CONAN_CPPFLAGS and CONAN_INCLUDE_PATHS
Following the conversation here https://github.com/conan-io/conan/issues/4286#issuecomment-454194188
We have finally reached the conclusion of having ``cpp_info.cppflags`` converted to ``CONAN_CXXFLAGS`` in the ``make`` generator, to be consistent with other generators such as ``cmake``.
Also the flag ``CONAN_INCLUDE_PATHS`` should be renamed to ``CONAN_INCLUDE_DIRS`` for the same reason.
In another issue we would probably introduce a ``cpp_info.cxxflags`` that would be an internal alias of ``cpp_info.cppflags`` to avoid this confusion without breaking.
cc/ @solvingj
</issue>
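Concretely, the request is a rename of the variables the ``make`` generator emits. As a plain old-name to new-name mapping (sketch only -- the reference diff later in this record applies the same ``*_PATHS`` to ``*_DIRS`` pattern to the remaining path variables as well):
```python
# Sketch only: the renames requested by the issue.
RENAMED_MAKE_VARS = {
    "CONAN_CPPFLAGS": "CONAN_CXXFLAGS",           # align with the cmake generator
    "CONAN_INCLUDE_PATHS": "CONAN_INCLUDE_DIRS",  # *_PATHS -> *_DIRS
}
```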
<code>
[start of conans/client/generators/make.py]
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_MAKE
3
4
5 class MakeGenerator(Generator):
6
7 def __init__(self, conanfile):
8 Generator.__init__(self, conanfile)
9 self.makefile_newline = "\n"
10 self.makefile_line_continuation = " \\\n"
11 self.assignment_if_absent = " ?= "
12 self.assignment_append = " += "
13
14 @property
15 def filename(self):
16 return BUILD_INFO_MAKE
17
18 @property
19 def content(self):
20
21 content = [
22 "#-------------------------------------------------------------------#",
23 "# Makefile variables from Conan Dependencies #",
24 "#-------------------------------------------------------------------#",
25 "",
26 ]
27
28 for line_as_list in self.create_deps_content():
29 content.append("".join(line_as_list))
30
31 content.append("#-------------------------------------------------------------------#")
32 content.append(self.makefile_newline)
33 return self.makefile_newline.join(content)
34
35 def create_deps_content(self):
36 deps_content = self.create_content_from_deps()
37 deps_content.extend(self.create_combined_content())
38 return deps_content
39
40 def create_content_from_deps(self):
41 content = []
42 for pkg_name, cpp_info in self.deps_build_info.dependencies:
43 content.extend(self.create_content_from_dep(pkg_name, cpp_info))
44 return content
45
46 def create_content_from_dep(self, pkg_name, cpp_info):
47
48 vars_info = [("ROOT", self.assignment_if_absent, [cpp_info.rootpath]),
49 ("SYSROOT", self.assignment_if_absent, [cpp_info.sysroot]),
50 ("INCLUDE_PATHS", self.assignment_append, cpp_info.include_paths),
51 ("LIB_PATHS", self.assignment_append, cpp_info.lib_paths),
52 ("BIN_PATHS", self.assignment_append, cpp_info.bin_paths),
53 ("BUILD_PATHS", self.assignment_append, cpp_info.build_paths),
54 ("RES_PATHS", self.assignment_append, cpp_info.res_paths),
55 ("LIBS", self.assignment_append, cpp_info.libs),
56 ("DEFINES", self.assignment_append, cpp_info.defines),
57 ("CFLAGS", self.assignment_append, cpp_info.cflags),
58 ("CPPFLAGS", self.assignment_append, cpp_info.cppflags),
59 ("SHAREDLINKFLAGS", self.assignment_append, cpp_info.sharedlinkflags),
60 ("EXELINKFLAGS", self.assignment_append, cpp_info.exelinkflags)]
61
62 return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)
63 for var_name, operator, info in vars_info]
64
65 def create_combined_content(self):
66 content = []
67 for var_name in self.all_dep_vars():
68 content.append(self.create_makefile_var_global(var_name, self.assignment_append,
69 self.create_combined_var_list(var_name)))
70 return content
71
72 def create_combined_var_list(self, var_name):
73 make_vars = []
74 for pkg_name, _ in self.deps_build_info.dependencies:
75 pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)
76 make_vars.append("$({pkg_var})".format(pkg_var=pkg_var))
77 return make_vars
78
79 def create_makefile_var_global(self, var_name, operator, values):
80 make_var = [self.create_makefile_var_name_global(var_name)]
81 make_var.extend(self.create_makefile_var_common(operator, values))
82 return make_var
83
84 def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):
85 make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]
86 make_var.extend(self.create_makefile_var_common(operator, values))
87 return make_var
88
89 def create_makefile_var_common(self, operator, values):
90 return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),
91 self.makefile_newline]
92
93 @staticmethod
94 def create_makefile_var_name_global(var_name):
95 return "CONAN_{var}".format(var=var_name).upper()
96
97 @staticmethod
98 def create_makefile_var_name_pkg(var_name, pkg_name):
99 return "CONAN_{var}_{lib}".format(var=var_name, lib=pkg_name).upper()
100
101 def create_makefile_var_value(self, values):
102 formatted_values = [value.replace("\\", "/") for value in values]
103 return self.makefile_line_continuation.join(formatted_values)
104
105 @staticmethod
106 def all_dep_vars():
107 return ["rootpath", "sysroot", "include_paths", "lib_paths", "bin_paths", "build_paths",
108 "res_paths", "libs", "defines", "cflags", "cppflags", "sharedlinkflags",
109 "exelinkflags"]
110
[end of conans/client/generators/make.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/generators/make.py b/conans/client/generators/make.py
--- a/conans/client/generators/make.py
+++ b/conans/client/generators/make.py
@@ -47,15 +47,15 @@
vars_info = [("ROOT", self.assignment_if_absent, [cpp_info.rootpath]),
("SYSROOT", self.assignment_if_absent, [cpp_info.sysroot]),
- ("INCLUDE_PATHS", self.assignment_append, cpp_info.include_paths),
- ("LIB_PATHS", self.assignment_append, cpp_info.lib_paths),
- ("BIN_PATHS", self.assignment_append, cpp_info.bin_paths),
- ("BUILD_PATHS", self.assignment_append, cpp_info.build_paths),
- ("RES_PATHS", self.assignment_append, cpp_info.res_paths),
+ ("INCLUDE_DIRS", self.assignment_append, cpp_info.include_paths),
+ ("LIB_DIRS", self.assignment_append, cpp_info.lib_paths),
+ ("BIN_DIRS", self.assignment_append, cpp_info.bin_paths),
+ ("BUILD_DIRS", self.assignment_append, cpp_info.build_paths),
+ ("RES_DIRS", self.assignment_append, cpp_info.res_paths),
("LIBS", self.assignment_append, cpp_info.libs),
("DEFINES", self.assignment_append, cpp_info.defines),
("CFLAGS", self.assignment_append, cpp_info.cflags),
- ("CPPFLAGS", self.assignment_append, cpp_info.cppflags),
+ ("CXXFLAGS", self.assignment_append, cpp_info.cppflags),
("SHAREDLINKFLAGS", self.assignment_append, cpp_info.sharedlinkflags),
("EXELINKFLAGS", self.assignment_append, cpp_info.exelinkflags)]
@@ -104,6 +104,6 @@
@staticmethod
def all_dep_vars():
- return ["rootpath", "sysroot", "include_paths", "lib_paths", "bin_paths", "build_paths",
- "res_paths", "libs", "defines", "cflags", "cppflags", "sharedlinkflags",
+ return ["rootpath", "sysroot", "include_dirs", "lib_dirs", "bin_dirs", "build_dirs",
+ "res_dirs", "libs", "defines", "cflags", "cxxflags", "sharedlinkflags",
"exelinkflags"]
| {"golden_diff": "diff --git a/conans/client/generators/make.py b/conans/client/generators/make.py\n--- a/conans/client/generators/make.py\n+++ b/conans/client/generators/make.py\n@@ -47,15 +47,15 @@\n \n vars_info = [(\"ROOT\", self.assignment_if_absent, [cpp_info.rootpath]),\n (\"SYSROOT\", self.assignment_if_absent, [cpp_info.sysroot]),\n- (\"INCLUDE_PATHS\", self.assignment_append, cpp_info.include_paths),\n- (\"LIB_PATHS\", self.assignment_append, cpp_info.lib_paths),\n- (\"BIN_PATHS\", self.assignment_append, cpp_info.bin_paths),\n- (\"BUILD_PATHS\", self.assignment_append, cpp_info.build_paths),\n- (\"RES_PATHS\", self.assignment_append, cpp_info.res_paths),\n+ (\"INCLUDE_DIRS\", self.assignment_append, cpp_info.include_paths),\n+ (\"LIB_DIRS\", self.assignment_append, cpp_info.lib_paths),\n+ (\"BIN_DIRS\", self.assignment_append, cpp_info.bin_paths),\n+ (\"BUILD_DIRS\", self.assignment_append, cpp_info.build_paths),\n+ (\"RES_DIRS\", self.assignment_append, cpp_info.res_paths),\n (\"LIBS\", self.assignment_append, cpp_info.libs),\n (\"DEFINES\", self.assignment_append, cpp_info.defines),\n (\"CFLAGS\", self.assignment_append, cpp_info.cflags),\n- (\"CPPFLAGS\", self.assignment_append, cpp_info.cppflags),\n+ (\"CXXFLAGS\", self.assignment_append, cpp_info.cppflags),\n (\"SHAREDLINKFLAGS\", self.assignment_append, cpp_info.sharedlinkflags),\n (\"EXELINKFLAGS\", self.assignment_append, cpp_info.exelinkflags)]\n \n@@ -104,6 +104,6 @@\n \n @staticmethod\n def all_dep_vars():\n- return [\"rootpath\", \"sysroot\", \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\",\n- \"res_paths\", \"libs\", \"defines\", \"cflags\", \"cppflags\", \"sharedlinkflags\",\n+ return [\"rootpath\", \"sysroot\", \"include_dirs\", \"lib_dirs\", \"bin_dirs\", \"build_dirs\",\n+ \"res_dirs\", \"libs\", \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\",\n \"exelinkflags\"]\n", "issue": "Fix experimental make generator CONAN_CPPFLAGS and CONAN_INCLUDE_PATHS\nFollowing the conversation here https://github.com/conan-io/conan/issues/4286#issuecomment-454194188\r\n\r\nWe have finally reached the conclusion of having ``cpp_info.cppflags`` converted to ``CONAN_CXXFLAGS`` in the ``make`` generator to be consistent with orhter generators such as ``cmake``.\r\n\r\nAlso the flag ``CONAN_INCLUDE_PATHS`` should be renamed to ``CONAN_INCLUDE_DIRS`` for the same reason.\r\n\r\nIn another issue we would probably introduce a ``cpp_info.cxxflags`` that would be an internal alias of ``cpp_info.cppflags`` to avoid this confusion without breaking.\r\n\r\ncc/ @solvingj \n", "before_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_MAKE\n\n\nclass MakeGenerator(Generator):\n\n def __init__(self, conanfile):\n Generator.__init__(self, conanfile)\n self.makefile_newline = \"\\n\"\n self.makefile_line_continuation = \" \\\\\\n\"\n self.assignment_if_absent = \" ?= \"\n self.assignment_append = \" += \"\n\n @property\n def filename(self):\n return BUILD_INFO_MAKE\n\n @property\n def content(self):\n\n content = [\n \"#-------------------------------------------------------------------#\",\n \"# Makefile variables from Conan Dependencies #\",\n \"#-------------------------------------------------------------------#\",\n \"\",\n ]\n\n for line_as_list in self.create_deps_content():\n content.append(\"\".join(line_as_list))\n\n content.append(\"#-------------------------------------------------------------------#\")\n content.append(self.makefile_newline)\n return 
self.makefile_newline.join(content)\n\n def create_deps_content(self):\n deps_content = self.create_content_from_deps()\n deps_content.extend(self.create_combined_content())\n return deps_content\n\n def create_content_from_deps(self):\n content = []\n for pkg_name, cpp_info in self.deps_build_info.dependencies:\n content.extend(self.create_content_from_dep(pkg_name, cpp_info))\n return content\n\n def create_content_from_dep(self, pkg_name, cpp_info):\n\n vars_info = [(\"ROOT\", self.assignment_if_absent, [cpp_info.rootpath]),\n (\"SYSROOT\", self.assignment_if_absent, [cpp_info.sysroot]),\n (\"INCLUDE_PATHS\", self.assignment_append, cpp_info.include_paths),\n (\"LIB_PATHS\", self.assignment_append, cpp_info.lib_paths),\n (\"BIN_PATHS\", self.assignment_append, cpp_info.bin_paths),\n (\"BUILD_PATHS\", self.assignment_append, cpp_info.build_paths),\n (\"RES_PATHS\", self.assignment_append, cpp_info.res_paths),\n (\"LIBS\", self.assignment_append, cpp_info.libs),\n (\"DEFINES\", self.assignment_append, cpp_info.defines),\n (\"CFLAGS\", self.assignment_append, cpp_info.cflags),\n (\"CPPFLAGS\", self.assignment_append, cpp_info.cppflags),\n (\"SHAREDLINKFLAGS\", self.assignment_append, cpp_info.sharedlinkflags),\n (\"EXELINKFLAGS\", self.assignment_append, cpp_info.exelinkflags)]\n\n return [self.create_makefile_var_pkg(var_name, pkg_name, operator, info)\n for var_name, operator, info in vars_info]\n\n def create_combined_content(self):\n content = []\n for var_name in self.all_dep_vars():\n content.append(self.create_makefile_var_global(var_name, self.assignment_append,\n self.create_combined_var_list(var_name)))\n return content\n\n def create_combined_var_list(self, var_name):\n make_vars = []\n for pkg_name, _ in self.deps_build_info.dependencies:\n pkg_var = self.create_makefile_var_name_pkg(var_name, pkg_name)\n make_vars.append(\"$({pkg_var})\".format(pkg_var=pkg_var))\n return make_vars\n\n def create_makefile_var_global(self, var_name, operator, values):\n make_var = [self.create_makefile_var_name_global(var_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_pkg(self, var_name, pkg_name, operator, values):\n make_var = [self.create_makefile_var_name_pkg(var_name, pkg_name)]\n make_var.extend(self.create_makefile_var_common(operator, values))\n return make_var\n\n def create_makefile_var_common(self, operator, values):\n return [operator, self.makefile_line_continuation, self.create_makefile_var_value(values),\n self.makefile_newline]\n\n @staticmethod\n def create_makefile_var_name_global(var_name):\n return \"CONAN_{var}\".format(var=var_name).upper()\n\n @staticmethod\n def create_makefile_var_name_pkg(var_name, pkg_name):\n return \"CONAN_{var}_{lib}\".format(var=var_name, lib=pkg_name).upper()\n\n def create_makefile_var_value(self, values):\n formatted_values = [value.replace(\"\\\\\", \"/\") for value in values]\n return self.makefile_line_continuation.join(formatted_values)\n\n @staticmethod\n def all_dep_vars():\n return [\"rootpath\", \"sysroot\", \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\",\n \"res_paths\", \"libs\", \"defines\", \"cflags\", \"cppflags\", \"sharedlinkflags\",\n \"exelinkflags\"]\n", "path": "conans/client/generators/make.py"}]} | 1,902 | 498 |
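For reference, the patch in this record is a pure renaming of the emitted Makefile variables (`*_PATHS` to `*_DIRS`, `CPPFLAGS` to `CXXFLAGS`); the name-building helpers in `make.py` are untouched. A small standalone illustration of the resulting names (functions mirroring the static methods shown above; the package name `zlib` is just an example):

```python
# Mirrors create_makefile_var_name_global / _pkg from conans/client/generators/make.py.
def var_name_global(var_name):
    return "CONAN_{var}".format(var=var_name).upper()


def var_name_pkg(var_name, pkg_name):
    return "CONAN_{var}_{lib}".format(var=var_name, lib=pkg_name).upper()


print(var_name_global("include_dirs"))   # CONAN_INCLUDE_DIRS (was CONAN_INCLUDE_PATHS)
print(var_name_pkg("cxxflags", "zlib"))  # CONAN_CXXFLAGS_ZLIB (was CONAN_CPPFLAGS_ZLIB)
```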
gh_patches_debug_13502 | rasdani/github-patches | git_diff | mne-tools__mne-bids-111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
prune dependencies that we do not really depend on
As became apparent in a discussion with @agramfort and @jasmainak, we probably do not need the `environment.yml` and instead should rely on minimal dependencies such as numpy, scipy, and matplotlib.
If we decide to keep the `environment.yml` for convenience during installation, we should at least prune it.
</issue>
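One consequence of slimming the dependency list is that helpers that only need a simple HTTP download can drop `requests` and use the standard library instead. A minimal sketch of that direction (the import style is an assumption; routing through `six.moves` keeps Python 2 compatibility, and the function name is illustrative):

```python
from six.moves import urllib  # assumes six is already available transitively


def download_file(url, fname):
    """Fetch url and write the raw bytes to fname, without requests."""
    response = urllib.request.urlopen(url)
    with open(fname, 'wb') as fout:
        fout.write(response.read())
```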
<code>
[start of mne_bids/datasets.py]
1 """Helper functions to fetch data to work with."""
2 # Authors: Mainak Jas <[email protected]>
3 # Alexandre Gramfort <[email protected]>
4 # Teon Brooks <[email protected]>
5 # Stefan Appelhoff <[email protected]>
6 #
7 # License: BSD (3-clause)
8
9 import os
10 import os.path as op
11 import shutil
12 import tarfile
13 import requests
14
15 from mne.utils import _fetch_file
16
17
18 def fetch_faces_data(data_path=None, repo='ds000117', subject_ids=[1]):
19 """Dataset fetcher for OpenfMRI dataset ds000117.
20
21 Parameters
22 ----------
23 data_path : str | None
24 Path to the folder where data is stored. Defaults to
25 '~/mne_data/mne_bids_examples'
26 repo : str
27 The folder name. Defaults to 'ds000117'.
28 subject_ids : list of int
29 The subjects to fetch. Defaults to [1], downloading subject 1.
30
31 Returns
32 -------
33 data_path : str
34 Path to the folder where data is stored.
35
36 """
37 if not data_path:
38 home = os.path.expanduser('~')
39 data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')
40 if not os.path.exists(data_path):
41 os.makedirs(data_path)
42
43 for subject_id in subject_ids:
44 src_url = ('http://openfmri.s3.amazonaws.com/tarballs/'
45 'ds117_R0.1.1_sub%03d_raw.tgz' % subject_id)
46 tar_fname = op.join(data_path, repo + '.tgz')
47 target_dir = op.join(data_path, repo)
48 if not op.exists(target_dir):
49 if not op.exists(tar_fname):
50 _fetch_file(url=src_url, file_name=tar_fname,
51 print_destination=True, resume=True, timeout=10.)
52 tf = tarfile.open(tar_fname)
53 print('Extracting files. This may take a while ...')
54 tf.extractall(path=data_path)
55 shutil.move(op.join(data_path, 'ds117'), target_dir)
56 os.remove(tar_fname)
57
58 return data_path
59
60
61 def fetch_brainvision_testing_data(data_path=None):
62 """Download the MNE-Python testing data for the BrainVision format.
63
64 Parameters
65 ----------
66 data_path : str | None
67 Path to the folder where data is stored. Defaults to
68 '~/mne_data/mne_bids_examples'
69
70 Returns
71 -------
72 data_path : str
73 Path to the folder where data is stored.
74
75 """
76 if not data_path:
77 home = os.path.expanduser('~')
78 data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')
79 if not os.path.exists(data_path):
80 os.makedirs(data_path)
81
82 base_url = 'https://github.com/mne-tools/mne-python/'
83 base_url += 'raw/master/mne/io/brainvision/tests/data/test'
84 file_endings = ['.vhdr', '.vmrk', '.eeg', ]
85
86 for f_ending in file_endings:
87 url = base_url + f_ending
88 response = requests.get(url)
89
90 fname = os.path.join(data_path, 'test' + f_ending)
91 with open(fname, 'wb') as fout:
92 fout.write(response.content)
93
94 return data_path
95
[end of mne_bids/datasets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mne_bids/datasets.py b/mne_bids/datasets.py
--- a/mne_bids/datasets.py
+++ b/mne_bids/datasets.py
@@ -10,7 +10,7 @@
import os.path as op
import shutil
import tarfile
-import requests
+from six.moves import urllib
from mne.utils import _fetch_file
@@ -85,10 +85,10 @@
for f_ending in file_endings:
url = base_url + f_ending
- response = requests.get(url)
+ response = urllib.request.urlopen(url)
fname = os.path.join(data_path, 'test' + f_ending)
with open(fname, 'wb') as fout:
- fout.write(response.content)
+ fout.write(response.read())
return data_path
| {"golden_diff": "diff --git a/mne_bids/datasets.py b/mne_bids/datasets.py\n--- a/mne_bids/datasets.py\n+++ b/mne_bids/datasets.py\n@@ -10,7 +10,7 @@\n import os.path as op\n import shutil\n import tarfile\n-import requests\n+from six.moves import urllib\n \n from mne.utils import _fetch_file\n \n@@ -85,10 +85,10 @@\n \n for f_ending in file_endings:\n url = base_url + f_ending\n- response = requests.get(url)\n+ response = urllib.request.urlopen(url)\n \n fname = os.path.join(data_path, 'test' + f_ending)\n with open(fname, 'wb') as fout:\n- fout.write(response.content)\n+ fout.write(response.read())\n \n return data_path\n", "issue": "prune dependencies that we do not really depend on\nAs became apparent in a discussion with @agramfort and @jasmainak, we probably do not need the `environment.yml` and instead should rely on minimal dependencies such as numpy, scipy, and matplotlib.\r\n\r\nif we decide to keep the `environment.yml` for convenience during installation, we should at least prune it.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Helper functions to fetch data to work with.\"\"\"\n# Authors: Mainak Jas <[email protected]>\n# Alexandre Gramfort <[email protected]>\n# Teon Brooks <[email protected]>\n# Stefan Appelhoff <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os\nimport os.path as op\nimport shutil\nimport tarfile\nimport requests\n\nfrom mne.utils import _fetch_file\n\n\ndef fetch_faces_data(data_path=None, repo='ds000117', subject_ids=[1]):\n \"\"\"Dataset fetcher for OpenfMRI dataset ds000117.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. Defaults to\n '~/mne_data/mne_bids_examples'\n repo : str\n The folder name. Defaults to 'ds000117'.\n subject_ids : list of int\n The subjects to fetch. Defaults to [1], downloading subject 1.\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n for subject_id in subject_ids:\n src_url = ('http://openfmri.s3.amazonaws.com/tarballs/'\n 'ds117_R0.1.1_sub%03d_raw.tgz' % subject_id)\n tar_fname = op.join(data_path, repo + '.tgz')\n target_dir = op.join(data_path, repo)\n if not op.exists(target_dir):\n if not op.exists(tar_fname):\n _fetch_file(url=src_url, file_name=tar_fname,\n print_destination=True, resume=True, timeout=10.)\n tf = tarfile.open(tar_fname)\n print('Extracting files. This may take a while ...')\n tf.extractall(path=data_path)\n shutil.move(op.join(data_path, 'ds117'), target_dir)\n os.remove(tar_fname)\n\n return data_path\n\n\ndef fetch_brainvision_testing_data(data_path=None):\n \"\"\"Download the MNE-Python testing data for the BrainVision format.\n\n Parameters\n ----------\n data_path : str | None\n Path to the folder where data is stored. 
Defaults to\n '~/mne_data/mne_bids_examples'\n\n Returns\n -------\n data_path : str\n Path to the folder where data is stored.\n\n \"\"\"\n if not data_path:\n home = os.path.expanduser('~')\n data_path = os.path.join(home, 'mne_data', 'mne_bids_examples')\n if not os.path.exists(data_path):\n os.makedirs(data_path)\n\n base_url = 'https://github.com/mne-tools/mne-python/'\n base_url += 'raw/master/mne/io/brainvision/tests/data/test'\n file_endings = ['.vhdr', '.vmrk', '.eeg', ]\n\n for f_ending in file_endings:\n url = base_url + f_ending\n response = requests.get(url)\n\n fname = os.path.join(data_path, 'test' + f_ending)\n with open(fname, 'wb') as fout:\n fout.write(response.content)\n\n return data_path\n", "path": "mne_bids/datasets.py"}]} | 1,568 | 185 |
gh_patches_debug_17256 | rasdani/github-patches | git_diff | apluslms__a-plus-1352 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Login should not take the user to the front page
Common scenario: the user is browsing a particular course module. They aren’t logged in. They decide to log in, but doing so takes them to the A+ front page, from which they have to navigate back to where they were. Inconvenient.
</issue>
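Django's built-in login view already honors a `next` query parameter, so one way to fix this is to make the login link carry the current path. A hedged sketch of a template tag that builds that query string (the tag name and the logout guard are illustrative; the project may wire it differently):

```python
from django import template
from django.urls import resolve
from django.urls.exceptions import Resolver404

register = template.Library()


@register.simple_tag(takes_context=True)
def login_next(context):
    """Return '?next=<current path>' so login redirects back to this page."""
    request = context["request"]
    try:
        matched_url_name = resolve(request.path).url_name
    except Resolver404:
        return ""
    # Don't send a freshly logged-in user straight back to the logout page.
    return "" if matched_url_name == "logout" else f"?next={request.path}"
```

In a template the tag would be appended to the login URL, for example `<a href="{% url 'login' %}{% login_next %}">` (template usage is an assumption; the relevant templates are not shown here).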
<code>
[start of course/templatetags/base.py]
1 from datetime import datetime
2
3 from django import template
4 from django.conf import settings
5 from django.utils.safestring import mark_safe
6 from django.utils.text import format_lazy
7 from django.utils.translation import get_language, gettext_lazy as _
8 from lib.helpers import remove_query_param_from_url, settings_text, update_url_params
9 from exercise.submission_models import PendingSubmission
10 from site_alert.models import SiteAlert
11
12
13 register = template.Library()
14
15
16 def pick_localized(message):
17 if message and isinstance(message, dict):
18 return (message.get(get_language()) or
19 message.get(settings.LANGUAGE_CODE[:2]) or
20 list(message.values())[0])
21 return message
22
23
24 def get_date(cont, key):
25 data = cont.get(key)
26 if data and not isinstance(data, datetime):
27 data = datetime.strptime(data, '%Y-%m-%d')
28 cont[key] = data
29 return data
30
31
32 @register.simple_tag
33 def brand_name():
34 return mark_safe(settings.BRAND_NAME)
35
36
37 @register.simple_tag
38 def brand_name_long():
39 return mark_safe(settings.BRAND_NAME_LONG)
40
41
42 @register.simple_tag
43 def brand_institution_name():
44 return mark_safe(settings_text('BRAND_INSTITUTION_NAME'))
45
46
47 @register.simple_tag
48 def course_alert(instance):
49 exercises = PendingSubmission.objects.get_exercise_names_if_grader_is_unstable(instance)
50 if exercises:
51 message = format_lazy(
52 _('GRADER_PROBLEMS_ALERT -- {exercises}'),
53 exercises=exercises,
54 )
55 return mark_safe(format_lazy('<div class="alert alert-danger sticky-alert">{message}</div>', message=message))
56 return ''
57
58
59 @register.simple_tag
60 def site_alert():
61 alerts = SiteAlert.objects.filter(status=SiteAlert.STATUS.ACTIVE)
62 return mark_safe(
63 ''.join(
64 '<div class="alert alert-danger">{}</div>'.format(pick_localized(alert.alert))
65 for alert in alerts
66 )
67 )
68
69
70 @register.simple_tag
71 def site_advert(): # pylint: disable=inconsistent-return-statements
72 advert = settings.SITEWIDE_ADVERT
73 if not advert or not isinstance(advert, dict):
74 return
75 not_before = get_date(advert, 'not-before')
76 not_after = get_date(advert, 'not-after')
77 if not_before or not_after:
78 now = datetime.now()
79 if not_before and not_before > now:
80 return
81 if not_after and not_after < now:
82 return
83 return {k: pick_localized(advert.get(k))
84 for k in ('title', 'text', 'href', 'image')}
85
86
87 @register.simple_tag
88 def tracking_html():
89 return mark_safe(settings.TRACKING_HTML)
90
91
92 @register.filter
93 def localized_url(path, language=None):
94 base_url = settings.BASE_URL
95 if base_url.endswith('/'):
96 base_url = base_url[:-1]
97 path = remove_query_param_from_url(path, 'hl')
98 if not language:
99 language = settings.LANGUAGE_CODE.split('-')[0]
100 path = update_url_params(path, { 'hl': language })
101 return base_url + path
102
[end of course/templatetags/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/course/templatetags/base.py b/course/templatetags/base.py
--- a/course/templatetags/base.py
+++ b/course/templatetags/base.py
@@ -2,6 +2,8 @@
from django import template
from django.conf import settings
+from django.urls import resolve
+from django.urls.exceptions import Resolver404
from django.utils.safestring import mark_safe
from django.utils.text import format_lazy
from django.utils.translation import get_language, gettext_lazy as _
@@ -28,6 +30,15 @@
cont[key] = data
return data
[email protected]_tag(takes_context=True)
+def login_next(context):
+ request = context["request"]
+ try:
+ matched_url_name = resolve(request.path).url_name
+ next_path = f"?next={request.path}" if matched_url_name != 'logout' else ""
+ return next_path
+ except Resolver404:
+ return ""
@register.simple_tag
def brand_name():
| {"golden_diff": "diff --git a/course/templatetags/base.py b/course/templatetags/base.py\n--- a/course/templatetags/base.py\n+++ b/course/templatetags/base.py\n@@ -2,6 +2,8 @@\n \n from django import template\n from django.conf import settings\n+from django.urls import resolve\n+from django.urls.exceptions import Resolver404\n from django.utils.safestring import mark_safe\n from django.utils.text import format_lazy\n from django.utils.translation import get_language, gettext_lazy as _\n@@ -28,6 +30,15 @@\n cont[key] = data\n return data\n \[email protected]_tag(takes_context=True)\n+def login_next(context):\n+ request = context[\"request\"]\n+ try:\n+ matched_url_name = resolve(request.path).url_name\n+ next_path = f\"?next={request.path}\" if matched_url_name != 'logout' else \"\"\n+ return next_path\n+ except Resolver404:\n+ return \"\"\n \n @register.simple_tag\n def brand_name():\n", "issue": "Login should not take the user to the front page\nCommon scenario: the user is browsing a particular course module. They aren\u2019t logged in. They decide to log in, but doing so takes them to the A+ front page, from which they have to navigate back to where they were. Inconvenient.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django import template\nfrom django.conf import settings\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import get_language, gettext_lazy as _\nfrom lib.helpers import remove_query_param_from_url, settings_text, update_url_params\nfrom exercise.submission_models import PendingSubmission\nfrom site_alert.models import SiteAlert\n\n\nregister = template.Library()\n\n\ndef pick_localized(message):\n if message and isinstance(message, dict):\n return (message.get(get_language()) or\n message.get(settings.LANGUAGE_CODE[:2]) or\n list(message.values())[0])\n return message\n\n\ndef get_date(cont, key):\n data = cont.get(key)\n if data and not isinstance(data, datetime):\n data = datetime.strptime(data, '%Y-%m-%d')\n cont[key] = data\n return data\n\n\[email protected]_tag\ndef brand_name():\n return mark_safe(settings.BRAND_NAME)\n\n\[email protected]_tag\ndef brand_name_long():\n return mark_safe(settings.BRAND_NAME_LONG)\n\n\[email protected]_tag\ndef brand_institution_name():\n return mark_safe(settings_text('BRAND_INSTITUTION_NAME'))\n\n\[email protected]_tag\ndef course_alert(instance):\n exercises = PendingSubmission.objects.get_exercise_names_if_grader_is_unstable(instance)\n if exercises:\n message = format_lazy(\n _('GRADER_PROBLEMS_ALERT -- {exercises}'),\n exercises=exercises,\n )\n return mark_safe(format_lazy('<div class=\"alert alert-danger sticky-alert\">{message}</div>', message=message))\n return ''\n\n\[email protected]_tag\ndef site_alert():\n alerts = SiteAlert.objects.filter(status=SiteAlert.STATUS.ACTIVE)\n return mark_safe(\n ''.join(\n '<div class=\"alert alert-danger\">{}</div>'.format(pick_localized(alert.alert))\n for alert in alerts\n )\n )\n\n\[email protected]_tag\ndef site_advert(): # pylint: disable=inconsistent-return-statements\n advert = settings.SITEWIDE_ADVERT\n if not advert or not isinstance(advert, dict):\n return\n not_before = get_date(advert, 'not-before')\n not_after = get_date(advert, 'not-after')\n if not_before or not_after:\n now = datetime.now()\n if not_before and not_before > now:\n return\n if not_after and not_after < now:\n return\n return {k: pick_localized(advert.get(k))\n for k in ('title', 'text', 'href', 
'image')}\n\n\[email protected]_tag\ndef tracking_html():\n return mark_safe(settings.TRACKING_HTML)\n\n\[email protected]\ndef localized_url(path, language=None):\n base_url = settings.BASE_URL\n if base_url.endswith('/'):\n base_url = base_url[:-1]\n path = remove_query_param_from_url(path, 'hl')\n if not language:\n language = settings.LANGUAGE_CODE.split('-')[0]\n path = update_url_params(path, { 'hl': language })\n return base_url + path\n", "path": "course/templatetags/base.py"}]} | 1,466 | 229 |
gh_patches_debug_1790 | rasdani/github-patches | git_diff | scikit-hep__pyhf-933 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs build broken with Sphinx v3.1.1
# Description
After the new Sphinx patch release [`v3.1.1`](https://github.com/sphinx-doc/sphinx/releases/tag/v3.1.1), building the docs fails because `autodoc` can no longer work out which modules to import:
```
WARNING: don't know which module to import for autodocumenting 'optimize.opt_jax.jax_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_minuit.minuit_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_pytorch.pytorch_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_scipy.scipy_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'optimize.opt_tflow.tflow_optimizer' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.jax_backend.jax_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.numpy_backend.numpy_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.pytorch_backend.pytorch_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
WARNING: don't know which module to import for autodocumenting 'tensor.tensorflow_backend.tensorflow_backend' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
```
</issue>
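Since the breakage tracks specific Sphinx releases, the practical lever is the version bound on `sphinx` in the `docs` extra. A sketch of what replacing the temporary `~=3.0.0` pin with a minimum-version bound could look like (the exact lower bound is an assumption until a Sphinx release that fixes the autodoc regression is confirmed):

```python
# Mirrors the extras_require['docs'] block in setup.py below; only the sphinx specifier changes.
docs_require = sorted(
    {
        'sphinx>=3.1.2',  # assumed first release fixing the v3.1.x autodoc regression
        'sphinxcontrib-bibtex',
        'sphinx-click',
        'sphinx_rtd_theme',
        'nbsphinx',
        'ipywidgets',
        'sphinx-issues',
        'sphinx-copybutton>0.2.9',
    }
)
```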
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'tensorflow': [
5 'tensorflow~=2.0',
6 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
7 ],
8 'torch': ['torch~=1.2'],
9 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
10 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
11 'minuit': ['iminuit'],
12 }
13 extras_require['backends'] = sorted(
14 set(
15 extras_require['tensorflow']
16 + extras_require['torch']
17 + extras_require['jax']
18 + extras_require['minuit']
19 )
20 )
21 extras_require['contrib'] = sorted(set(['matplotlib']))
22 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
23
24 extras_require['test'] = sorted(
25 set(
26 extras_require['backends']
27 + extras_require['xmlio']
28 + extras_require['contrib']
29 + [
30 'pytest~=3.5',
31 'pytest-cov>=2.5.1',
32 'pytest-mock',
33 'pytest-benchmark[histogram]',
34 'pytest-console-scripts',
35 'pytest-mpl',
36 'pydocstyle',
37 'coverage>=4.0', # coveralls
38 'papermill~=2.0',
39 'nteract-scrapbook~=0.2',
40 'jupyter',
41 'uproot~=3.3',
42 'graphviz',
43 'jsonpatch',
44 ]
45 )
46 )
47 extras_require['docs'] = sorted(
48 set(
49 [
50 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
51 'sphinxcontrib-bibtex',
52 'sphinx-click',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'ipywidgets',
56 'sphinx-issues',
57 'sphinx-copybutton>0.2.9',
58 ]
59 )
60 )
61 extras_require['develop'] = sorted(
62 set(
63 extras_require['docs']
64 + extras_require['lint']
65 + extras_require['test']
66 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
67 )
68 )
69 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
70
71
72 setup(
73 extras_require=extras_require,
74 use_scm_version=lambda: {'local_scheme': lambda version: ''},
75 )
76
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -47,7 +47,7 @@
extras_require['docs'] = sorted(
set(
[
- 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
+ 'sphinx>=3.1.2',
'sphinxcontrib-bibtex',
'sphinx-click',
'sphinx_rtd_theme',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -47,7 +47,7 @@\n extras_require['docs'] = sorted(\n set(\n [\n- 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n+ 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n", "issue": "Docs build broken with Sphinx v3.1.1\n# Description\r\n\r\nAfter the new Sphinx patch release [`v3.1.1`](https://github.com/sphinx-doc/sphinx/releases/tag/v3.1.1) was released there is an error with building the docs due to `autodocumenting`:\r\n\r\n```\r\n\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_jax.jax_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_minuit.minuit_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_pytorch.pytorch_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_scipy.scipy_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'optimize.opt_tflow.tflow_optimizer' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.jax_backend.jax_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.numpy_backend.numpy_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.pytorch_backend.pytorch_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\nWARNING: don't know which module to import for autodocumenting 'tensor.tensorflow_backend.tensorflow_backend' (try placing a \"module\" or \"currentmodule\" directive in the document, or giving an explicit module name)\r\n```\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 
'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,722 | 105 |
gh_patches_debug_1464 | rasdani/github-patches | git_diff | conda__conda-build-1716 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UnboundLocalError with --skip-existing and --no-locking flags
Hit this today on conda-build 2.1.2. Also tried with the tip of master and I get the same result. For reproducibility, this is the output from trying to build the conda.recipe folder inside of conda-build itself:
```
$ conda build conda.recipe --no-locking --skip-existing master :: 1h :: ⬢
Cloning into '/home/edill/miniconda/conda-bld/conda.recipe_1485803296268/work'...
done.
checkout: 'HEAD'
Your branch is up-to-date with 'origin/_conda_cache_origin_head'.
==> git log -n1 <==
commit 6922ec3ed1afc287a4cd7f3872572f2bef89d892
Merge: 837fbc8 c82ea9b
Author: Mike Sarahan <[email protected]>
Date: Mon Jan 30 11:38:01 2017 -0600
Merge pull request #1704 from jerowe/feature/fix-perl-build
adding some fixes to cpan skeleton
==> git describe --tags --dirty <==
2.1.2-20-g6922ec3
==> git status <==
On branch _conda_cache_origin_head
Your branch is up-to-date with 'origin/_conda_cache_origin_head'.
nothing to commit, working directory clean
updating index in: /home/edill/miniconda/conda-bld/linux-64
Traceback (most recent call last):
File "/home/edill/miniconda/bin/conda-build", line 11, in <module>
load_entry_point('conda-build', 'console_scripts', 'conda-build')()
File "/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py", line 322, in main
execute(sys.argv[1:])
File "/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py", line 313, in execute
noverify=args.no_verify)
File "/home/edill/dev/conda/conda-build/conda_build/api.py", line 97, in build
need_source_download=need_source_download, config=config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 1478, in build_tree
config=config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 928, in build
package_exists = is_package_built(m, config)
File "/home/edill/dev/conda/conda-build/conda_build/build.py", line 1633, in is_package_built
update_index(d, config, could_be_mirror=False)
File "/home/edill/dev/conda/conda-build/conda_build/index.py", line 83, in update_index
with try_acquire_locks(locks, config.timeout):
UnboundLocalError: local variable 'locks' referenced before assignment
```
And some debug info
```
$ conda info
Current conda install:
platform : linux-64
conda version : 4.2.13
conda is private : False
conda-env version : 4.2.13
conda-build version : 2.1.2+20.g6922ec3
python version : 3.5.3.final.0
requests version : 2.13.0
root environment : /home/edill/miniconda (writable)
default environment : /home/edill/miniconda
envs directories : /home/edill/miniconda/envs
package cache : /home/edill/miniconda/pkgs
channel URLs : ...
config file : /home/edill/.condarc
offline mode : False
```
</issue>
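The traceback shows that `locks` is only bound inside the `if config.locking:` branch, so running with `--no-locking` leaves the name undefined when `try_acquire_locks(locks, ...)` is reached. A minimal sketch of the pattern that avoids this, written as a hypothetical helper using the names from `conda_build/index.py` below (it assumes `try_acquire_locks` treats an empty list as acquiring nothing):

```python
from conda_build.utils import get_lock, try_acquire_locks


def index_locks_context(dir_path, config, lock=None):
    """Build the lock list unconditionally so the name is always bound."""
    if not lock:
        lock = get_lock(dir_path)

    locks = []              # bound even when locking is disabled
    if config.locking:
        locks.append(lock)

    # With --no-locking this is a with-block over zero locks, so the locked
    # and unlocked paths can share the same indexing code.
    return try_acquire_locks(locks, config.timeout)
```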
<code>
[start of conda_build/index.py]
1 '''
2 Functions related to creating repodata index files.
3 '''
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import bz2
9 import sys
10 import json
11 import tarfile
12 from os.path import isfile, join, getmtime
13
14 from conda_build.utils import file_info, get_lock, try_acquire_locks
15 from .conda_interface import PY3, md5_file
16
17
18 def read_index_tar(tar_path, config, lock):
19 """ Returns the index.json dict inside the given package tarball. """
20 if config.locking:
21 locks = [lock]
22 with try_acquire_locks(locks, config.timeout):
23 with tarfile.open(tar_path) as t:
24 try:
25 return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))
26 except EOFError:
27 raise RuntimeError("Could not extract %s. File probably corrupt."
28 % tar_path)
29 except OSError as e:
30 raise RuntimeError("Could not extract %s (%s)" % (tar_path, e))
31 except tarfile.ReadError:
32 raise RuntimeError("Could not extract metadata from %s. "
33 "File probably corrupt." % tar_path)
34
35
36 def write_repodata(repodata, dir_path, lock, config=None):
37 """ Write updated repodata.json and repodata.json.bz2 """
38 if not config:
39 import conda_build.config
40 config = conda_build.config.config
41 if config.locking:
42 locks = [lock]
43 with try_acquire_locks(locks, config.timeout):
44 data = json.dumps(repodata, indent=2, sort_keys=True)
45 # strip trailing whitespace
46 data = '\n'.join(line.rstrip() for line in data.splitlines())
47 # make sure we have newline at the end
48 if not data.endswith('\n'):
49 data += '\n'
50 with open(join(dir_path, 'repodata.json'), 'w') as fo:
51 fo.write(data)
52 with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:
53 fo.write(bz2.compress(data.encode('utf-8')))
54
55
56 def update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,
57 could_be_mirror=True):
58 """
59 Update all index files in dir_path with changed packages.
60
61 :param verbose: Should detailed status messages be output?
62 :type verbose: bool
63 :param force: Whether to re-index all packages (including those that
64 haven't changed) or not.
65 :type force: bool
66 :param check_md5: Whether to check MD5s instead of mtimes for determining
67 if a package changed.
68 :type check_md5: bool
69 """
70
71 if config.verbose:
72 print("updating index in:", dir_path)
73 index_path = join(dir_path, '.index.json')
74 if not os.path.isdir(dir_path):
75 os.makedirs(dir_path)
76
77 if not lock:
78 lock = get_lock(dir_path)
79
80 if config.locking:
81 locks = [lock]
82
83 with try_acquire_locks(locks, config.timeout):
84 if force:
85 index = {}
86 else:
87 try:
88 mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}
89 with open(index_path, **mode_dict) as fi:
90 index = json.load(fi)
91 except (IOError, ValueError):
92 index = {}
93
94 files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))
95 if could_be_mirror and any(fn.startswith('_license-') for fn in files):
96 sys.exit("""\
97 Error:
98 Indexing a copy of the Anaconda conda package channel is neither
99 necessary nor supported. If you wish to add your own packages,
100 you can do so by adding them to a separate channel.
101 """)
102 for fn in files:
103 path = join(dir_path, fn)
104 if fn in index:
105 if check_md5:
106 if index[fn]['md5'] == md5_file(path):
107 continue
108 elif index[fn]['mtime'] == getmtime(path):
109 continue
110 if config.verbose:
111 print('updating:', fn)
112 d = read_index_tar(path, config, lock=lock)
113 d.update(file_info(path))
114 index[fn] = d
115
116 for fn in files:
117 index[fn]['sig'] = '.' if isfile(join(dir_path, fn + '.sig')) else None
118
119 if remove:
120 # remove files from the index which are not on disk
121 for fn in set(index) - files:
122 if config.verbose:
123 print("removing:", fn)
124 del index[fn]
125
126 # Deal with Python 2 and 3's different json module type reqs
127 mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}
128 with open(index_path, **mode_dict) as fo:
129 json.dump(index, fo, indent=2, sort_keys=True, default=str)
130
131 # --- new repodata
132 for fn in index:
133 info = index[fn]
134 for varname in 'arch', 'platform', 'mtime', 'ucs':
135 try:
136 del info[varname]
137 except KeyError:
138 pass
139
140 if 'requires' in info and 'depends' not in info:
141 info['depends'] = info['requires']
142
143 repodata = {'packages': index, 'info': {}}
144 write_repodata(repodata, dir_path, lock=lock, config=config)
145
[end of conda_build/index.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_build/index.py b/conda_build/index.py
--- a/conda_build/index.py
+++ b/conda_build/index.py
@@ -77,8 +77,9 @@
if not lock:
lock = get_lock(dir_path)
+ locks = []
if config.locking:
- locks = [lock]
+ locks.append(lock)
with try_acquire_locks(locks, config.timeout):
if force:
| {"golden_diff": "diff --git a/conda_build/index.py b/conda_build/index.py\n--- a/conda_build/index.py\n+++ b/conda_build/index.py\n@@ -77,8 +77,9 @@\n if not lock:\n lock = get_lock(dir_path)\n \n+ locks = []\n if config.locking:\n- locks = [lock]\n+ locks.append(lock)\n \n with try_acquire_locks(locks, config.timeout):\n if force:\n", "issue": "UnboundLocalError with --skip-existing and --no-locking flags\nHit this today on conda-build 2.1.2. Also tried with the tip of master and I get the same result. For reproduciblity, this is the output for trying to build the conda.recipe folder inside of conda-build itself:\r\n\r\n```\r\n$ conda build conda.recipe --no-locking --skip-existing master :: 1h :: \u2b22\r\nCloning into '/home/edill/miniconda/conda-bld/conda.recipe_1485803296268/work'...\r\ndone.\r\ncheckout: 'HEAD'\r\nYour branch is up-to-date with 'origin/_conda_cache_origin_head'.\r\n==> git log -n1 <==\r\n\r\ncommit 6922ec3ed1afc287a4cd7f3872572f2bef89d892\r\nMerge: 837fbc8 c82ea9b\r\nAuthor: Mike Sarahan <[email protected]>\r\nDate: Mon Jan 30 11:38:01 2017 -0600\r\n\r\n Merge pull request #1704 from jerowe/feature/fix-perl-build\r\n\r\n adding some fixes to cpan skeleton\r\n\r\n\r\n==> git describe --tags --dirty <==\r\n\r\n2.1.2-20-g6922ec3\r\n\r\n\r\n==> git status <==\r\n\r\nOn branch _conda_cache_origin_head\r\nYour branch is up-to-date with 'origin/_conda_cache_origin_head'.\r\n\r\nnothing to commit, working directory clean\r\n\r\n\r\nupdating index in: /home/edill/miniconda/conda-bld/linux-64\r\nTraceback (most recent call last):\r\n File \"/home/edill/miniconda/bin/conda-build\", line 11, in <module>\r\n load_entry_point('conda-build', 'console_scripts', 'conda-build')()\r\n File \"/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py\", line 322, in main\r\n execute(sys.argv[1:])\r\n File \"/home/edill/dev/conda/conda-build/conda_build/cli/main_build.py\", line 313, in execute\r\n noverify=args.no_verify)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/api.py\", line 97, in build\r\n need_source_download=need_source_download, config=config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 1478, in build_tree\r\n config=config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 928, in build\r\n package_exists = is_package_built(m, config)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/build.py\", line 1633, in is_package_built\r\n update_index(d, config, could_be_mirror=False)\r\n File \"/home/edill/dev/conda/conda-build/conda_build/index.py\", line 83, in update_index\r\n with try_acquire_locks(locks, config.timeout):\r\nUnboundLocalError: local variable 'locks' referenced before assignment\r\n```\r\n\r\nAnd some debug info\r\n\r\n```\r\n$ conda info \r\nCurrent conda install:\r\n\r\n platform : linux-64\r\n conda version : 4.2.13\r\n conda is private : False\r\n conda-env version : 4.2.13\r\n conda-build version : 2.1.2+20.g6922ec3\r\n python version : 3.5.3.final.0\r\n requests version : 2.13.0\r\n root environment : /home/edill/miniconda (writable)\r\n default environment : /home/edill/miniconda\r\n envs directories : /home/edill/miniconda/envs\r\n package cache : /home/edill/miniconda/pkgs\r\n channel URLs : ...\r\n config file : /home/edill/.condarc\r\n offline mode : False\r\n```\r\n\n", "before_files": [{"content": "'''\nFunctions related to creating repodata index files.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport bz2\nimport sys\nimport 
json\nimport tarfile\nfrom os.path import isfile, join, getmtime\n\nfrom conda_build.utils import file_info, get_lock, try_acquire_locks\nfrom .conda_interface import PY3, md5_file\n\n\ndef read_index_tar(tar_path, config, lock):\n \"\"\" Returns the index.json dict inside the given package tarball. \"\"\"\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n with tarfile.open(tar_path) as t:\n try:\n return json.loads(t.extractfile('info/index.json').read().decode('utf-8'))\n except EOFError:\n raise RuntimeError(\"Could not extract %s. File probably corrupt.\"\n % tar_path)\n except OSError as e:\n raise RuntimeError(\"Could not extract %s (%s)\" % (tar_path, e))\n except tarfile.ReadError:\n raise RuntimeError(\"Could not extract metadata from %s. \"\n \"File probably corrupt.\" % tar_path)\n\n\ndef write_repodata(repodata, dir_path, lock, config=None):\n \"\"\" Write updated repodata.json and repodata.json.bz2 \"\"\"\n if not config:\n import conda_build.config\n config = conda_build.config.config\n if config.locking:\n locks = [lock]\n with try_acquire_locks(locks, config.timeout):\n data = json.dumps(repodata, indent=2, sort_keys=True)\n # strip trailing whitespace\n data = '\\n'.join(line.rstrip() for line in data.splitlines())\n # make sure we have newline at the end\n if not data.endswith('\\n'):\n data += '\\n'\n with open(join(dir_path, 'repodata.json'), 'w') as fo:\n fo.write(data)\n with open(join(dir_path, 'repodata.json.bz2'), 'wb') as fo:\n fo.write(bz2.compress(data.encode('utf-8')))\n\n\ndef update_index(dir_path, config, force=False, check_md5=False, remove=True, lock=None,\n could_be_mirror=True):\n \"\"\"\n Update all index files in dir_path with changed packages.\n\n :param verbose: Should detailed status messages be output?\n :type verbose: bool\n :param force: Whether to re-index all packages (including those that\n haven't changed) or not.\n :type force: bool\n :param check_md5: Whether to check MD5s instead of mtimes for determining\n if a package changed.\n :type check_md5: bool\n \"\"\"\n\n if config.verbose:\n print(\"updating index in:\", dir_path)\n index_path = join(dir_path, '.index.json')\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n if not lock:\n lock = get_lock(dir_path)\n\n if config.locking:\n locks = [lock]\n\n with try_acquire_locks(locks, config.timeout):\n if force:\n index = {}\n else:\n try:\n mode_dict = {'mode': 'r', 'encoding': 'utf-8'} if PY3 else {'mode': 'rb'}\n with open(index_path, **mode_dict) as fi:\n index = json.load(fi)\n except (IOError, ValueError):\n index = {}\n\n files = set(fn for fn in os.listdir(dir_path) if fn.endswith('.tar.bz2'))\n if could_be_mirror and any(fn.startswith('_license-') for fn in files):\n sys.exit(\"\"\"\\\n Error:\n Indexing a copy of the Anaconda conda package channel is neither\n necessary nor supported. If you wish to add your own packages,\n you can do so by adding them to a separate channel.\n \"\"\")\n for fn in files:\n path = join(dir_path, fn)\n if fn in index:\n if check_md5:\n if index[fn]['md5'] == md5_file(path):\n continue\n elif index[fn]['mtime'] == getmtime(path):\n continue\n if config.verbose:\n print('updating:', fn)\n d = read_index_tar(path, config, lock=lock)\n d.update(file_info(path))\n index[fn] = d\n\n for fn in files:\n index[fn]['sig'] = '.' 
if isfile(join(dir_path, fn + '.sig')) else None\n\n if remove:\n # remove files from the index which are not on disk\n for fn in set(index) - files:\n if config.verbose:\n print(\"removing:\", fn)\n del index[fn]\n\n # Deal with Python 2 and 3's different json module type reqs\n mode_dict = {'mode': 'w', 'encoding': 'utf-8'} if PY3 else {'mode': 'wb'}\n with open(index_path, **mode_dict) as fo:\n json.dump(index, fo, indent=2, sort_keys=True, default=str)\n\n # --- new repodata\n for fn in index:\n info = index[fn]\n for varname in 'arch', 'platform', 'mtime', 'ucs':\n try:\n del info[varname]\n except KeyError:\n pass\n\n if 'requires' in info and 'depends' not in info:\n info['depends'] = info['requires']\n\n repodata = {'packages': index, 'info': {}}\n write_repodata(repodata, dir_path, lock=lock, config=config)\n", "path": "conda_build/index.py"}]} | 2,974 | 99 |